From 77f7bb08afff911b669a5285c4106932acbe9af9 Mon Sep 17 00:00:00 2001 From: Nick Craig-Wood Date: Mon, 11 Sep 2023 15:59:44 +0100 Subject: [PATCH] Version v1.64.0 --- MANUAL.html | 29365 ++++++++----- MANUAL.md | 5683 ++- MANUAL.txt | 30877 +++++++------ bin/make_manual.py | 1 + docs/content/azureblob.md | 7 +- docs/content/b2.md | 24 +- docs/content/box.md | 22 + docs/content/changelog.md | 134 + docs/content/commands/rclone.md | 34 +- docs/content/commands/rclone_bisync.md | 32 +- docs/content/commands/rclone_copy.md | 7 +- docs/content/commands/rclone_copyto.md | 7 +- docs/content/commands/rclone_mount.md | 31 +- docs/content/commands/rclone_move.md | 7 +- docs/content/commands/rclone_moveto.md | 7 +- docs/content/commands/rclone_ncdu.md | 1 + docs/content/commands/rclone_rmdirs.md | 5 +- docs/content/commands/rclone_selfupdate.md | 9 +- docs/content/commands/rclone_serve_dlna.md | 31 +- docs/content/commands/rclone_serve_docker.md | 31 +- docs/content/commands/rclone_serve_ftp.md | 31 +- docs/content/commands/rclone_serve_http.md | 31 +- docs/content/commands/rclone_serve_sftp.md | 31 +- docs/content/commands/rclone_serve_webdav.md | 31 +- docs/content/commands/rclone_sync.md | 7 +- docs/content/commands/rclone_test_info.md | 1 + docs/content/crypt.md | 2 +- docs/content/docs.md | 2 +- docs/content/drive.md | 30 +- docs/content/flags.md | 34 +- docs/content/ftp.md | 18 + docs/content/jottacloud.md | 67 + docs/content/local.md | 4 +- docs/content/mailru.md | 63 + docs/content/premiumizeme.md | 63 + docs/content/protondrive.md | 66 +- docs/content/putio.md | 67 + docs/content/rc.md | 54 +- docs/content/s3.md | 150 +- docs/content/sftp.md | 75 + docs/content/sharefile.md | 63 + rclone.1 | 38665 ++++++++--------- 42 files changed, 59648 insertions(+), 46222 deletions(-) diff --git a/MANUAL.html b/MANUAL.html index 7d68f62c5..73fda2731 100644 --- a/MANUAL.html +++ b/MANUAL.html @@ -81,7 +81,7 @@

rclone(1) User Manual

Nick Craig-Wood

-

Jun 30, 2023

+

Sep 11, 2023

Rclone syncs your files to cloud storage

rclone logo

@@ -95,7 +95,7 @@
  • Donate.
  • About rclone

    -

    Rclone is a command-line program to manage files on cloud storage. It is a feature-rich alternative to cloud vendors' web storage interfaces. Over 40 cloud storage products support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols.

    +

    Rclone is a command-line program to manage files on cloud storage. It is a feature-rich alternative to cloud vendors' web storage interfaces. Over 70 cloud storage products support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols.

    Rclone has powerful cloud equivalents to the unix commands rsync, cp, mv, mount, ls, ncdu, tree, rm, and cat. Rclone's familiar syntax includes shell pipeline support, and --dry-run protection. It is used at the command line, in scripts or via its API.

    Users call rclone "The Swiss army knife of cloud storage", and "Technology indistinguishable from magic".

    Rclone really looks after your data. It preserves timestamps and verifies checksums at all times. Transfers over limited bandwidth; intermittent connections, or subject to quota can be restarted, from the last good file transferred. You can check the integrity of your files. Where possible, rclone employs server-side transfers to minimise local bandwidth use and transfers from one provider to another without using local disk.

    @@ -168,6 +168,7 @@
  • IDrive e2
  • IONOS Cloud
  • Koofr
  • +
  • Leviia Object Storage
  • Liara Object Storage
  • Mail.ru Cloud
  • Memset Memstore
  • @@ -189,8 +190,10 @@
  • PikPak
  • premiumize.me
  • put.io
  • +
  • Proton Drive
  • QingStor
  • Qiniu Cloud Object Storage (Kodo)
  • +
  • Quatrix by Maytech
  • Rackspace Cloud Files
  • rsync.net
  • Scaleway
  • @@ -202,6 +205,7 @@
  • SMB / CIFS
  • StackPath
  • Storj
  • +
  • Synology
  • SugarSync
  • Tencent Cloud Object Storage (COS)
  • Uptobox
  • @@ -242,6 +246,7 @@

    See below for some expanded Linux / macOS / Windows instructions.

    See the usage docs for how to use rclone, or run rclone -h.

    Already installed rclone can be easily updated to the latest version using the rclone selfupdate command.

    +

    See the release signing docs for how to verify signatures on the release.

    Script installation

    To install rclone on Linux/macOS/BSD systems, run:

    sudo -v ; curl https://rclone.org/install.sh | sudo bash
    @@ -384,6 +389,21 @@ docker run --rm \ mount dropbox:Photos /data/mount & ls ~/data/mount kill %1 +

    Snap installation

    +

    Get it from the Snap Store

    +

    Make sure you have Snapd installed

    +
    $ sudo snap install rclone
    +

Due to the strict confinement of Snap, the rclone snap cannot access the real /home/$USER/.config/rclone directory; the default config path is as below.

Default config directory: /home/$USER/snap/rclone/current/.config/rclone

Note: Due to the strict confinement of Snap, the rclone mount feature is not supported.

    +

    If mounting is wanted, either install a precompiled binary or enable the relevant option when installing from source.

    +

Note that this snap is controlled by a community maintainer, not the rclone developers, so it may be out of date. Its current version is shown below.

    +

[rclone snap version badge]

    Source installation

Make sure you have git and Go installed. Go version 1.17 or newer is required; the latest release is recommended. You can get it from your package manager, or download it from golang.org/dl. Then you can run the following:

    git clone https://github.com/rclone/rclone.git
    @@ -499,7 +519,9 @@ go build
  • PikPak
  • premiumize.me
  • put.io
  • +
  • Proton Drive
  • QingStor
  • +
  • Quatrix by Maytech
  • Seafile
  • SFTP
  • Sia
  • @@ -533,18 +555,20 @@ rclone sync --interactive /local/path remote:path # syncs /local/path to the rem

    Options

      -h, --help   help for config

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    The default number of parallel checks is 8. See the --checkers=N option for more information.
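For example, to run the check with more parallelism (hypothetical value), increase the global --checkers flag:

rclone cryptcheck --checkers=16 remote:path cryptedremote:path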

    rclone cryptcheck remote:path cryptedremote:path [flags]
    -

    Options

    +

    Options

          --combined string         Make a combined report of changes to this file
           --differ string           Report all non-matching files to this file
           --error string            Report all files with errors (hashing or reading) to this file
    @@ -1615,14 +2315,45 @@ if src is directory
           --missing-on-dst string   Report all files missing from the destination to this file
           --missing-on-src string   Report all files missing from the source to this file
           --one-way                 Check one way only, source files must exist on remote
    +

    Check Options

    +

    Flags used for rclone check.

    +
          --max-backlog int   Maximum number of objects in sync or check backlog (default 10000)
    +

    Filter Options

    +

    Flags for filtering directory listings.

    +
          --delete-excluded                     Delete files on dest excluded from sync
    +      --exclude stringArray                 Exclude files matching pattern
    +      --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
    +      --exclude-if-present stringArray      Exclude directories if filename is present
    +      --files-from stringArray              Read list of source-file names from file (use - to read from stdin)
    +      --files-from-raw stringArray          Read list of source-file names from file without any processing of lines (use - to read from stdin)
    +  -f, --filter stringArray                  Add a file filtering rule
    +      --filter-from stringArray             Read file filtering patterns from a file (use - to read from stdin)
    +      --ignore-case                         Ignore case in filters (case insensitive)
    +      --include stringArray                 Include files matching pattern
    +      --include-from stringArray            Read file include patterns from file (use - to read from stdin)
    +      --max-age Duration                    Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    +      --max-depth int                       If set limits the recursion depth to this (default -1)
    +      --max-size SizeSuffix                 Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
    +      --metadata-exclude stringArray        Exclude metadatas matching pattern
    +      --metadata-exclude-from stringArray   Read metadata exclude patterns from file (use - to read from stdin)
    +      --metadata-filter stringArray         Add a metadata filtering rule
    +      --metadata-filter-from stringArray    Read metadata filtering patterns from a file (use - to read from stdin)
    +      --metadata-include stringArray        Include metadatas matching pattern
    +      --metadata-include-from stringArray   Read metadata include patterns from file (use - to read from stdin)
    +      --min-age Duration                    Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    +      --min-size SizeSuffix                 Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
    +

    Listing Options

    +

    Flags for listing directories.

    +
          --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
    +      --fast-list           Use recursive list if available; uses more memory but fewer transactions

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    rclone cryptdecode

    Cryptdecode returns unencrypted file names.

    -

    Synopsis

    +

    Synopsis

    rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items.

    If you supply the --reverse flag, it will return encrypted file names.

Use it like this:

    @@ -1631,34 +2362,39 @@ if src is directory rclone cryptdecode --reverse encryptedremote: filename1 filename2

    Another way to accomplish this is by using the rclone backend encode (or decode) command. See the documentation on the crypt overlay for more info.

    rclone cryptdecode encryptedremote: encryptedfilename [flags]
    -

    Options

    +

    Options

      -h, --help      help for cryptdecode
           --reverse   Reverse cryptdecode, encrypts filenames

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    rclone deletefile

    Remove a single file from remote.

    -

    Synopsis

    +

    Synopsis

Remove a single file from remote. Unlike delete, it cannot be used to remove a directory, and it doesn't obey include/exclude filters - if the specified file exists, it will always be removed.
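For example, to remove a single file (hypothetical path):

rclone deletefile remote:path/to/file.txt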

    rclone deletefile remote:path [flags]
    -

    Options

    +

    Options

      -h, --help   help for deletefile
    +

    Important Options

    +

    Important flags useful for most commands.

    +
      -n, --dry-run         Do a trial run with no permanent changes
    +  -i, --interactive     Enable interactive mode
    +  -v, --verbose count   Print lots more stuff (repeat for more)

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    rclone genautocomplete

    Output completion script for a given shell.

    -

    Synopsis

    +

    Synopsis

    Generates a shell completion script for rclone. Run with --help to list the supported shells.

    -

    Options

    +

    Options

      -h, --help   help for genautocomplete

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    rclone genautocomplete bash

    Output bash completion script for rclone.

    -

    Synopsis

    +

    Synopsis

    Generates a bash shell autocompletion script for rclone.

    This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, e.g.

    sudo rclone genautocomplete bash
    @@ -1676,16 +2412,16 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2

    If you supply a command line argument the script will be written there.

    If output_file is "-", then the output will be written to stdout.

    rclone genautocomplete bash [output_file] [flags]
    -

    Options

    +

    Options

      -h, --help   help for bash

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    rclone genautocomplete fish

    Output fish completion script for rclone.

    -

    Synopsis

    +

    Synopsis

    Generates a fish autocompletion script for rclone.

    This writes to /etc/fish/completions/rclone.fish by default so will probably need to be run with sudo or as root, e.g.

    sudo rclone genautocomplete fish
    @@ -1694,16 +2430,16 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2

    If you supply a command line argument the script will be written there.

    If output_file is "-", then the output will be written to stdout.

    rclone genautocomplete fish [output_file] [flags]
    -

    Options

    +

    Options

      -h, --help   help for fish

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    rclone genautocomplete zsh

    Output zsh completion script for rclone.

    -

    Synopsis

    +

    Synopsis

    Generates a zsh autocompletion script for rclone.

    This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, e.g.

    sudo rclone genautocomplete zsh
    @@ -1712,28 +2448,28 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2

    If you supply a command line argument the script will be written there.

    If output_file is "-", then the output will be written to stdout.

    rclone genautocomplete zsh [output_file] [flags]
    -

    Options

    +

    Options

      -h, --help   help for zsh

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    rclone gendocs

    Output markdown docs for rclone to the directory supplied.

    -

    Synopsis

    +

    Synopsis

    This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.

    rclone gendocs output_directory [flags]
    -

    Options

    +

    Options

      -h, --help   help for gendocs

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    rclone hashsum

    Produces a hashsum file for all the objects in the path.

    -

    Synopsis

    +

    Synopsis

    Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool.

    By default, the hash is requested from the remote. If the hash is not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling any hash for any remote.

    For the MD5 and SHA1 algorithms there are also dedicated commands, md5sum and sha1sum.

    @@ -1754,20 +2490,48 @@ Supported hashes are:
    $ rclone hashsum MD5 remote:path

    Note that hash names are case insensitive and values are output in lower case.

    rclone hashsum <hash> remote:path [flags]
    -

    Options

    +

    Options

          --base64               Output base64 encoded hashsum
       -C, --checkfile string     Validate hashes against a given SUM file instead of printing them
           --download             Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
       -h, --help                 help for hashsum
           --output-file string   Output hashsums to a file rather than the terminal
    +

    Filter Options

    +

    Flags for filtering directory listings.

    +
          --delete-excluded                     Delete files on dest excluded from sync
    +      --exclude stringArray                 Exclude files matching pattern
    +      --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
    +      --exclude-if-present stringArray      Exclude directories if filename is present
    +      --files-from stringArray              Read list of source-file names from file (use - to read from stdin)
    +      --files-from-raw stringArray          Read list of source-file names from file without any processing of lines (use - to read from stdin)
    +  -f, --filter stringArray                  Add a file filtering rule
    +      --filter-from stringArray             Read file filtering patterns from a file (use - to read from stdin)
    +      --ignore-case                         Ignore case in filters (case insensitive)
    +      --include stringArray                 Include files matching pattern
    +      --include-from stringArray            Read file include patterns from file (use - to read from stdin)
    +      --max-age Duration                    Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    +      --max-depth int                       If set limits the recursion depth to this (default -1)
    +      --max-size SizeSuffix                 Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
    +      --metadata-exclude stringArray        Exclude metadatas matching pattern
    +      --metadata-exclude-from stringArray   Read metadata exclude patterns from file (use - to read from stdin)
    +      --metadata-filter stringArray         Add a metadata filtering rule
    +      --metadata-filter-from stringArray    Read metadata filtering patterns from a file (use - to read from stdin)
    +      --metadata-include stringArray        Include metadatas matching pattern
    +      --metadata-include-from stringArray   Read metadata include patterns from file (use - to read from stdin)
    +      --min-age Duration                    Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    +      --min-size SizeSuffix                 Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
    +

    Listing Options

    +

    Flags for listing directories.

    +
          --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
    +      --fast-list           Use recursive list if available; uses more memory but fewer transactions

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    rclone link

    Generate public link to file/folder.

    -

    Synopsis

    +

    Synopsis

    rclone link will create, retrieve or remove a public link to the given file or folder.

    rclone link remote:path/to/file
     rclone link remote:path/to/folder/
    @@ -1777,32 +2541,32 @@ rclone link --expire 1d remote:path/to/file

Use the --unlink flag to remove existing public links to the file or folder. Note that not all backends support the "--unlink" flag - those that don't will just ignore it.

If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always by default be created with the least constraints – e.g. no expiry, no password protection, accessible without an account.

    rclone link remote:path [flags]
    -

    Options

    +

    Options

          --expire Duration   The amount of time that the link will be valid (default off)
       -h, --help              help for link
           --unlink            Remove existing public link to file/folder

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    rclone listremotes

    List all the remotes in the config file and defined in environment variables.

    -

    Synopsis

    +

    Synopsis

    rclone listremotes lists all the available remotes from the config file.

    When used with the --long flag it lists the types too.
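For example, with two hypothetical remotes configured, the output looks along these lines:

$ rclone listremotes --long
gdrive: drive
s3backup: s3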

    rclone listremotes [flags]
    -

    Options

    +

    Options

      -h, --help   help for listremotes
           --long   Show the type as well as names

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    rclone lsf

    List directories and objects in remote:path formatted for parsing.

    -

    Synopsis

    +

    Synopsis

    List the contents of the source path (directories and objects) to standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix.

    Eg

    $ rclone lsf swift:bucket
    @@ -1873,7 +2637,7 @@ rclone copy --files-from-raw new_files /path/to/local remote:path

The other list commands lsd, lsf, lsjson do not recurse by default - use -R to make them recurse.

    Listing a nonexistent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).

    rclone lsf remote:path [flags]
    -

    Options

    +

    Options

          --absolute           Put a leading / in front of path names
           --csv                Output in CSV format
       -d, --dir-slash          Append a slash to directory names (default true)
    @@ -1884,14 +2648,42 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
    -h, --help help for lsf -R, --recursive Recurse into the listing -s, --separator string Separator for the items in the format (default ";") +

    Filter Options

    +

    Flags for filtering directory listings.

    +
          --delete-excluded                     Delete files on dest excluded from sync
    +      --exclude stringArray                 Exclude files matching pattern
    +      --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
    +      --exclude-if-present stringArray      Exclude directories if filename is present
    +      --files-from stringArray              Read list of source-file names from file (use - to read from stdin)
    +      --files-from-raw stringArray          Read list of source-file names from file without any processing of lines (use - to read from stdin)
    +  -f, --filter stringArray                  Add a file filtering rule
    +      --filter-from stringArray             Read file filtering patterns from a file (use - to read from stdin)
    +      --ignore-case                         Ignore case in filters (case insensitive)
    +      --include stringArray                 Include files matching pattern
    +      --include-from stringArray            Read file include patterns from file (use - to read from stdin)
    +      --max-age Duration                    Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    +      --max-depth int                       If set limits the recursion depth to this (default -1)
    +      --max-size SizeSuffix                 Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
    +      --metadata-exclude stringArray        Exclude metadatas matching pattern
    +      --metadata-exclude-from stringArray   Read metadata exclude patterns from file (use - to read from stdin)
    +      --metadata-filter stringArray         Add a metadata filtering rule
    +      --metadata-filter-from stringArray    Read metadata filtering patterns from a file (use - to read from stdin)
    +      --metadata-include stringArray        Include metadatas matching pattern
    +      --metadata-include-from stringArray   Read metadata include patterns from file (use - to read from stdin)
    +      --min-age Duration                    Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    +      --min-size SizeSuffix                 Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
    +

    Listing Options

    +

    Flags for listing directories.

    +
          --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
    +      --fast-list           Use recursive list if available; uses more memory but fewer transactions

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    rclone lsjson

    List directories and objects in the path in JSON format.

    -

    Synopsis

    +

    Synopsis

    List directories and objects in the path in JSON format.

    The output is an array of Items, where each Item looks like this

    {
    @@ -1939,7 +2731,7 @@ rclone copy --files-from-raw new_files /path/to/local remote:path

The other list commands lsd, lsf, lsjson do not recurse by default - use -R to make them recurse.

    Listing a nonexistent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).

    rclone lsjson remote:path [flags]
    -

    Options

    +

    Options

          --dirs-only               Show only directories in the listing
           --encrypted               Show the encrypted names
           --files-only              Show only files in the listing
    @@ -1952,14 +2744,42 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
    --original Show the ID of the underlying Object -R, --recursive Recurse into the listing --stat Just return the info for the pointed to file +

    Filter Options

    +

    Flags for filtering directory listings.

    +
          --delete-excluded                     Delete files on dest excluded from sync
    +      --exclude stringArray                 Exclude files matching pattern
    +      --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
    +      --exclude-if-present stringArray      Exclude directories if filename is present
    +      --files-from stringArray              Read list of source-file names from file (use - to read from stdin)
    +      --files-from-raw stringArray          Read list of source-file names from file without any processing of lines (use - to read from stdin)
    +  -f, --filter stringArray                  Add a file filtering rule
    +      --filter-from stringArray             Read file filtering patterns from a file (use - to read from stdin)
    +      --ignore-case                         Ignore case in filters (case insensitive)
    +      --include stringArray                 Include files matching pattern
    +      --include-from stringArray            Read file include patterns from file (use - to read from stdin)
    +      --max-age Duration                    Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    +      --max-depth int                       If set limits the recursion depth to this (default -1)
    +      --max-size SizeSuffix                 Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
    +      --metadata-exclude stringArray        Exclude metadatas matching pattern
    +      --metadata-exclude-from stringArray   Read metadata exclude patterns from file (use - to read from stdin)
    +      --metadata-filter stringArray         Add a metadata filtering rule
    +      --metadata-filter-from stringArray    Read metadata filtering patterns from a file (use - to read from stdin)
    +      --metadata-include stringArray        Include metadatas matching pattern
    +      --metadata-include-from stringArray   Read metadata include patterns from file (use - to read from stdin)
    +      --min-age Duration                    Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    +      --min-size SizeSuffix                 Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
    +

    Listing Options

    +

    Flags for listing directories.

    +
          --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
    +      --fast-list           Use recursive list if available; uses more memory but fewer transactions

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    rclone mount

    Mount the remote as file system on a mountpoint.

    -

    Synopsis

    +

    Synopsis

    rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.

    First set up your remote using rclone config. Check it works with rclone ls etc.

    On Linux and macOS, you can run mount in either foreground or background (aka daemon) mode. Mount runs in foreground mode by default. Use the --daemon flag to force background mode. On Windows you can run mount in foreground only, the flag is ignored.

    @@ -2118,16 +2938,17 @@ WantedBy=multi-user.target

    These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

    For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

    Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

    -
    --cache-dir string                   Directory rclone will use for caching.
    ---vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
    ---vfs-cache-max-age duration         Max time since last access of objects in the cache (default 1h0m0s)
    ---vfs-cache-max-size SizeSuffix      Max total size of objects in the cache (default off)
    ---vfs-cache-poll-interval duration   Interval to poll the cache for stale objects (default 1m0s)
    ---vfs-write-back duration            Time to writeback files after last use when using cache (default 5s)
    +
    --cache-dir string                     Directory rclone will use for caching.
    +--vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
    +--vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
    +--vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
    +--vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
    +--vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
    +--vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)

    If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

    The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

    Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

    -

    If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.

    +

If using --vfs-cache-max-size or --vfs-cache-min-free-space note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size or --vfs-cache-min-free-space is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.

The --vfs-cache-max-age will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation: s, m, h, d, w.

    You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.
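As a sketch, a mount combining these caching flags might look like this (hypothetical paths and sizes):

rclone mount remote:path /path/to/mountpoint \
    --vfs-cache-mode writes \
    --vfs-cache-max-size 10G \
    --vfs-cache-max-age 24h \
    --cache-dir /var/cache/rclone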

    --vfs-cache-mode off

    @@ -2211,7 +3032,7 @@ WantedBy=multi-user.target

Some backends, most notably S3, do not report the number of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

    WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.

    rclone mount remote:path /path/to/mountpoint [flags]
    -

    Options

    +

    Options

          --allow-non-empty                        Allow mounting over a non-empty directory (not supported on Windows)
           --allow-other                            Allow access to other users (not supported on Windows)
           --allow-root                             Allow access to root user (not supported on Windows)
    @@ -2244,6 +3065,7 @@ WantedBy=multi-user.target
    --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match @@ -2258,14 +3080,38 @@ WantedBy=multi-user.target --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) --volname string Set the volume name (supported on Windows and OSX only) --write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows) +

    Filter Options

    +

    Flags for filtering directory listings.

    +
          --delete-excluded                     Delete files on dest excluded from sync
    +      --exclude stringArray                 Exclude files matching pattern
    +      --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
    +      --exclude-if-present stringArray      Exclude directories if filename is present
    +      --files-from stringArray              Read list of source-file names from file (use - to read from stdin)
    +      --files-from-raw stringArray          Read list of source-file names from file without any processing of lines (use - to read from stdin)
    +  -f, --filter stringArray                  Add a file filtering rule
    +      --filter-from stringArray             Read file filtering patterns from a file (use - to read from stdin)
    +      --ignore-case                         Ignore case in filters (case insensitive)
    +      --include stringArray                 Include files matching pattern
    +      --include-from stringArray            Read file include patterns from file (use - to read from stdin)
    +      --max-age Duration                    Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    +      --max-depth int                       If set limits the recursion depth to this (default -1)
    +      --max-size SizeSuffix                 Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
    +      --metadata-exclude stringArray        Exclude metadatas matching pattern
    +      --metadata-exclude-from stringArray   Read metadata exclude patterns from file (use - to read from stdin)
    +      --metadata-filter stringArray         Add a metadata filtering rule
    +      --metadata-filter-from stringArray    Read metadata filtering patterns from a file (use - to read from stdin)
    +      --metadata-include stringArray        Include metadatas matching pattern
    +      --metadata-include-from stringArray   Read metadata include patterns from file (use - to read from stdin)
    +      --min-age Duration                    Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    +      --min-size SizeSuffix                 Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    rclone moveto

    Move file or directory from source to dest.

    -

    Synopsis

    +

    Synopsis

    If source:path is a file or directory then it moves it to a file or directory named dest:path.

This can be used to rename files or upload single files under a name other than their existing one. If the source is a directory then it acts exactly like the move command.

    So

    @@ -2281,16 +3127,81 @@ if src is directory

    Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.

    Note: Use the -P/--progress flag to view real-time transfer statistics.
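For example, to rename a single file on the remote (hypothetical names):

rclone moveto remote:old-name.txt remote:new-name.txt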

    rclone moveto source:path dest:path [flags]
    -

    Options

    +

    Options

      -h, --help   help for moveto
    +

    Copy Options

    +

    Flags for anything which can Copy a file.

    +
          --check-first                                 Do all the checks before starting transfers
    +  -c, --checksum                                    Check for changes with size & checksum (if available, or fallback to size only).
    +      --compare-dest stringArray                    Include additional comma separated server-side paths during comparison
    +      --copy-dest stringArray                       Implies --compare-dest but also copies files from paths into destination
    +      --cutoff-mode string                          Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
    +      --ignore-case-sync                            Ignore case when synchronizing
    +      --ignore-checksum                             Skip post copy check of checksums
    +      --ignore-existing                             Skip all files that exist on destination
    +      --ignore-size                                 Ignore size when skipping use mod-time or checksum
    +  -I, --ignore-times                                Don't skip files that match size and time - transfer all files
    +      --immutable                                   Do not modify files, fail if existing files have been modified
    +      --inplace                                     Download directly to destination file instead of atomic download to temp/rename
    +      --max-backlog int                             Maximum number of objects in sync or check backlog (default 10000)
    +      --max-duration Duration                       Maximum duration rclone will transfer data for (default 0s)
    +      --max-transfer SizeSuffix                     Maximum size of data to transfer (default off)
    +  -M, --metadata                                    If set, preserve metadata when copying objects
    +      --modify-window Duration                      Max time diff to be considered the same (default 1ns)
    +      --multi-thread-chunk-size SizeSuffix          Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi)
    +      --multi-thread-cutoff SizeSuffix              Use multi-thread downloads for files above this size (default 256Mi)
    +      --multi-thread-streams int                    Number of streams to use for multi-thread downloads (default 4)
    +      --multi-thread-write-buffer-size SizeSuffix   In memory buffer size for writing when in multi-thread mode (default 128Ki)
    +      --no-check-dest                               Don't check the destination, copy regardless
    +      --no-traverse                                 Don't traverse destination file system on copy
    +      --no-update-modtime                           Don't update destination mod-time if files identical
    +      --order-by string                             Instructions on how to order the transfers, e.g. 'size,descending'
    +      --refresh-times                               Refresh the modtime of remote files
    +      --server-side-across-configs                  Allow server-side operations (e.g. copy) to work across different configs
    +      --size-only                                   Skip based on size only, not mod-time or checksum
    +      --streaming-upload-cutoff SizeSuffix          Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
    +  -u, --update                                      Skip files that are newer on the destination
    +

    Important Options

    +

    Important flags useful for most commands.

    +
      -n, --dry-run         Do a trial run with no permanent changes
    +  -i, --interactive     Enable interactive mode
    +  -v, --verbose count   Print lots more stuff (repeat for more)
    +

    Filter Options

    +

    Flags for filtering directory listings.

    +
          --delete-excluded                     Delete files on dest excluded from sync
    +      --exclude stringArray                 Exclude files matching pattern
    +      --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
    +      --exclude-if-present stringArray      Exclude directories if filename is present
    +      --files-from stringArray              Read list of source-file names from file (use - to read from stdin)
    +      --files-from-raw stringArray          Read list of source-file names from file without any processing of lines (use - to read from stdin)
    +  -f, --filter stringArray                  Add a file filtering rule
    +      --filter-from stringArray             Read file filtering patterns from a file (use - to read from stdin)
    +      --ignore-case                         Ignore case in filters (case insensitive)
    +      --include stringArray                 Include files matching pattern
    +      --include-from stringArray            Read file include patterns from file (use - to read from stdin)
    +      --max-age Duration                    Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    +      --max-depth int                       If set limits the recursion depth to this (default -1)
    +      --max-size SizeSuffix                 Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
    +      --metadata-exclude stringArray        Exclude metadatas matching pattern
    +      --metadata-exclude-from stringArray   Read metadata exclude patterns from file (use - to read from stdin)
    +      --metadata-filter stringArray         Add a metadata filtering rule
    +      --metadata-filter-from stringArray    Read metadata filtering patterns from a file (use - to read from stdin)
    +      --metadata-include stringArray        Include metadatas matching pattern
    +      --metadata-include-from stringArray   Read metadata include patterns from file (use - to read from stdin)
    +      --min-age Duration                    Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    +      --min-size SizeSuffix                 Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
    +

    Listing Options

    +

    Flags for listing directories.

    +
          --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
    +      --fast-list           Use recursive list if available; uses more memory but fewer transactions

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    rclone ncdu

    Explore a remote with a text based user interface.

    -

    Synopsis

    +

    Synopsis

    This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?".

To build the user interface it first scans the entire remote given and builds an in-memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along.

    You can interact with the user interface using key presses, press '?' to toggle the help on and off. The supported keys are:

    @@ -2310,6 +3221,7 @@ if src is directory y copy current path to clipboard Y display current path ^L refresh screen (fix screen corruption) + r recalculate file sizes ? to toggle help on and off q/ESC/^c to quit

    Listed files/directories may be prefixed by a one-character flag, some of them combined with a description in brackets at end of line. These flags have the following meaning:

    @@ -2327,16 +3239,44 @@ if src is directory

    Note that it might take some time to delete big files/directories. The UI won't respond in the meantime since the deletion is done synchronously.

    For a non-interactive listing of the remote, see the tree command. To just get the total size of the remote you can also use the size command.

    rclone ncdu remote:path [flags]
    -

    Options

    +

    Options

      -h, --help   help for ncdu
    +

    Filter Options

    +

    Flags for filtering directory listings.

    +
          --delete-excluded                     Delete files on dest excluded from sync
    +      --exclude stringArray                 Exclude files matching pattern
    +      --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
    +      --exclude-if-present stringArray      Exclude directories if filename is present
    +      --files-from stringArray              Read list of source-file names from file (use - to read from stdin)
    +      --files-from-raw stringArray          Read list of source-file names from file without any processing of lines (use - to read from stdin)
    +  -f, --filter stringArray                  Add a file filtering rule
    +      --filter-from stringArray             Read file filtering patterns from a file (use - to read from stdin)
    +      --ignore-case                         Ignore case in filters (case insensitive)
    +      --include stringArray                 Include files matching pattern
    +      --include-from stringArray            Read file include patterns from file (use - to read from stdin)
    +      --max-age Duration                    Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    +      --max-depth int                       If set limits the recursion depth to this (default -1)
    +      --max-size SizeSuffix                 Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
    +      --metadata-exclude stringArray        Exclude metadatas matching pattern
    +      --metadata-exclude-from stringArray   Read metadata exclude patterns from file (use - to read from stdin)
    +      --metadata-filter stringArray         Add a metadata filtering rule
    +      --metadata-filter-from stringArray    Read metadata filtering patterns from a file (use - to read from stdin)
    +      --metadata-include stringArray        Include metadatas matching pattern
    +      --metadata-include-from stringArray   Read metadata include patterns from file (use - to read from stdin)
    +      --min-age Duration                    Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    +      --min-size SizeSuffix                 Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
    +

    Listing Options

    +

    Flags for listing directories.

    +
          --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
    +      --fast-list           Use recursive list if available; uses more memory but fewer transactions

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    rclone obscure

    Obscure password for use in the rclone config file.

    -

    Synopsis

    +

    Synopsis

    In the rclone config file, human-readable passwords are obscured. Obscuring them is done by encrypting them and writing them out in base64. This is not a secure way of encrypting these passwords as rclone can decrypt them - it is to prevent "eyedropping" - namely someone seeing a password in the rclone config file by accident.

    Many equally important things (like access tokens) are not obscured in the config file. However it is very hard to shoulder surf a 64 character hex token.

    This command can also accept a password through STDIN instead of an argument by passing a hyphen as an argument. This will use the first line of STDIN as the password not including the trailing newline.

    @@ -2344,16 +3284,16 @@ if src is directory

    If there is no data on STDIN to read, rclone obscure will default to obfuscating the hyphen itself.

    If you want to encrypt the config file then please use config file encryption - see rclone config for more info.

    rclone obscure password [flags]
    -

    Options

    +

    Options

      -h, --help   help for obscure

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    rclone rc

    Run a command against a running rclone.

    -

    Synopsis

    +

    Synopsis

This runs a command against a running rclone. Use the --url flag to specify a non-default URL to connect on. This can be either a ":port" which is taken to mean "http://localhost:port" or a "host:port" which is taken to mean "http://host:port".

    A username and password can be passed in with --user and --pass.

    Note that --rc-addr, --rc-user, --rc-pass will be read also for --url, --user, --pass.
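For example, to query transfer stats from an rclone listening on a non-default port (hypothetical port number):

rclone rc --url :5573 core/stats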

    @@ -2372,7 +3312,7 @@ if src is directory
    rclone rc --loopback operations/about fs=/

    Use rclone rc to see a list of all possible commands.

    rclone rc commands parameter [flags]
    -

    Options

    +

    Options

      -a, --arg stringArray   Argument placed in the "arg" array
       -h, --help              help for rc
           --json string       Input JSON - use instead of key=value args
    @@ -2383,13 +3323,13 @@ if src is directory
           --url string        URL to connect to rclone remote control (default "http://localhost:5572/")
           --user string       Username to use to rclone remote control

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    rclone rcat

    Copies standard input to file on remote.

    -

    Synopsis

    +

    Synopsis

    rclone rcat reads from standard input (stdin) and copies it to a single remote file.

    echo "hello world" | rclone rcat remote:path/to/file
     ffmpeg - | rclone rcat remote:path/to/file
    @@ -2399,17 +3339,22 @@ ffmpeg - | rclone rcat remote:path/to/file

--size should be the exact size of the input stream in bytes. If the size of the stream differs from the --size passed in then the transfer will likely fail.

Note that the upload cannot be retried because the data is not stored. If the backend supports multipart uploading then individual chunks can be retried. If you need to transfer a lot of data, you may be better off caching it locally and then using rclone move to transfer it to the destination, which can use retries.

    rclone rcat remote:path [flags]
    -

    Options

    +

    Options

      -h, --help       help for rcat
           --size int   File size hint to preallocate (default -1)
    +

    Important Options

    +

    Important flags useful for most commands.

    +
      -n, --dry-run         Do a trial run with no permanent changes
    +  -i, --interactive     Enable interactive mode
    +  -v, --verbose count   Print lots more stuff (repeat for more)

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    rclone rcd

    Run rclone listening to remote control commands only.

    -

    Synopsis

    +

    Synopsis

    This runs rclone so that it only listens to remote control commands.

    This is useful if you are controlling rclone via the rc API.

    If you pass in a path to a directory, rclone will serve that directory for GET requests on the URL passed in. It will also open the URL in the browser when rclone is run.

    @@ -2519,42 +3464,78 @@ htpasswd -B htpasswd anotherUser

    Use --rc-realm to set the authentication realm.

    Use --rc-salt to change the password hashing salt from the default.

    rclone rcd <path to files to serve>* [flags]
    -

    Options

    +

    Options

      -h, --help   help for rcd

RC Options

Flags to control the Remote Control API.

      --rc                                 Enable the remote control server
      --rc-addr stringArray                IPaddress:Port or :Port to bind server to (default [localhost:5572])
      --rc-allow-origin string             Origin which cross-domain request (CORS) can be executed from
      --rc-baseurl string                  Prefix for URLs - leave blank for root
      --rc-cert string                     TLS PEM key (concatenation of certificate and CA certificate)
      --rc-client-ca string                Client certificate authority to verify clients with
      --rc-enable-metrics                  Enable prometheus metrics on /metrics
      --rc-files string                    Path to local files to serve on the HTTP server
      --rc-htpasswd string                 A htpasswd file - if not provided no authentication is done
      --rc-job-expire-duration Duration    Expire finished async jobs older than this value (default 1m0s)
      --rc-job-expire-interval Duration    Interval to check for expired async jobs (default 10s)
      --rc-key string                      TLS PEM Private key
      --rc-max-header-bytes int            Maximum size of request header (default 4096)
      --rc-min-tls-version string          Minimum TLS version that is acceptable (default "tls1.0")
      --rc-no-auth                         Don't require auth for certain methods
      --rc-pass string                     Password for authentication
      --rc-realm string                    Realm for authentication
      --rc-salt string                     Password hashing salt (default "dlPL2MqE")
      --rc-serve                           Enable the serving of remote objects
      --rc-server-read-timeout Duration    Timeout for server reading data (default 1h0m0s)
      --rc-server-write-timeout Duration   Timeout for server writing data (default 1h0m0s)
      --rc-template string                 User-specified template
      --rc-user string                     User name for authentication
      --rc-web-fetch-url string            URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
      --rc-web-gui                         Launch WebGUI on localhost
      --rc-web-gui-force-update            Force update to latest version of web gui
      --rc-web-gui-no-open-browser         Don't open the browser automatically
      --rc-web-gui-update                  Check and update to latest version of web gui

    See the global flags page for global options not listed here.

SEE ALSO

    rclone rmdirs

    Remove empty directories under the path.

Synopsis

This recursively removes any empty directories (including directories that only contain empty directories) that it finds under the path. The root path itself will also be removed if it is empty, unless you supply the --leave-root flag.

Use the rmdir command to delete just the empty directory given by path, without recursing.

This is useful for tidying up remotes where rclone has left a lot of empty directories. For example, the delete command will delete files but leave the directory structure (unless used with option --rmdirs).

This will delete --checkers directories concurrently, so if you have thousands of empty directories consider increasing this number.

To delete a path and any objects in it, use the purge command.
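
For example, a sketch that tidies a remote while keeping the root, raising the concurrency for a remote with many empty directories (the value is illustrative):

rclone rmdirs remote:path --leave-root --checkers 16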

    rclone rmdirs remote:path [flags]

Options

      -h, --help         help for rmdirs
           --leave-root   Do not remove root directory if empty

Important Options

Important flags useful for most commands.

  -n, --dry-run         Do a trial run with no permanent changes
  -i, --interactive     Enable interactive mode
  -v, --verbose count   Print lots more stuff (repeat for more)

    See the global flags page for global options not listed here.

SEE ALSO

    rclone selfupdate

    Update the rclone binary.

Synopsis

This command downloads the latest release of rclone and replaces the currently running binary. The download is verified with a hashsum and cryptographically signed signature; see the release signing docs for details.

If used without flags (or with the implied --stable flag), this command will install the latest stable release. However, some issues may be fixed (or features added) only in the latest beta release. In such cases you should run the command with the --beta flag, i.e. rclone selfupdate --beta. You can check in advance what version would be installed by adding the --check flag, then repeat the command without it when you are satisfied.

Sometimes the rclone team may recommend a specific beta or stable rclone release to troubleshoot your issue or add a bleeding edge feature. The --version VER flag, if given, will update to that specific version instead of the latest one. If you omit the micro version from VER (for example 1.53), the latest matching micro version will be used.
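
For example, to preview the latest beta with the --check flag described above and then install it:

rclone selfupdate --beta --check
rclone selfupdate --beta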

Upon successful update rclone will print a message that contains the previous version number. You will need it if you later decide to revert your update: note the previous version and run rclone selfupdate [--beta] OLDVER. If the old version contains only dots and digits (for example v1.54.0) then it's a stable release so you won't need the --beta flag. Beta releases have additional version information similar to v1.54.0-beta.5111.06f1c0c61. (If you are a developer and use a locally built rclone, the version number will end with -DEV; you will have to rebuild it as it obviously can't be distributed.)

If you previously installed rclone via a package manager, the package may include local documentation or configure services. You may wish to update with the flag --package deb or --package rpm (whichever is correct for your OS) to update these too. This command with the default --package zip will update only the rclone executable, so the local manual may become out of date afterwards.

The rclone mount command may or may not support extended FUSE options depending on the build and OS. selfupdate will refuse to update if the capability would be discarded.

    Note: Windows forbids deletion of a currently running executable so this command will rename the old executable to 'rclone.old.exe' upon success.

    Please note that this command was not available before rclone version 1.55. If it fails for you with the message unknown command "selfupdate" then you will need to update manually following the install instructions located at https://rclone.org/install/

    rclone selfupdate [flags]

Options

          --beta             Install beta release
           --check            Check for latest release, do not download
       -h, --help             help for selfupdate
    @@ -2563,21 +3544,21 @@ htpasswd -B htpasswd anotherUser
      --stable           Install stable release (this is the default)
      --version string   Install the given rclone version (default: latest)

    See the global flags page for global options not listed here.

SEE ALSO

    rclone serve

    Serve a remote over a protocol.

Synopsis

    Serve a remote over a given protocol. Requires the use of a subcommand to specify the protocol, e.g.

    rclone serve http remote:

    Each subcommand has its own options which you can see in their help.

    rclone serve <protocol> [opts] <remote> [flags]

Options

      -h, --help   help for serve

    See the global flags page for global options not listed here.

SEE ALSO

    rclone serve dlna

    Serve remote:path over DLNA

Synopsis

    Run a DLNA media server for media stored in an rclone remote. Many devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast packets (SSDP) and will thus only work on LANs.

    Rclone will list all files present in the remote, without filtering based on media formats or file extensions. Additionally, there is no media transcoding support. This means that some players might show files that they are not able to play back correctly.
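
For example, a minimal invocation on the default port (the remote name is illustrative):

rclone serve dlna remote:media --addr :7879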

    Server options

    @@ -2621,16 +3602,17 @@ htpasswd -B htpasswd anotherUser

    These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

    For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

    Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

--cache-dir string                     Directory rclone will use for caching.
--vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
--vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)

    If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

    The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

    Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size or --vfs-cache-min-free-space note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size or --vfs-cache-min-free-space is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.

The --vfs-cache-max-age will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation: s, m, h, d, w.

    You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.
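
For example, a sketch giving this instance its own cache hierarchy (the cache directory is illustrative):

rclone serve dlna remote:media --vfs-cache-mode writes --cache-dir /var/cache/rclone-dlna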

    --vfs-cache-mode off

    @@ -2714,7 +3696,7 @@ htpasswd -B htpasswd anotherUser

Some backends, most notably S3, do not report the number of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

    WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.

    rclone serve dlna remote:path [flags]

Options

          --addr string                            The ip:port or :port to bind the DLNA http server to (default ":7879")
           --announce-interval Duration             The interval between SSDP announcements (default 12m0s)
           --dir-cache-time Duration                Time to cache directory entries for (default 5m0s)
    @@ -2734,6 +3716,7 @@ htpasswd -B htpasswd anotherUser
      --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
      --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
      --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration       Interval to poll the cache for stale objects (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
@@ -2746,14 +3729,38 @@ htpasswd -B htpasswd anotherUser
      --vfs-used-is-size                       Use the rclone size algorithm for Used size
      --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)

    Filter Options

Flags for filtering directory listings.

      --delete-excluded                     Delete files on dest excluded from sync
      --exclude stringArray                 Exclude files matching pattern
      --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
      --exclude-if-present stringArray      Exclude directories if filename is present
      --files-from stringArray              Read list of source-file names from file (use - to read from stdin)
      --files-from-raw stringArray          Read list of source-file names from file without any processing of lines (use - to read from stdin)
  -f, --filter stringArray                  Add a file filtering rule
      --filter-from stringArray             Read file filtering patterns from a file (use - to read from stdin)
      --ignore-case                         Ignore case in filters (case insensitive)
      --include stringArray                 Include files matching pattern
      --include-from stringArray            Read file include patterns from file (use - to read from stdin)
      --max-age Duration                    Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-depth int                       If set limits the recursion depth to this (default -1)
      --max-size SizeSuffix                 Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
      --metadata-exclude stringArray        Exclude metadatas matching pattern
      --metadata-exclude-from stringArray   Read metadata exclude patterns from file (use - to read from stdin)
      --metadata-filter stringArray         Add a metadata filtering rule
      --metadata-filter-from stringArray    Read metadata filtering patterns from a file (use - to read from stdin)
      --metadata-include stringArray        Include metadatas matching pattern
      --metadata-include-from stringArray   Read metadata include patterns from file (use - to read from stdin)
      --min-age Duration                    Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size SizeSuffix                 Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)

    See the global flags page for global options not listed here.

SEE ALSO

    rclone serve docker

    Serve any remote on docker's volume plugin API.

Synopsis

This command implements the Docker volume plugin API, allowing docker to use rclone as a data storage mechanism for various cloud providers. rclone provides a docker volume plugin based on it.

To create a docker plugin, one must create a Unix or TCP socket that Docker will look for when you use the plugin. The plugin then listens for commands from the docker daemon and runs the corresponding code when necessary. Docker plugins can run as a managed plugin under control of the docker daemon or as an independent native service. For testing, you can just run it directly from the command line, for example:

    sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv
    @@ -2785,16 +3792,17 @@ htpasswd -B htpasswd anotherUser

    These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

    For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

    Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

--cache-dir string                     Directory rclone will use for caching.
--vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
--vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)

    If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

    The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

    Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size or --vfs-cache-min-free-space note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size or --vfs-cache-min-free-space is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.

The --vfs-cache-max-age will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation: s, m, h, d, w.

    You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.

    --vfs-cache-mode off

    @@ -2878,7 +3886,7 @@ htpasswd -B htpasswd anotherUser

Some backends, most notably S3, do not report the number of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

    WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.

    rclone serve docker [flags]

Options

          --allow-non-empty                        Allow mounting over a non-empty directory (not supported on Windows)
           --allow-other                            Allow access to other users (not supported on Windows)
           --allow-root                             Allow access to root user (not supported on Windows)
    @@ -2916,6 +3924,7 @@ htpasswd -B htpasswd anotherUser
      --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
      --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
      --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration       Interval to poll the cache for stale objects (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
@@ -2930,14 +3939,38 @@ htpasswd -B htpasswd anotherUser
      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)
      --volname string                         Set the volume name (supported on Windows and OSX only)
      --write-back-cache                       Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)

    Filter Options

Flags for filtering directory listings.

      --delete-excluded                     Delete files on dest excluded from sync
      --exclude stringArray                 Exclude files matching pattern
      --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
      --exclude-if-present stringArray      Exclude directories if filename is present
      --files-from stringArray              Read list of source-file names from file (use - to read from stdin)
      --files-from-raw stringArray          Read list of source-file names from file without any processing of lines (use - to read from stdin)
  -f, --filter stringArray                  Add a file filtering rule
      --filter-from stringArray             Read file filtering patterns from a file (use - to read from stdin)
      --ignore-case                         Ignore case in filters (case insensitive)
      --include stringArray                 Include files matching pattern
      --include-from stringArray            Read file include patterns from file (use - to read from stdin)
      --max-age Duration                    Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-depth int                       If set limits the recursion depth to this (default -1)
      --max-size SizeSuffix                 Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
      --metadata-exclude stringArray        Exclude metadatas matching pattern
      --metadata-exclude-from stringArray   Read metadata exclude patterns from file (use - to read from stdin)
      --metadata-filter stringArray         Add a metadata filtering rule
      --metadata-filter-from stringArray    Read metadata filtering patterns from a file (use - to read from stdin)
      --metadata-include stringArray        Include metadatas matching pattern
      --metadata-include-from stringArray   Read metadata include patterns from file (use - to read from stdin)
      --min-age Duration                    Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size SizeSuffix                 Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)

    See the global flags page for global options not listed here.

SEE ALSO

    rclone serve ftp

    Serve remote:path over FTP.

Synopsis

Run a basic FTP server to serve a remote over the FTP protocol. This can be viewed with an FTP client or you can make a remote of type FTP to read and write it.

    Server options

    Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
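
For example, a sketch serving on the default port with credentials (the values are illustrative):

rclone serve ftp remote:path --addr localhost:2121 --user myuser --pass mypassword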

    @@ -2969,16 +4002,17 @@ htpasswd -B htpasswd anotherUser

    These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

    For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

    Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

--cache-dir string                     Directory rclone will use for caching.
--vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
--vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)

    If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

    The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

    Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size or --vfs-cache-min-free-space note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size or --vfs-cache-min-free-space is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.

The --vfs-cache-max-age will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation: s, m, h, d, w.

    You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.

    --vfs-cache-mode off

    @@ -3092,7 +4126,7 @@ htpasswd -B htpasswd anotherUser

Note that an internal cache is keyed on user, so only use that for configuration; don't use pass or public_key. This also means that if a user's password or public-key is changed, the cache will need to expire (which takes 5 mins) before the change takes effect.

    This can be used to build general purpose proxies to any kind of backend that rclone supports.
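
For example, a minimal sketch of wiring in a proxy program (the path is hypothetical; with a proxy the backend is created by the program rather than taken from a fixed remote):

rclone serve ftp --auth-proxy /path/to/proxy-program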

    rclone serve ftp remote:path [flags]

Options

          --addr string                            IPaddress:Port or :Port to bind server to (default "localhost:2121")
           --auth-proxy string                      A program to use to create the backend from the auth
           --cert string                            TLS PEM key (concatenation of certificate and CA certificate)
    @@ -3115,6 +4149,7 @@ htpasswd -B htpasswd anotherUser
      --user string                            User name for authentication (default "anonymous")
      --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
      --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration       Interval to poll the cache for stale objects (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
@@ -3127,14 +4162,38 @@ htpasswd -B htpasswd anotherUser
      --vfs-used-is-size                       Use the rclone size algorithm for Used size
      --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)

    Filter Options

Flags for filtering directory listings.

      --delete-excluded                     Delete files on dest excluded from sync
      --exclude stringArray                 Exclude files matching pattern
      --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
      --exclude-if-present stringArray      Exclude directories if filename is present
      --files-from stringArray              Read list of source-file names from file (use - to read from stdin)
      --files-from-raw stringArray          Read list of source-file names from file without any processing of lines (use - to read from stdin)
  -f, --filter stringArray                  Add a file filtering rule
      --filter-from stringArray             Read file filtering patterns from a file (use - to read from stdin)
      --ignore-case                         Ignore case in filters (case insensitive)
      --include stringArray                 Include files matching pattern
      --include-from stringArray            Read file include patterns from file (use - to read from stdin)
      --max-age Duration                    Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-depth int                       If set limits the recursion depth to this (default -1)
      --max-size SizeSuffix                 Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
      --metadata-exclude stringArray        Exclude metadatas matching pattern
      --metadata-exclude-from stringArray   Read metadata exclude patterns from file (use - to read from stdin)
      --metadata-filter stringArray         Add a metadata filtering rule
      --metadata-filter-from stringArray    Read metadata filtering patterns from a file (use - to read from stdin)
      --metadata-include stringArray        Include metadatas matching pattern
      --metadata-include-from stringArray   Read metadata include patterns from file (use - to read from stdin)
      --min-age Duration                    Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size SizeSuffix                 Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)

    See the global flags page for global options not listed here.

SEE ALSO

    rclone serve http

    Serve the remote over HTTP.

Synopsis

    Run a basic web server to serve a remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it.

    You can use the filter flags (e.g. --include, --exclude) to control what is served.

    The server will log errors. Use -v to see access logs.
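
For example, a sketch that serves only HTML files and logs access (the values are illustrative):

rclone serve http remote:path --addr 127.0.0.1:8080 --include "*.html" -v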

    @@ -3267,16 +4326,17 @@ htpasswd -B htpasswd anotherUser

    These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

    For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

    Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

--cache-dir string                     Directory rclone will use for caching.
--vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
--vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)

    If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

    The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

    Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size or --vfs-cache-min-free-space note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size or --vfs-cache-min-free-space is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.

The --vfs-cache-max-age will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation: s, m, h, d, w.

    You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.

    --vfs-cache-mode off

    @@ -3390,8 +4450,9 @@ htpasswd -B htpasswd anotherUser

Note that an internal cache is keyed on user, so only use that for configuration; don't use pass or public_key. This also means that if a user's password or public-key is changed, the cache will need to expire (which takes 5 mins) before the change takes effect.

    This can be used to build general purpose proxies to any kind of backend that rclone supports.

    rclone serve http remote:path [flags]

Options

          --addr stringArray                       IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080])
      --allow-origin string                    Origin which cross-domain request (CORS) can be executed from
           --auth-proxy string                      A program to use to create the backend from the auth
           --baseurl string                         Prefix for URLs - leave blank for root
           --cert string                            TLS PEM key (concatenation of certificate and CA certificate)
    @@ -3421,6 +4482,7 @@ htpasswd -B htpasswd anotherUser
      --user string                            User name for authentication
      --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
      --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration       Interval to poll the cache for stale objects (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
@@ -3433,14 +4495,38 @@ htpasswd -B htpasswd anotherUser
      --vfs-used-is-size                       Use the rclone size algorithm for Used size
      --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)

    Filter Options

Flags for filtering directory listings.

      --delete-excluded                     Delete files on dest excluded from sync
      --exclude stringArray                 Exclude files matching pattern
      --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
      --exclude-if-present stringArray      Exclude directories if filename is present
      --files-from stringArray              Read list of source-file names from file (use - to read from stdin)
      --files-from-raw stringArray          Read list of source-file names from file without any processing of lines (use - to read from stdin)
  -f, --filter stringArray                  Add a file filtering rule
      --filter-from stringArray             Read file filtering patterns from a file (use - to read from stdin)
      --ignore-case                         Ignore case in filters (case insensitive)
      --include stringArray                 Include files matching pattern
      --include-from stringArray            Read file include patterns from file (use - to read from stdin)
      --max-age Duration                    Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-depth int                       If set limits the recursion depth to this (default -1)
      --max-size SizeSuffix                 Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
      --metadata-exclude stringArray        Exclude metadatas matching pattern
      --metadata-exclude-from stringArray   Read metadata exclude patterns from file (use - to read from stdin)
      --metadata-filter stringArray         Add a metadata filtering rule
      --metadata-filter-from stringArray    Read metadata filtering patterns from a file (use - to read from stdin)
      --metadata-include stringArray        Include metadatas matching pattern
      --metadata-include-from stringArray   Read metadata include patterns from file (use - to read from stdin)
      --min-age Duration                    Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size SizeSuffix                 Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)

    See the global flags page for global options not listed here.

SEE ALSO

    rclone serve restic

    Serve the remote for restic's REST API.

Synopsis

    Run a basic web server to serve a remote over restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.

    Restic is a command-line program for doing backups.

    The server will log errors. Use -v to see access logs.
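
For example, a sketch of starting the server and initialising a repository with restic (restic's rest: repository syntax is an assumption here):

rclone serve restic -v remote:backup
restic -r rest:http://localhost:8080/ init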

    @@ -3506,8 +4592,9 @@ htpasswd -B htpasswd anotherUser

    Use --realm to set the authentication realm.

    Use --salt to change the password hashing salt from the default.

    rclone serve restic remote:path [flags]

Options

          --addr stringArray                IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080])
      --allow-origin string             Origin which cross-domain request (CORS) can be executed from
           --append-only                     Disallow deletion of repository data
           --baseurl string                  Prefix for URLs - leave blank for root
           --cache-objects                   Cache listed objects (default true)
    @@ -3527,13 +4614,13 @@ htpasswd -B htpasswd anotherUser
      --stdio                           Run an HTTP2 server on stdin/stdout
      --user string                     User name for authentication

    See the global flags page for global options not listed here.

SEE ALSO

    rclone serve sftp

    Serve the remote over SFTP.

Synopsis

    Run an SFTP server to serve a remote over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it.

    You can use the filter flags (e.g. --include, --exclude) to control what is served.

    The server will respond to a small number of shell commands, mainly md5sum, sha1sum and df, which enable it to provide support for checksums and the about feature when accessed from an sftp remote.
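
For example, a sketch serving with password authentication and connecting from a standard client (the values are illustrative):

rclone serve sftp remote:path --user myuser --pass mypassword
sftp -P 2022 myuser@localhost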

    @@ -3572,16 +4659,17 @@ htpasswd -B htpasswd anotherUser

    These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

    For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

    Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

--cache-dir string                     Directory rclone will use for caching.
--vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
--vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)

    If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

    The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

    Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size or --vfs-cache-min-free-space note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size or --vfs-cache-min-free-space is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.

The --vfs-cache-max-age will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation: s, m, h, d, w.

    You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.

    --vfs-cache-mode off

    @@ -3695,7 +4783,7 @@ htpasswd -B htpasswd anotherUser

Note that an internal cache is keyed on user, so only use that for configuration; don't use pass or public_key. This also means that if a user's password or public-key is changed, the cache will need to expire (which takes 5 mins) before the change takes effect.

    This can be used to build general purpose proxies to any kind of backend that rclone supports.

    rclone serve sftp remote:path [flags]

Options

          --addr string                            IPaddress:Port or :Port to bind server to (default "localhost:2022")
           --auth-proxy string                      A program to use to create the backend from the auth
           --authorized-keys string                 Authorized keys file (default "~/.ssh/authorized_keys")
    @@ -3718,6 +4806,7 @@ htpasswd -B htpasswd anotherUser
      --user string                            User name for authentication
      --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
      --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration       Interval to poll the cache for stale objects (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
@@ -3730,14 +4819,38 @@ htpasswd -B htpasswd anotherUser
      --vfs-used-is-size                       Use the rclone size algorithm for Used size
      --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)

    Filter Options

Flags for filtering directory listings.

      --delete-excluded                     Delete files on dest excluded from sync
      --exclude stringArray                 Exclude files matching pattern
      --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
      --exclude-if-present stringArray      Exclude directories if filename is present
      --files-from stringArray              Read list of source-file names from file (use - to read from stdin)
      --files-from-raw stringArray          Read list of source-file names from file without any processing of lines (use - to read from stdin)
  -f, --filter stringArray                  Add a file filtering rule
      --filter-from stringArray             Read file filtering patterns from a file (use - to read from stdin)
      --ignore-case                         Ignore case in filters (case insensitive)
      --include stringArray                 Include files matching pattern
      --include-from stringArray            Read file include patterns from file (use - to read from stdin)
      --max-age Duration                    Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-depth int                       If set limits the recursion depth to this (default -1)
      --max-size SizeSuffix                 Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
      --metadata-exclude stringArray        Exclude metadatas matching pattern
      --metadata-exclude-from stringArray   Read metadata exclude patterns from file (use - to read from stdin)
      --metadata-filter stringArray         Add a metadata filtering rule
      --metadata-filter-from stringArray    Read metadata filtering patterns from a file (use - to read from stdin)
      --metadata-include stringArray        Include metadatas matching pattern
      --metadata-include-from stringArray   Read metadata include patterns from file (use - to read from stdin)
      --min-age Duration                    Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size SizeSuffix                 Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)

    See the global flags page for global options not listed here.

SEE ALSO

    rclone serve webdav

    Serve remote:path over WebDAV.

Synopsis

    Run a basic WebDAV server to serve a remote over HTTP via the WebDAV protocol. This can be viewed with a WebDAV client, through a web browser, or you can make a remote of type WebDAV to read and write it.
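
For example, a minimal invocation (the remote name is illustrative):

rclone serve webdav remote:path --addr :8080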

    WebDAV options

    --etag-hash

    @@ -3876,16 +4989,17 @@ htpasswd -B htpasswd anotherUser

    These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

    For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

    Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

--cache-dir string                     Directory rclone will use for caching.
--vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
--vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)

    If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

    The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

    Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

    -

    If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.

    +

If using --vfs-cache-max-size or --vfs-cache-min-free-space note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size or --vfs-cache-min-free-space is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.

The --vfs-cache-max-age will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation, s, m, h, d, w.

    You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.
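Putting this together, a typical mount with a bounded cache might look like this (the mount point and sizes are illustrative, not recommendations):

    rclone mount remote: /mnt/remote --vfs-cache-mode writes --vfs-cache-max-size 10G --vfs-cache-min-free-space 1G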

    --vfs-cache-mode off

    @@ -3999,8 +5113,9 @@ htpasswd -B htpasswd anotherUser

    Note that an internal cache is keyed on user so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.

    This can be used to build general purpose proxies to any kind of backend that rclone supports.

    rclone serve webdav remote:path [flags]
    -

    Options

    +

    Options

          --addr stringArray                       IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080])
    +      --allow-origin string                    Origin which cross-domain request (CORS) can be executed from
           --auth-proxy string                      A program to use to create the backend from the auth
           --baseurl string                         Prefix for URLs - leave blank for root
           --cert string                            TLS PEM key (concatenation of certificate and CA certificate)
    @@ -4032,6 +5147,7 @@ htpasswd -B htpasswd anotherUser
      --user string                            User name for authentication
      --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
+     --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration       Interval to poll the cache for stale objects (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
@@ -4044,14 +5160,38 @@ htpasswd -B htpasswd anotherUser
      --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size
      --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)
+

    Filter Options

    +

    Flags for filtering directory listings.

    +
          --delete-excluded                     Delete files on dest excluded from sync
    +      --exclude stringArray                 Exclude files matching pattern
    +      --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
    +      --exclude-if-present stringArray      Exclude directories if filename is present
    +      --files-from stringArray              Read list of source-file names from file (use - to read from stdin)
    +      --files-from-raw stringArray          Read list of source-file names from file without any processing of lines (use - to read from stdin)
    +  -f, --filter stringArray                  Add a file filtering rule
    +      --filter-from stringArray             Read file filtering patterns from a file (use - to read from stdin)
    +      --ignore-case                         Ignore case in filters (case insensitive)
    +      --include stringArray                 Include files matching pattern
    +      --include-from stringArray            Read file include patterns from file (use - to read from stdin)
    +      --max-age Duration                    Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    +      --max-depth int                       If set limits the recursion depth to this (default -1)
    +      --max-size SizeSuffix                 Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
    +      --metadata-exclude stringArray        Exclude metadatas matching pattern
    +      --metadata-exclude-from stringArray   Read metadata exclude patterns from file (use - to read from stdin)
    +      --metadata-filter stringArray         Add a metadata filtering rule
    +      --metadata-filter-from stringArray    Read metadata filtering patterns from a file (use - to read from stdin)
    +      --metadata-include stringArray        Include metadatas matching pattern
    +      --metadata-include-from stringArray   Read metadata include patterns from file (use - to read from stdin)
    +      --min-age Duration                    Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    +      --min-size SizeSuffix                 Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    rclone settier

    Changes storage class/tier of objects in remote.

    -

    Synopsis

    +

    Synopsis

rclone settier changes storage tier or class at remote if supported. A few cloud storage services provide different storage classes for objects, for example AWS S3 and Glacier, Azure Blob Storage - Hot, Cool and Archive, Google Cloud Storage - Regional Storage, Nearline, Coldline etc.

Note that certain tier changes make objects unavailable for immediate access. For example, tiering to archive in Azure Blob Storage puts objects into a frozen state; the user can restore them by setting the tier back to Hot/Cool. Similarly, moving S3 objects to Glacier makes them inaccessible.

You can use it to tier a single object

    @@ -4061,25 +5201,25 @@ htpasswd -B htpasswd anotherUser

Or just provide a remote directory and all files in the directory will be tiered

    rclone settier tier remote:path/dir
    rclone settier tier remote:path [flags]
    -

    Options

    +

    Options

      -h, --help   help for settier

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    rclone test

    Run a test command

    -

    Synopsis

    +

    Synopsis

    Rclone test is used to run test commands.

    Select which test command you want with the subcommand, eg

    rclone test memory remote:

    Each subcommand has its own options which you can see in their help.

    NB Be careful running these commands, they may do strange things so reading their documentation first is recommended.

    -

    Options

    +

    Options

      -h, --help   help for test

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

Note that the value of --timestamp is in UTC. If you want local time then add the --localtime flag.
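For example, to backdate a file using your local timezone (the path and time are illustrative):

    rclone touch remote:path/file.txt --timestamp 2023-09-11T15:04:05 --localtime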

    rclone touch remote:path [flags]
    -

    Options

    +

    Options

      -h, --help               help for touch
           --localtime          Use localtime for timestamp, not UTC
       -C, --no-create          Do not create the file if it does not exist (implied with --recursive)
       -R, --recursive          Recursively touch all files
       -t, --timestamp string   Use specified time instead of the current time of day
    +

    Important Options

    +

    Important flags useful for most commands.

    +
      -n, --dry-run         Do a trial run with no permanent changes
    +  -i, --interactive     Enable interactive mode
    +  -v, --verbose count   Print lots more stuff (repeat for more)
    +

    Filter Options

    +

    Flags for filtering directory listings.

    +
          --delete-excluded                     Delete files on dest excluded from sync
    +      --exclude stringArray                 Exclude files matching pattern
    +      --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
    +      --exclude-if-present stringArray      Exclude directories if filename is present
    +      --files-from stringArray              Read list of source-file names from file (use - to read from stdin)
    +      --files-from-raw stringArray          Read list of source-file names from file without any processing of lines (use - to read from stdin)
    +  -f, --filter stringArray                  Add a file filtering rule
    +      --filter-from stringArray             Read file filtering patterns from a file (use - to read from stdin)
    +      --ignore-case                         Ignore case in filters (case insensitive)
    +      --include stringArray                 Include files matching pattern
    +      --include-from stringArray            Read file include patterns from file (use - to read from stdin)
    +      --max-age Duration                    Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    +      --max-depth int                       If set limits the recursion depth to this (default -1)
    +      --max-size SizeSuffix                 Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
    +      --metadata-exclude stringArray        Exclude metadatas matching pattern
    +      --metadata-exclude-from stringArray   Read metadata exclude patterns from file (use - to read from stdin)
    +      --metadata-filter stringArray         Add a metadata filtering rule
    +      --metadata-filter-from stringArray    Read metadata filtering patterns from a file (use - to read from stdin)
    +      --metadata-include stringArray        Include metadatas matching pattern
    +      --metadata-include-from stringArray   Read metadata include patterns from file (use - to read from stdin)
    +      --min-age Duration                    Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    +      --min-size SizeSuffix                 Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
    +

    Listing Options

    +

    Flags for listing directories.

    +
          --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
    +      --fast-list           Use recursive list if available; uses more memory but fewer transactions

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    rclone tree

    List the contents of the remote in a tree like fashion.

    -

    Synopsis

    +

    Synopsis

    rclone tree lists the contents of a remote in a similar way to the unix tree command.

    For example

    $ rclone tree remote:path
    @@ -4226,7 +5400,7 @@ htpasswd -B htpasswd anotherUser

The tree command has many options for controlling the listing which are compatible with the unix tree command, for example you can include file sizes with --size. Note that not all of them have short options as they conflict with rclone's short options.

    For a more interactive navigation of the remote see the ncdu command.

    rclone tree remote:path [flags]
    -

    Options

    +

    Options

      -a, --all             All files are listed (list . files too)
       -d, --dirs-only       List directories only
           --dirsfirst       List directories before files (-U disables)
    @@ -4246,8 +5420,36 @@ htpasswd -B htpasswd anotherUser
  -r, --sort-reverse    Reverse the order of the sort
  -U, --unsorted        Leave files unsorted
      --version         Sort files alphanumerically by version
+

    Filter Options

    +

    Flags for filtering directory listings.

    +
          --delete-excluded                     Delete files on dest excluded from sync
    +      --exclude stringArray                 Exclude files matching pattern
    +      --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
    +      --exclude-if-present stringArray      Exclude directories if filename is present
    +      --files-from stringArray              Read list of source-file names from file (use - to read from stdin)
    +      --files-from-raw stringArray          Read list of source-file names from file without any processing of lines (use - to read from stdin)
    +  -f, --filter stringArray                  Add a file filtering rule
    +      --filter-from stringArray             Read file filtering patterns from a file (use - to read from stdin)
    +      --ignore-case                         Ignore case in filters (case insensitive)
    +      --include stringArray                 Include files matching pattern
    +      --include-from stringArray            Read file include patterns from file (use - to read from stdin)
    +      --max-age Duration                    Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    +      --max-depth int                       If set limits the recursion depth to this (default -1)
    +      --max-size SizeSuffix                 Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
    +      --metadata-exclude stringArray        Exclude metadatas matching pattern
    +      --metadata-exclude-from stringArray   Read metadata exclude patterns from file (use - to read from stdin)
    +      --metadata-filter stringArray         Add a metadata filtering rule
    +      --metadata-filter-from stringArray    Read metadata filtering patterns from a file (use - to read from stdin)
    +      --metadata-include stringArray        Include metadatas matching pattern
    +      --metadata-include-from stringArray   Read metadata include patterns from file (use - to read from stdin)
    +      --min-age Duration                    Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    +      --min-size SizeSuffix                 Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
    +

    Listing Options

    +

    Flags for listing directories.

    +
          --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
    +      --fast-list           Use recursive list if available; uses more memory but fewer transactions

    See the global flags page for global options not listed here.

    -

    SEE ALSO

    +

    SEE ALSO

    @@ -4328,6 +5530,7 @@ rclone copy :sftp,host=example.com:path/to/dir /tmp/dir

    Valid remote names

Remote names are case sensitive, and must adhere to the following rules:
- May contain numbers, letters, _, -, ., +, @ and space.
- May not start with - or space.
- May not end with space.

Starting with rclone version 1.61, any Unicode numbers and letters are allowed, while in older versions it was limited to plain ASCII (0-9, A-Z, a-z). If you use the same rclone configuration from different shells, which may be configured with different character encoding, you must take care to use characters that can be written in all of them. This is mostly a problem on Windows, where the console traditionally uses a non-Unicode character set - defined by the so-called "code page".

    +

Do not use single character names on Windows as they create ambiguity with Windows drive names, e.g. a remote called C is indistinguishable from the C drive. Rclone will always assume that a single letter name refers to a drive.

    Quoting and the shell

    When you are typing commands to your computer you are using something called the command line shell. This interprets various characters in an OS specific way.

    Here are some gotchas which may help users unfamiliar with the shell rules

    @@ -4472,7 +5675,7 @@ rclone sync --interactive /path/to/files remote:current-backup

The metadata keys mtime and content-type, if supplied in the metadata, take precedence over the modification time or Content-Type read from the source object.

    Hashes are not included in system metadata as there is a well defined way of reading those already.

    -

    Options

    +

    Options

    Rclone has a number of options to control its behaviour.

    Options that take parameters can have the values passed in two ways, --option=value or --option value. However boolean (true/false) options behave slightly differently to the other options in that --boolean sets the option to true and the absence of the flag sets it to false. It is also possible to specify --boolean=false or --boolean=true. Note that --boolean false is not valid - this is parsed as --boolean and the false is parsed as an extra command line argument for rclone.
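For example, all of the following are valid except the last, which sets --checksum to true and passes false as an extra argument (paths are illustrative):

    rclone copy /source remote:dest --max-depth=2
    rclone copy /source remote:dest --checksum
    rclone copy /source remote:dest --checksum=false
    rclone copy /source remote:dest --checksum false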

    Time or duration options

    @@ -4508,6 +5711,7 @@ rclone sync --interactive /path/to/files remote:current-backup

    See --compare-dest and --copy-dest.

    --bind string

    Local address to bind to for outgoing connections. This can be an IPv4 address (1.2.3.4), an IPv6 address (1234::789A) or host name. If the host name doesn't resolve or resolves to more than one IP address it will give an error.

    +

    You can use --bind 0.0.0.0 to force rclone to use IPv4 addresses and --bind ::0 to force rclone to use IPv6 addresses.
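For example, to force a sync over IPv4 (paths are illustrative):

    rclone sync /source remote:dest --bind 0.0.0.0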

    --bwlimit=BANDWIDTH_SPEC

    This option controls the bandwidth limit. For example

    --bwlimit 10M
    @@ -4789,51 +5993,47 @@ y/n/s/!/q> n

    You can use this command to disable recursion (with --max-depth 1).

    Note that if you use this with sync and --delete-excluded the files not recursed through are considered excluded and will be deleted on the destination. Test first with --dry-run if you are not sure what will happen.

    --max-duration=TIME

    -

    Rclone will stop scheduling new transfers when it has run for the duration specified.

    -

    Defaults to off.

    -

    When the limit is reached any existing transfers will complete.

    -

    Rclone won't exit with an error if the transfer limit is reached.

    +

    Rclone will stop transferring when it has run for the duration specified. Defaults to off.

    +

    When the limit is reached all transfers will stop immediately. Use --cutoff-mode to modify this behaviour.

    +

    Rclone will exit with exit code 10 if the duration limit is reached.

    --max-transfer=SIZE

    Rclone will stop transferring when it has reached the size specified. Defaults to off.

    -

    When the limit is reached all transfers will stop immediately.

    +

    When the limit is reached all transfers will stop immediately. Use --cutoff-mode to modify this behaviour.

    Rclone will exit with exit code 8 if the transfer limit is reached.

    +

    --cutoff-mode=hard|soft|cautious

    +

This modifies the behavior of --max-transfer and --max-duration. Defaults to --cutoff-mode=hard.

    +

    Specifying --cutoff-mode=hard will stop transferring immediately when Rclone reaches the limit.

    +

    Specifying --cutoff-mode=soft will stop starting new transfers when Rclone reaches the limit.

    +

Specifying --cutoff-mode=cautious will try to prevent Rclone from reaching the limit. This is only applicable for --max-transfer.
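For example, to stop scheduling new transfers after 2 hours while letting in-flight transfers finish (paths are illustrative):

    rclone copy /source remote:dest --max-duration 2h --cutoff-mode=soft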

    -M, --metadata

Setting this flag enables rclone to copy the metadata from the source to the destination. For local backends this is ownership, permissions, xattr etc. See the metadata section for more info.

    --metadata-set key=value

Add metadata key = value when uploading. This can be repeated as many times as required. See the metadata section for more info.

    -

    --cutoff-mode=hard|soft|cautious

    -

    This modifies the behavior of --max-transfer Defaults to --cutoff-mode=hard.

    -

    Specifying --cutoff-mode=hard will stop transferring immediately when Rclone reaches the limit.

    -

    Specifying --cutoff-mode=soft will stop starting new transfers when Rclone reaches the limit.

    -

    Specifying --cutoff-mode=cautious will try to prevent Rclone from reaching the limit.

    --modify-window=TIME

    When checking whether a file has been modified, this is the maximum allowed time difference that a file can have and still be considered equivalent.

    The default is 1ns unless this is overridden by a remote. For example OS X only stores modification times to the nearest second so if you are reading and writing to an OS X filing system this will be 1s by default.

    This command line flag allows you to override that computed default.

    --multi-thread-write-buffer-size=SIZE

    -

    When downloading with multiple threads, rclone will buffer SIZE bytes in memory before writing to disk for each thread.

    -

    This can improve performance if the underlying filesystem does not deal well with a lot of small writes in different positions of the file, so if you see downloads being limited by disk write speed, you might want to experiment with different values. Specially for magnetic drives and remote file systems a higher value can be useful.

    +

    When transferring with multiple threads, rclone will buffer SIZE bytes in memory before writing to disk for each thread.

    +

This can improve performance if the underlying filesystem does not deal well with a lot of small writes in different positions of the file, so if you see transfers being limited by disk write speed, you might want to experiment with different values. Especially for magnetic drives and remote file systems a higher value can be useful.

    Nevertheless, the default of 128k should be fine for almost all use cases, so before changing it ensure that network is not really your bottleneck.

    As a final hint, size is not the only factor: block size (or similar concept) can have an impact. In one case, we observed that exact multiples of 16k performed much better than other values.
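For example, to experiment with a larger buffer when writing to a magnetic drive (the value, an exact multiple of 16k, is illustrative):

    rclone copy remote:path /mnt/slow-disk --multi-thread-write-buffer-size 256k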

    -

    --multi-thread-cutoff=SIZE

    -

    When downloading files to the local backend above this size, rclone will use multiple threads to download the file (default 250M).

    -

    Rclone preallocates the file (using fallocate(FALLOC_FL_KEEP_SIZE) on unix or NTSetInformationFile on Windows both of which takes no time) then each thread writes directly into the file at the correct place. This means that rclone won't create fragmented or sparse files and there won't be any assembly time at the end of the transfer.

    -

    The number of threads used to download is controlled by --multi-thread-streams.

    +

    --multi-thread-chunk-size=SizeSuffix

    +

Normally the chunk size for multi thread transfers is set by the backend. However, some backends such as local and smb (which implement OpenWriterAt but not OpenChunkWriter) don't have a natural chunk size.

    +

    In this case the value of this option is used (default 64Mi).

    +

    --multi-thread-cutoff=SIZE

    +

    When transferring files above SIZE to capable backends, rclone will use multiple threads to transfer the file (default 256M).

    +

Capable backends are marked in the overview as MultithreadUpload. (They need to implement either the OpenWriterAt or OpenChunkWriter internal interfaces). These include local, s3, azureblob, b2, oracleobjectstorage and smb at the time of writing.

    +

On the local disk, rclone preallocates the file (using fallocate(FALLOC_FL_KEEP_SIZE) on unix or NTSetInformationFile on Windows, both of which take no time) then each thread writes directly into the file at the correct place. This means that rclone won't create fragmented or sparse files and there won't be any assembly time at the end of the transfer.

    +

    The number of threads used to transfer is controlled by --multi-thread-streams.

    Use -vv if you wish to see info about the threads.

    -

    This will work with the sync/copy/move commands and friends copyto/moveto. Multi thread downloads will be used with rclone mount and rclone serve if --vfs-cache-mode is set to writes or above.

    -

    NB that this only works for a local destination but will work with any source.

    -

    NB that multi thread copies are disabled for local to local copies as they are faster without unless --multi-thread-streams is set explicitly.

    -

    NB on Windows using multi-thread downloads will cause the resulting files to be sparse. Use --local-no-sparse to disable sparse files (which may cause long delays at the start of downloads) or disable multi-thread downloads with --multi-thread-streams 0

    +

    This will work with the sync/copy/move commands and friends copyto/moveto. Multi thread transfers will be used with rclone mount and rclone serve if --vfs-cache-mode is set to writes or above.

    +

    NB that this only works with supported backends as the destination but will work with any backend as the source.

    +

NB that multi-thread copies are disabled for local to local copies as they are faster without, unless --multi-thread-streams is set explicitly.

    +

NB on Windows using multi-thread transfers to the local disk will cause the resulting files to be sparse. Use --local-no-sparse to disable sparse files (which may cause long delays at the start of transfers) or disable multi-thread transfers with --multi-thread-streams 0.

    --multi-thread-streams=N

    -

    When using multi thread downloads (see above --multi-thread-cutoff) this sets the maximum number of streams to use. Set to 0 to disable multi thread downloads (Default 4).

    -

    Exactly how many streams rclone uses for the download depends on the size of the file. To calculate the number of download streams Rclone divides the size of the file by the --multi-thread-cutoff and rounds up, up to the maximum set with --multi-thread-streams.

    -

    So if --multi-thread-cutoff 250M and --multi-thread-streams 4 are in effect (the defaults):

    - +

When using multi thread transfers (see above --multi-thread-cutoff) this sets the number of streams to use. Set to 0 to disable multi thread transfers (default 4).

    +

If the backend has a --backend-upload-concurrency setting (e.g. --s3-upload-concurrency) then that setting will be used as the number of streams instead, if it is larger than the value of --multi-thread-streams or if --multi-thread-streams isn't set.
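For example, to transfer a large file with 8 streams, treating anything over 100M as eligible for multi-thread transfer (names are illustrative):

    rclone copy /source/bigfile remote:dest --multi-thread-streams 8 --multi-thread-cutoff 100M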

    --no-check-dest

The --no-check-dest flag can be used with move or copy and it causes rclone not to check the destination at all when copying files.

    This means that:

    @@ -5210,10 +6410,11 @@ export RCLONE_CONFIG_PASS
  • 7 - Fatal error (one that more retries won't fix, like account suspended) (Fatal errors)
  • 8 - Transfer exceeded - limit set by --max-transfer reached
  • 9 - Operation successful, but no files transferred
  • 10 - Duration exceeded - limit set by --max-duration reached
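A wrapper script can act on these codes, for example (a minimal sketch assuming a POSIX shell and illustrative paths):

    rclone sync /source remote:dest --max-duration 30m
    if [ "$?" -eq 10 ]; then
        echo "sync stopped: duration limit reached, will resume on next run"
    fi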
Environment Variables

    Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.

    -

    Options

    +

    Options

    Every option in rclone can have its default set by environment variable.

    To find the name of the environment variable, first, take the long option name, strip the leading --, change - to _, make upper case and prepend RCLONE_.

    For example, to always set --stats 5s, set the environment variable RCLONE_STATS=5s. If you set stats on the command line this will override the environment variable setting.
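Using unix ways of setting environment variables, that looks like this (paths are illustrative):

    $ export RCLONE_STATS=5s
    $ rclone copy /source remote:dest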

    @@ -5223,7 +6424,7 @@ export RCLONE_CONFIG_PASS

    The options set by environment variables can be seen with the -vv flag, e.g. rclone version -vv.

    Config file

    You can set defaults for values in the config file on an individual remote basis. The names of the config items are documented in the page for each backend.

    -

    To find the name of the environment variable, you need to set, take RCLONE_CONFIG_ + name of remote + _ + name of config file option and make it all uppercase.

    +

To find the name of the environment variable you need to set, take RCLONE_CONFIG_ + name of remote + _ + name of config file option and make it all uppercase. Note one implication here is the remote's name must be convertible into a valid environment variable name, so it can only contain letters, digits, or the _ (underscore) character.

    For example, to configure an S3 remote named mys3: without a config file (using unix ways of setting environment variables):

    $ export RCLONE_CONFIG_MYS3_TYPE=s3
     $ export RCLONE_CONFIG_MYS3_ACCESS_KEY_ID=XXX
    @@ -5350,7 +6551,7 @@ y/n> y

    To test filters without risk of damage to data, apply them to rclone ls, or with the --dry-run and -vv flags.

    Rclone filter patterns can only be used in filter command line options, not in the specification of a remote.

    E.g. rclone copy "remote:dir*.jpg" /path/to/dir does not have a filter effect. rclone copy remote:dir /path/to/dir --include "*.jpg" does.

    -

    Important Avoid mixing any two of --include..., --exclude... or --filter... flags in an rclone command. The results may not be what you expect. Instead use a --filter... flag.

    +

    Important Avoid mixing any two of --include..., --exclude... or --filter... flags in an rclone command. The results might not be what you expect. Instead use a --filter... flag.

    Patterns for matching path/file names

    Pattern syntax

    Here is a formal definition of the pattern syntax, examples are below.

@@ -5387,7 +6588,7 @@ ASCII character classes (e.g. [[:alnum:]], [[:alpha:]], [[:punct:]], [[:xdigit:]])
    /file.jpg  - matches "file.jpg" in the root directory of the remote
               - doesn't match "afile.jpg"
               - doesn't match "directory/file.jpg"
-

    The top level of the remote may not be the top level of the drive.

    +

    The top level of the remote might not be the top level of the drive.

    E.g. for a Microsoft Windows local directory structure

    F:
     ├── bkp
    @@ -5616,7 +6817,7 @@ ASCII character classes (e.g. [[:alnum:]], [[:alpha:]], [[:punct:]], [[:xdigit:]
     

    --exclude has no effect when combined with --files-from or --files-from-raw flags.

    E.g. rclone ls remote: --exclude *.bak excludes all .bak files from listing.

    E.g. rclone size remote: "--exclude /dir/**" returns the total size of all files on remote: excluding those in root directory dir and sub directories.

    -

    E.g. on Microsoft Windows rclone ls remote: --exclude "*\[{JP,KR,HK}\]*" lists the files in remote: with [JP] or [KR] or [HK] in their name. Quotes prevent the shell from interpreting the \ characters.\ characters escape the [ and ] so an rclone filter treats them literally rather than as a character-range. The { and } define an rclone pattern list. For other operating systems single quotes are required ie rclone ls remote: --exclude '*\[{JP,KR,HK}\]*'

    +

E.g. on Microsoft Windows rclone ls remote: --exclude "*\[{JP,KR,HK}\]*" lists the files in remote: without [JP] or [KR] or [HK] in their name. Quotes prevent the shell from interpreting the \ characters. \ characters escape the [ and ] so an rclone filter treats them literally rather than as a character-range. The { and } define an rclone pattern list. For other operating systems single quotes are required ie rclone ls remote: --exclude '*\[{JP,KR,HK}\]*'

    --exclude-from - Read exclude patterns from file

    Excludes path/file names from an rclone command based on rules in a named file. The file contains a list of remarks and pattern rules.

    For an example exclude-file.txt:

    @@ -6027,7 +7228,7 @@ dir1/dir2/dir3/.ignore

    For example, if you wished to run a sync with the --checksum parameter, you would pass this parameter in your JSON blob.

    "_config":{"CheckSum": true}

    If using rclone rc this could be passed as

    -
    rclone rc operations/sync ... _config='{"CheckSum": true}'
    +
    rclone rc sync/sync ... _config='{"CheckSum": true}'

    Any config parameters you don't set will inherit the global defaults which were set with command line flags or environment variables.

    Note that it is possible to set some values as strings or integers - see data types for more info. Here is an example setting the equivalent of --buffer-size in string or integer format.

    "_config":{"BufferSize": "42M"}
    @@ -6290,6 +7491,21 @@ OR
     }
     

    Authentication is required for this call.

    +

    core/du: Returns disk usage of a locally attached disk.

    +

    This returns the disk usage for the local directory passed in as dir.

    +

    If the directory is not passed in, it defaults to the directory pointed to by --cache-dir.

    + +

    Returns:

    +
    {
    +    "dir": "/",
    +    "info": {
    +        "Available": 361769115648,
    +        "Free": 361785892864,
    +        "Total": 982141468672
    +    }
    +}
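A call might look like this (the directory is illustrative):

    rclone rc core/du dir=/home/user/.cache/rclone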

    core/gc: Runs a garbage collection.

    This tells the go runtime to do a garbage collection run. It isn't necessary to call this normally, but it can be useful for debugging memory problems.

    core/group-list: Returns list of stats.

@@ -6341,6 +7557,10 @@ OR
     "lastError": last error string,
     "renames" : number of files renamed,
     "retryError": boolean showing whether there has been at least one non-NoRetryError,
+    "serverSideCopies": number of server side copies done,
+    "serverSideCopyBytes": number of bytes server side copied,
+    "serverSideMoves": number of server side moves done,
+    "serverSideMoveBytes": number of bytes server side moved,
     "speed": average speed in bytes per second since start of the group,
     "totalBytes": total number of bytes in the group,
     "totalChecks": total number of checks in the group,
@@ -6466,7 +7686,8 @@ OR

    Parameters: None.

    Results:

    job/status: Reads the status of the job ID

    Parameters:

    @@ -6788,6 +8009,21 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"Cache

    See the rmdirs command for more information on the above.

    Authentication is required for this call.

    +

    operations/settier: Changes storage tier or class on all files in the path

    +

    This takes the following parameters:

    + +

    See the settier command for more information on the above.

    +

    Authentication is required for this call.

    +

    operations/settierfile: Changes storage tier or class on the single file pointed to

    +

    This takes the following parameters:

    + +

    See the settierfile command for more information on the above.

    +

    Authentication is required for this call.

    operations/size: Count the number of bytes and files in remote

    This takes the following parameters:

[Table diff: rows are added to the overview hashes/features table for Proton Drive (SHA1, R/W, No, No, R, -) and Quatrix by Maytech (-, R/W, No, No, -, -); the surrounding table cells did not survive text extraction.]

    Notes

    ¹ Dropbox supports its own custom hash. This is an SHA256 sum of all the 4 MiB block SHA256s.

    ² SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in the remote's PATH.

    -

    ³ WebDAV supports hashes when used with Fastmail Files. Owncloud and Nextcloud only.

    +

    ³ WebDAV supports hashes when used with Fastmail Files, Owncloud and Nextcloud only.

    ⁴ WebDAV supports modtimes when used with Fastmail Files, Owncloud and Nextcloud only.

⁵ QuickXorHash is Microsoft's own hash.

    ⁶ Mail.ru uses its own modified SHA1 hash

    @@ -8020,7 +9277,21 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total

    See the metadata docs for more info.

    Optional Features

    All rclone remotes support a base command set. Other features depend upon backend-specific capabilities.

[Table diff: the Optional Features table cells did not survive text extraction. The hunks add one entry to every backend row (the new MultithreadUpload column described below) plus rows for the newly added backends.]

    The remote supports a recursive list to list all the contents beneath a directory quickly. This enables the --fast-list flag to work. See the rclone docs for more details.

    StreamUpload

    Some remotes allow files to be uploaded without knowing the file size in advance. This allows certain operations to work without spooling the file to local disk first, e.g. rclone rcat.

    +

    MultithreadUpload

    +

    Some remotes allow transfers to the remote to be sent as chunks in parallel. If this is supported then rclone will use multi-thread copying to transfer files much faster.
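You can check what an individual remote reports with the backend features command (remote: is a placeholder):

    rclone backend features remote: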

    LinkSharing

    Sets the necessary permissions on a file or folder and prints a link that allows others to access them, even if they don't have an account on the particular cloud provider.

    About

    @@ -8629,174 +9974,211 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total

    EmptyDir

    The remote supports empty directories. See Limitations for details. Most Object/Bucket-based remotes do not support this.

    Global Flags

    -

    This describes the global flags available to every rclone command split into two groups, non backend and backend flags.

    -

    Non Backend Flags

    -

    These flags are available for every command.

    -
          --ask-password                                Allow prompt for password for encrypted configuration (default true)
    -      --auto-confirm                                If enabled, do not request console confirmation
    -      --backup-dir string                           Make backups into hierarchy based in DIR
    -      --bind string                                 Local address to bind to for outgoing connections, IPv4, IPv6 or name
    -      --buffer-size SizeSuffix                      In memory buffer size when reading files for each --transfer (default 16Mi)
    -      --bwlimit BwTimetable                         Bandwidth limit in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
    -      --bwlimit-file BwTimetable                    Bandwidth limit per file in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
    -      --ca-cert stringArray                         CA certificate used to verify servers
    -      --cache-dir string                            Directory rclone will use for caching (default "$HOME/.cache/rclone")
    -      --check-first                                 Do all the checks before starting transfers
    -      --checkers int                                Number of checkers to run in parallel (default 8)
    -  -c, --checksum                                    Skip based on checksum (if available) & size, not mod-time & size
    -      --client-cert string                          Client SSL certificate (PEM) for mutual TLS auth
    -      --client-key string                           Client SSL private key (PEM) for mutual TLS auth
    -      --color string                                When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default "AUTO")
    +

    This describes the global flags available to every rclone command split into groups.

    +

    Copy

    +

    Flags for anything which can Copy a file.

    +
          --check-first                                 Do all the checks before starting transfers
    +  -c, --checksum                                    Check for changes with size & checksum (if available, or fallback to size only).
           --compare-dest stringArray                    Include additional comma separated server-side paths during comparison
    -      --config string                               Config file (default "$HOME/.config/rclone/rclone.conf")
    -      --contimeout Duration                         Connect timeout (default 1m0s)
           --copy-dest stringArray                       Implies --compare-dest but also copies files from paths into destination
    -      --cpuprofile string                           Write cpu profile to file
           --cutoff-mode string                          Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
    -      --default-time Time                           Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
    -      --delete-after                                When synchronizing, delete files on destination after transferring (default)
    -      --delete-before                               When synchronizing, delete files on destination before transferring
    -      --delete-during                               When synchronizing, delete files during transfer
    -      --delete-excluded                             Delete files on dest excluded from sync
    -      --disable string                              Disable a comma separated list of features (use --disable help to see a list)
    -      --disable-http-keep-alives                    Disable HTTP keep-alives and use each connection once.
    -      --disable-http2                               Disable HTTP/2 in the global transport
    -  -n, --dry-run                                     Do a trial run with no permanent changes
    -      --dscp string                                 Set DSCP value to connections, value or name, e.g. CS1, LE, DF, AF21
    -      --dump DumpFlags                              List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
    -      --dump-bodies                                 Dump HTTP headers and bodies - may contain sensitive info
    -      --dump-headers                                Dump HTTP headers - may contain sensitive info
    -      --error-on-no-transfer                        Sets exit code 9 if no files are transferred, useful in scripts
    -      --exclude stringArray                         Exclude files matching pattern
    -      --exclude-from stringArray                    Read file exclude patterns from file (use - to read from stdin)
    -      --exclude-if-present stringArray              Exclude directories if filename is present
    -      --expect-continue-timeout Duration            Timeout when using expect / 100-continue in HTTP (default 1s)
    -      --fast-list                                   Use recursive list if available; uses more memory but fewer transactions
    -      --files-from stringArray                      Read list of source-file names from file (use - to read from stdin)
    -      --files-from-raw stringArray                  Read list of source-file names from file without any processing of lines (use - to read from stdin)
    -  -f, --filter stringArray                          Add a file filtering rule
    -      --filter-from stringArray                     Read file filtering patterns from a file (use - to read from stdin)
    -      --fs-cache-expire-duration Duration           Cache remotes for this long (0 to disable caching) (default 5m0s)
    -      --fs-cache-expire-interval Duration           Interval to check for expired remotes (default 1m0s)
    -      --header stringArray                          Set HTTP header for all transactions
    -      --header-download stringArray                 Set HTTP header for download transactions
    -      --header-upload stringArray                   Set HTTP header for upload transactions
    -      --human-readable                              Print numbers in a human-readable format, sizes with suffix Ki|Mi|Gi|Ti|Pi
    -      --ignore-case                                 Ignore case in filters (case insensitive)
           --ignore-case-sync                            Ignore case when synchronizing
           --ignore-checksum                             Skip post copy check of checksums
    -      --ignore-errors                               Delete even if there are I/O errors
           --ignore-existing                             Skip all files that exist on destination
           --ignore-size                                 Ignore size when skipping use mod-time or checksum
       -I, --ignore-times                                Don't skip files that match size and time - transfer all files
           --immutable                                   Do not modify files, fail if existing files have been modified
    -      --include stringArray                         Include files matching pattern
    -      --include-from stringArray                    Read file include patterns from file (use - to read from stdin)
           --inplace                                     Download directly to destination file instead of atomic download to temp/rename
    -  -i, --interactive                                 Enable interactive mode
    -      --kv-lock-time Duration                       Maximum time to keep key-value database locked by process (default 1s)
    -      --log-file string                             Log everything to this file
    -      --log-format string                           Comma separated list of log format options (default "date,time")
    -      --log-level string                            Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
    -      --log-systemd                                 Activate systemd integration for the logger
    -      --low-level-retries int                       Number of low level retries to do (default 10)
    -      --max-age Duration                            Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
           --max-backlog int                             Maximum number of objects in sync or check backlog (default 10000)
    -      --max-delete int                              When synchronizing, limit the number of deletes (default -1)
    -      --max-delete-size SizeSuffix                  When synchronizing, limit the total size of deletes (default off)
    -      --max-depth int                               If set limits the recursion depth to this (default -1)
           --max-duration Duration                       Maximum duration rclone will transfer data for (default 0s)
    -      --max-size SizeSuffix                         Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
    -      --max-stats-groups int                        Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000)
           --max-transfer SizeSuffix                     Maximum size of data to transfer (default off)
    -      --memprofile string                           Write memory profile to file
       -M, --metadata                                    If set, preserve metadata when copying objects
    -      --metadata-exclude stringArray                Exclude metadatas matching pattern
    -      --metadata-exclude-from stringArray           Read metadata exclude patterns from file (use - to read from stdin)
    -      --metadata-filter stringArray                 Add a metadata filtering rule
    -      --metadata-filter-from stringArray            Read metadata filtering patterns from a file (use - to read from stdin)
    -      --metadata-include stringArray                Include metadatas matching pattern
    -      --metadata-include-from stringArray           Read metadata include patterns from file (use - to read from stdin)
    -      --metadata-set stringArray                    Add metadata key=value when uploading
    -      --min-age Duration                            Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    -      --min-size SizeSuffix                         Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
           --modify-window Duration                      Max time diff to be considered the same (default 1ns)
    -      --multi-thread-cutoff SizeSuffix              Use multi-thread downloads for files above this size (default 250Mi)
    -      --multi-thread-streams int                    Max number of streams to use for multi-thread downloads (default 4)
    +      --multi-thread-chunk-size SizeSuffix          Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi)
    +      --multi-thread-cutoff SizeSuffix              Use multi-thread downloads for files above this size (default 256Mi)
    +      --multi-thread-streams int                    Number of streams to use for multi-thread downloads (default 4)
           --multi-thread-write-buffer-size SizeSuffix   In memory buffer size for writing when in multi-thread mode (default 128Ki)
    -      --no-check-certificate                        Do not verify the server SSL certificate (insecure)
           --no-check-dest                               Don't check the destination, copy regardless
    -      --no-console                                  Hide console window (supported on Windows only)
    -      --no-gzip-encoding                            Don't set Accept-Encoding: gzip
           --no-traverse                                 Don't traverse destination file system on copy
    -      --no-unicode-normalization                    Don't normalize unicode characters in filenames
           --no-update-modtime                           Don't update destination mod-time if files identical
           --order-by string                             Instructions on how to order the transfers, e.g. 'size,descending'
    -      --password-command SpaceSepList               Command for supplying password for encrypted configuration
    -  -P, --progress                                    Show progress during transfer
    -      --progress-terminal-title                     Show progress on the terminal title (requires -P/--progress)
    -  -q, --quiet                                       Print as little stuff as possible
    -      --rc                                          Enable the remote control server
    -      --rc-addr stringArray                         IPaddress:Port or :Port to bind server to (default [localhost:5572])
    -      --rc-allow-origin string                      Set the allowed origin for CORS
    -      --rc-baseurl string                           Prefix for URLs - leave blank for root
    -      --rc-cert string                              TLS PEM key (concatenation of certificate and CA certificate)
    -      --rc-client-ca string                         Client certificate authority to verify clients with
    -      --rc-enable-metrics                           Enable prometheus metrics on /metrics
    -      --rc-files string                             Path to local files to serve on the HTTP server
    -      --rc-htpasswd string                          A htpasswd file - if not provided no authentication is done
    -      --rc-job-expire-duration Duration             Expire finished async jobs older than this value (default 1m0s)
    -      --rc-job-expire-interval Duration             Interval to check for expired async jobs (default 10s)
    -      --rc-key string                               TLS PEM Private key
    -      --rc-max-header-bytes int                     Maximum size of request header (default 4096)
    -      --rc-min-tls-version string                   Minimum TLS version that is acceptable (default "tls1.0")
    -      --rc-no-auth                                  Don't require auth for certain methods
    -      --rc-pass string                              Password for authentication
    -      --rc-realm string                             Realm for authentication
    -      --rc-salt string                              Password hashing salt (default "dlPL2MqE")
    -      --rc-serve                                    Enable the serving of remote objects
    -      --rc-server-read-timeout Duration             Timeout for server reading data (default 1h0m0s)
    -      --rc-server-write-timeout Duration            Timeout for server writing data (default 1h0m0s)
    -      --rc-template string                          User-specified template
    -      --rc-user string                              User name for authentication
    -      --rc-web-fetch-url string                     URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
    -      --rc-web-gui                                  Launch WebGUI on localhost
    -      --rc-web-gui-force-update                     Force update to latest version of web gui
    -      --rc-web-gui-no-open-browser                  Don't open the browser automatically
    -      --rc-web-gui-update                           Check and update to latest version of web gui
           --refresh-times                               Refresh the modtime of remote files
    -      --retries int                                 Retry operations this many times if they fail (default 3)
    -      --retries-sleep Duration                      Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable) (default 0s)
           --server-side-across-configs                  Allow server-side operations (e.g. copy) to work across different configs
           --size-only                                   Skip based on size only, not mod-time or checksum
    -      --stats Duration                              Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable) (default 1m0s)
    -      --stats-file-name-length int                  Max file name length in stats (0 for no limit) (default 45)
    -      --stats-log-level string                      Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
    -      --stats-one-line                              Make the stats fit on one line
    -      --stats-one-line-date                         Enable --stats-one-line and add current date/time prefix
    -      --stats-one-line-date-format string           Enable --stats-one-line-date and use custom formatted date: Enclose date string in double quotes ("), see https://golang.org/pkg/time/#Time.Format
    -      --stats-unit string                           Show data rate in stats as either 'bits' or 'bytes' per second (default "bytes")
           --streaming-upload-cutoff SizeSuffix          Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
    -      --suffix string                               Suffix to add to changed files
    -      --suffix-keep-extension                       Preserve the extension when using --suffix
    -      --syslog                                      Use Syslog for logging
    -      --syslog-facility string                      Facility for syslog, e.g. KERN,USER,... (default "DAEMON")
    -      --temp-dir string                             Directory rclone will use for temporary files (default "/tmp")
    -      --timeout Duration                            IO idle timeout (default 5m0s)
    -      --tpslimit float                              Limit HTTP transactions per second to this
    -      --tpslimit-burst int                          Max burst of transactions for --tpslimit (default 1)
    -      --track-renames                               When synchronizing, track file renames and do a server-side move if possible
    -      --track-renames-strategy string               Strategies to use when synchronizing using track-renames hash|modtime|leaf (default "hash")
    -      --transfers int                               Number of file transfers to run in parallel (default 4)
    -  -u, --update                                      Skip files that are newer on the destination
    -      --use-cookies                                 Enable session cookiejar
    -      --use-json-log                                Use json log format
    -      --use-mmap                                    Use mmap allocator (see docs)
    -      --use-server-modtime                          Use server modified time instead of object metadata
    -      --user-agent string                           Set the user-agent to a specified string (default "rclone/v1.63.0")
    -  -v, --verbose count                               Print lots more stuff (repeat for more)
    -

    Backend Flags

    -

    These flags are available for every command. They control the backends and may be set in the config file.

    + -u, --update Skip files that are newer on the destination
    +

    Sync

    +

    Flags just used for rclone sync.

    +
          --backup-dir string               Make backups into hierarchy based in DIR
    +      --delete-after                    When synchronizing, delete files on destination after transferring (default)
    +      --delete-before                   When synchronizing, delete files on destination before transferring
    +      --delete-during                   When synchronizing, delete files during transfer
    +      --ignore-errors                   Delete even if there are I/O errors
    +      --max-delete int                  When synchronizing, limit the number of deletes (default -1)
    +      --max-delete-size SizeSuffix      When synchronizing, limit the total size of deletes (default off)
    +      --suffix string                   Suffix to add to changed files
    +      --suffix-keep-extension           Preserve the extension when using --suffix
    +      --track-renames                   When synchronizing, track file renames and do a server-side move if possible
    +      --track-renames-strategy string   Strategies to use when synchronizing using track-renames hash|modtime|leaf (default "hash")
    +
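
For example, these flags can be combined so that a sync keeps anything it overwrites or deletes, renamed with a distinguishing suffix (a hypothetical sketch; the remote names and paths are placeholders):

    # moved/overwritten files are kept in dest:archive; with --suffix-keep-extension,
    # file.txt becomes file.bak.txt rather than file.txt.bak
    rclone sync source:path dest:path --backup-dir dest:archive --suffix .bak --suffix-keep-extension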

    Important

    +

    Important flags useful for most commands.

    +
      -n, --dry-run         Do a trial run with no permanent changes
    +  -i, --interactive     Enable interactive mode
    +  -v, --verbose count   Print lots more stuff (repeat for more)
    +
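
A quick illustration of how these are typically combined (the remote names are placeholders):

    # preview the changes verbosely without touching anything, then run for real
    rclone sync source:path dest:path --dry-run -v
    rclone sync source:path dest:path -v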

    Check

    +

    Flags used for rclone check.

    +
          --max-backlog int   Maximum number of objects in sync or check backlog (default 10000)
    +

    Networking

    +

    General networking and HTTP stuff.

    +
          --bind string                        Local address to bind to for outgoing connections, IPv4, IPv6 or name
    +      --bwlimit BwTimetable                Bandwidth limit in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
    +      --bwlimit-file BwTimetable           Bandwidth limit per file in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
    +      --ca-cert stringArray                CA certificate used to verify servers
    +      --client-cert string                 Client SSL certificate (PEM) for mutual TLS auth
    +      --client-key string                  Client SSL private key (PEM) for mutual TLS auth
    +      --contimeout Duration                Connect timeout (default 1m0s)
    +      --disable-http-keep-alives           Disable HTTP keep-alives and use each connection once.
    +      --disable-http2                      Disable HTTP/2 in the global transport
    +      --dscp string                        Set DSCP value to connections, value or name, e.g. CS1, LE, DF, AF21
    +      --expect-continue-timeout Duration   Timeout when using expect / 100-continue in HTTP (default 1s)
    +      --header stringArray                 Set HTTP header for all transactions
    +      --header-download stringArray        Set HTTP header for download transactions
    +      --header-upload stringArray          Set HTTP header for upload transactions
    +      --no-check-certificate               Do not verify the server SSL certificate (insecure)
    +      --no-gzip-encoding                   Don't set Accept-Encoding: gzip
    +      --timeout Duration                   IO idle timeout (default 5m0s)
    +      --tpslimit float                     Limit HTTP transactions per second to this
    +      --tpslimit-burst int                 Max burst of transactions for --tpslimit (default 1)
    +      --use-cookies                        Enable session cookiejar
    +      --user-agent string                  Set the user-agent to a specified string (default "rclone/v1.64.0")
    +
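
For example, --bwlimit accepts either a single rate or a full timetable (the schedule below is hypothetical):

    # 512 KiB/s during office hours, 10 MiB/s in the evening, unlimited overnight
    rclone copy source:path dest:path --bwlimit "08:00,512k 18:00,10M 23:00,off"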

    Performance

    +

    Flags helpful for increasing performance.

    +
          --buffer-size SizeSuffix   In memory buffer size when reading files for each --transfer (default 16Mi)
    +      --checkers int             Number of checkers to run in parallel (default 8)
    +      --transfers int            Number of file transfers to run in parallel (default 4)
    +

    Config

    +

    General configuration of rclone.

    +
          --ask-password                        Allow prompt for password for encrypted configuration (default true)
    +      --auto-confirm                        If enabled, do not request console confirmation
    +      --cache-dir string                    Directory rclone will use for caching (default "$HOME/.cache/rclone")
    +      --color string                        When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default "AUTO")
    +      --config string                       Config file (default "$HOME/.config/rclone/rclone.conf")
    +      --default-time Time                   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
    +      --disable string                      Disable a comma separated list of features (use --disable help to see a list)
    +  -n, --dry-run                             Do a trial run with no permanent changes
    +      --error-on-no-transfer                Sets exit code 9 if no files are transferred, useful in scripts
    +      --fs-cache-expire-duration Duration   Cache remotes for this long (0 to disable caching) (default 5m0s)
    +      --fs-cache-expire-interval Duration   Interval to check for expired remotes (default 1m0s)
    +      --human-readable                      Print numbers in a human-readable format, sizes with suffix Ki|Mi|Gi|Ti|Pi
    +  -i, --interactive                         Enable interactive mode
    +      --kv-lock-time Duration               Maximum time to keep key-value database locked by process (default 1s)
    +      --low-level-retries int               Number of low level retries to do (default 10)
    +      --no-console                          Hide console window (supported on Windows only)
    +      --no-unicode-normalization            Don't normalize unicode characters in filenames
    +      --password-command SpaceSepList       Command for supplying password for encrypted configuration
    +      --retries int                         Retry operations this many times if they fail (default 3)
    +      --retries-sleep Duration              Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable) (default 0s)
    +      --temp-dir string                     Directory rclone will use for temporary files (default "/tmp")
    +      --use-mmap                            Use mmap allocator (see docs)
    +      --use-server-modtime                  Use server modified time instead of object metadata
    +

    Debugging

    +

    Flags for developers.

    +
          --cpuprofile string   Write cpu profile to file
    +      --dump DumpFlags      List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
    +      --dump-bodies         Dump HTTP headers and bodies - may contain sensitive info
    +      --dump-headers        Dump HTTP headers - may contain sensitive info
    +      --memprofile string   Write memory profile to file
    +

    Filter

    +

    Flags for filtering directory listings.

    +
          --delete-excluded                     Delete files on dest excluded from sync
    +      --exclude stringArray                 Exclude files matching pattern
    +      --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
    +      --exclude-if-present stringArray      Exclude directories if filename is present
    +      --files-from stringArray              Read list of source-file names from file (use - to read from stdin)
    +      --files-from-raw stringArray          Read list of source-file names from file without any processing of lines (use - to read from stdin)
    +  -f, --filter stringArray                  Add a file filtering rule
    +      --filter-from stringArray             Read file filtering patterns from a file (use - to read from stdin)
    +      --ignore-case                         Ignore case in filters (case insensitive)
    +      --include stringArray                 Include files matching pattern
    +      --include-from stringArray            Read file include patterns from file (use - to read from stdin)
    +      --max-age Duration                    Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    +      --max-depth int                       If set limits the recursion depth to this (default -1)
    +      --max-size SizeSuffix                 Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
    +      --metadata-exclude stringArray        Exclude metadatas matching pattern
    +      --metadata-exclude-from stringArray   Read metadata exclude patterns from file (use - to read from stdin)
    +      --metadata-filter stringArray         Add a metadata filtering rule
    +      --metadata-filter-from stringArray    Read metadata filtering patterns from a file (use - to read from stdin)
    +      --metadata-include stringArray        Include metadatas matching pattern
    +      --metadata-include-from stringArray   Read metadata include patterns from file (use - to read from stdin)
    +      --min-age Duration                    Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    +      --min-size SizeSuffix                 Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
    +
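
As a brief sketch of how the filter flags compose (the patterns are hypothetical):

    # copy only jpg/png files modified within the last week, skipping anything over 10 MiB
    rclone copy source:path dest:path --include "*.{jpg,png}" --max-age 7d --max-size 10M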

    Listing

    +

    Flags for listing directories.

    +
          --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
    +      --fast-list           Use recursive list if available; uses more memory but fewer transactions
    +

    Logging

    +

    Logging and statistics.

    +
          --log-file string                     Log everything to this file
    +      --log-format string                   Comma separated list of log format options (default "date,time")
    +      --log-level string                    Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
    +      --log-systemd                         Activate systemd integration for the logger
    +      --max-stats-groups int                Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000)
    +  -P, --progress                            Show progress during transfer
    +      --progress-terminal-title             Show progress on the terminal title (requires -P/--progress)
    +  -q, --quiet                               Print as little stuff as possible
    +      --stats Duration                      Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable) (default 1m0s)
    +      --stats-file-name-length int          Max file name length in stats (0 for no limit) (default 45)
    +      --stats-log-level string              Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
    +      --stats-one-line                      Make the stats fit on one line
    +      --stats-one-line-date                 Enable --stats-one-line and add current date/time prefix
    +      --stats-one-line-date-format string   Enable --stats-one-line-date and use custom formatted date: Enclose date string in double quotes ("), see https://golang.org/pkg/time/#Time.Format
    +      --stats-unit string                   Show data rate in stats as either 'bits' or 'bytes' per second (default "bytes")
    +      --syslog                              Use Syslog for logging
    +      --syslog-facility string              Facility for syslog, e.g. KERN,USER,... (default "DAEMON")
    +      --use-json-log                        Use json log format
    +  -v, --verbose count                       Print lots more stuff (repeat for more)
    +
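
For instance, a long-running transfer might combine live progress with a full debug log (the log path is a placeholder):

    # show progress, with stats every 10s, while writing a DEBUG log to disk
    rclone copy source:path dest:path -P --stats 10s --log-level DEBUG --log-file /tmp/rclone.log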

    Metadata

    +

    Flags to control metadata.

    +
      -M, --metadata                            If set, preserve metadata when copying objects
    +      --metadata-exclude stringArray        Exclude metadatas matching pattern
    +      --metadata-exclude-from stringArray   Read metadata exclude patterns from file (use - to read from stdin)
    +      --metadata-filter stringArray         Add a metadata filtering rule
    +      --metadata-filter-from stringArray    Read metadata filtering patterns from a file (use - to read from stdin)
    +      --metadata-include stringArray        Include metadatas matching pattern
    +      --metadata-include-from stringArray   Read metadata include patterns from file (use - to read from stdin)
    +      --metadata-set stringArray            Add metadata key=value when uploading
    +

    RC

    +

    Flags to control the Remote Control API.

    +
          --rc                                 Enable the remote control server
    +      --rc-addr stringArray                IPaddress:Port or :Port to bind server to (default [localhost:5572])
    +      --rc-allow-origin string             Origin which cross-domain request (CORS) can be executed from
    +      --rc-baseurl string                  Prefix for URLs - leave blank for root
    +      --rc-cert string                     TLS PEM key (concatenation of certificate and CA certificate)
    +      --rc-client-ca string                Client certificate authority to verify clients with
    +      --rc-enable-metrics                  Enable prometheus metrics on /metrics
    +      --rc-files string                    Path to local files to serve on the HTTP server
    +      --rc-htpasswd string                 A htpasswd file - if not provided no authentication is done
    +      --rc-job-expire-duration Duration    Expire finished async jobs older than this value (default 1m0s)
    +      --rc-job-expire-interval Duration    Interval to check for expired async jobs (default 10s)
    +      --rc-key string                      TLS PEM Private key
    +      --rc-max-header-bytes int            Maximum size of request header (default 4096)
    +      --rc-min-tls-version string          Minimum TLS version that is acceptable (default "tls1.0")
    +      --rc-no-auth                         Don't require auth for certain methods
    +      --rc-pass string                     Password for authentication
    +      --rc-realm string                    Realm for authentication
    +      --rc-salt string                     Password hashing salt (default "dlPL2MqE")
    +      --rc-serve                           Enable the serving of remote objects
    +      --rc-server-read-timeout Duration    Timeout for server reading data (default 1h0m0s)
    +      --rc-server-write-timeout Duration   Timeout for server writing data (default 1h0m0s)
    +      --rc-template string                 User-specified template
    +      --rc-user string                     User name for authentication
    +      --rc-web-fetch-url string            URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
    +      --rc-web-gui                         Launch WebGUI on localhost
    +      --rc-web-gui-force-update            Force update to latest version of web gui
    +      --rc-web-gui-no-open-browser         Don't open the browser automatically
    +      --rc-web-gui-update                  Check and update to latest version of web gui
    +
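
For example, a minimal authenticated remote control setup might look like this (the credentials are placeholders):

    # start the remote control server
    rclone rcd --rc-user admin --rc-pass secret
    # from another shell, query the global transfer statistics
    rclone rc core/stats --user admin --pass secret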

    Backend

    +

    Backend only flags. These can be set in the config file also.

          --acd-auth-url string                                 Auth server URL
           --acd-client-id string                                OAuth Client Id
           --acd-client-secret string                            OAuth Client Secret
    @@ -8822,8 +10204,6 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --azureblob-env-auth                                  Read credentials from runtime (environment variables, CLI or MSI)
           --azureblob-key string                                Storage Account Shared Key
           --azureblob-list-chunk int                            Size of blob list (default 5000)
    -      --azureblob-memory-pool-flush-time Duration           How often internal memory buffer pools will be flushed (default 1m0s)
    -      --azureblob-memory-pool-use-mmap                      Whether to use mmap buffers in internal memory pool
           --azureblob-msi-client-id string                      Object ID of the user-assigned MSI to use, if any
           --azureblob-msi-mi-res-id string                      Azure resource ID of the user-assigned MSI to use, if any
           --azureblob-msi-object-id string                      Object ID of the user-assigned MSI to use, if any
    @@ -8849,9 +10229,8 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --b2-endpoint string                                  Endpoint for the service
           --b2-hard-delete                                      Permanently delete files on remote removal, otherwise hide files
           --b2-key string                                       Application Key
    -      --b2-memory-pool-flush-time Duration                  How often internal memory buffer pools will be flushed (default 1m0s)
    -      --b2-memory-pool-use-mmap                             Whether to use mmap buffers in internal memory pool
           --b2-test-mode string                                 A flag string for X-Bz-Test-Mode header for debugging
    +      --b2-upload-concurrency int                           Concurrency for multipart uploads (default 16)
           --b2-upload-cutoff SizeSuffix                         Cutoff for switching to chunked upload (default 200Mi)
           --b2-version-at Time                                  Show file versions as they were at the specified time (default off)
           --b2-versions                                         Include old versions in directory listings
    @@ -8863,6 +10242,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --box-client-secret string                            OAuth Client Secret
           --box-commit-retries int                              Max number of times to try committing a multipart file (default 100)
           --box-encoding MultiEncoder                           The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
    +      --box-impersonate string                              Impersonate this user ID when using a service account
           --box-list-chunk int                                  Size of listing chunk 1-1000 (default 1000)
           --box-owned-by string                                 Only show items owned by the login (email address) passed in
           --box-root-folder-id string                           Fill in for rclone to use a non root folder as its starting point
    @@ -8922,6 +10302,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --drive-encoding MultiEncoder                         The encoding for the backend (default InvalidUtf8)
           --drive-env-auth                                      Get IAM credentials from runtime (environment variables or instance meta data if no env vars)
           --drive-export-formats string                         Comma separated list of preferred formats for downloading Google docs (default "docx,xlsx,pptx,svg")
    +      --drive-fast-list-bug-fix                             Work around a bug in Google Drive listing (default true)
           --drive-formats string                                Deprecated: See export_formats
           --drive-impersonate string                            Impersonate this user when using a service account
           --drive-import-formats string                         Comma separated list of preferred formats for uploading Google docs
    @@ -8997,6 +10378,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --ftp-pass string                                     FTP password (obscured)
           --ftp-port int                                        FTP port number (default 21)
           --ftp-shut-timeout Duration                           Maximum time to wait for data connection closing status (default 1m0s)
    +      --ftp-socks-proxy string                              Socks 5 proxy host
           --ftp-tls                                             Use Implicit FTPS (FTP over TLS)
           --ftp-tls-cache-size int                              Size of TLS session cache for all control and data connections (default 32)
           --ftp-user string                                     FTP username (default "$USER")
    @@ -9065,10 +10447,15 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --internetarchive-front-endpoint string               Host of InternetArchive Frontend (default "https://archive.org")
           --internetarchive-secret-access-key string            IAS3 Secret Key (password)
           --internetarchive-wait-archive Duration               Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s)
    +      --jottacloud-auth-url string                          Auth server URL
    +      --jottacloud-client-id string                         OAuth Client Id
    +      --jottacloud-client-secret string                     OAuth Client Secret
           --jottacloud-encoding MultiEncoder                    The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
           --jottacloud-hard-delete                              Delete files permanently rather than putting them into the trash
           --jottacloud-md5-memory-limit SizeSuffix              Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
           --jottacloud-no-versions                              Avoid server side versioning by deleting files and recreating files instead of overwriting them
    +      --jottacloud-token string                             OAuth Access Token as a JSON blob
    +      --jottacloud-token-url string                         Token server url
           --jottacloud-trashed-only                             Only show files that are in the trash
       --jottacloud-upload-resume-limit SizeSuffix           Files bigger than this can be resumed if the upload fails (default 10Mi)
           --koofr-encoding MultiEncoder                         The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
    @@ -9089,13 +10476,18 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --local-nounc                                         Disable UNC (long path names) conversion on Windows
           --local-unicode-normalization                         Apply unicode NFC normalization to paths and filenames
           --local-zero-size-links                               Assume the Stat size of links is zero (and read them instead) (deprecated)
    +      --mailru-auth-url string                              Auth server URL
           --mailru-check-hash                                   What should copy do if file checksum is mismatched or invalid (default true)
    +      --mailru-client-id string                             OAuth Client Id
    +      --mailru-client-secret string                         OAuth Client Secret
           --mailru-encoding MultiEncoder                        The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
           --mailru-pass string                                  Password (obscured)
           --mailru-speedup-enable                               Skip full upload if there is another file with same data hash (default true)
           --mailru-speedup-file-patterns string                 Comma separated list of file name patterns eligible for speedup (put by hash) (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf")
           --mailru-speedup-max-disk SizeSuffix                  This option allows you to disable speedup (put by hash) for large files (default 3Gi)
           --mailru-speedup-max-memory SizeSuffix                Files larger than the size given below will always be hashed on disk (default 32Mi)
    +      --mailru-token string                                 OAuth Access Token as a JSON blob
    +      --mailru-token-url string                             Token server url
           --mailru-user string                                  User name (usually email)
           --mega-debug                                          Output more debug from Mega
           --mega-encoding MultiEncoder                          The encoding for the backend (default Slash,InvalidUtf8,Dot)
    @@ -9129,6 +10521,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --onedrive-server-side-across-configs                 Deprecated: use --server-side-across-configs instead
           --onedrive-token string                               OAuth Access Token as a JSON blob
           --onedrive-token-url string                           Token server url
    +      --oos-attempt-resume-upload                           If true attempt to resume previously started multipart upload for the object
           --oos-chunk-size SizeSuffix                           Chunk size to use for uploading (default 5Mi)
           --oos-compartment string                              Object storage compartment OCID
           --oos-config-file string                              Path to OCI config file (default "~/.oci/config")
    @@ -9138,7 +10531,8 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --oos-disable-checksum                                Don't store MD5 checksum with object metadata
           --oos-encoding MultiEncoder                           The encoding for the backend (default Slash,InvalidUtf8,Dot)
           --oos-endpoint string                                 Endpoint for Object storage API
    -      --oos-leave-parts-on-error                            If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
    +      --oos-leave-parts-on-error                            If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery
    +      --oos-max-upload-parts int                            Maximum number of parts in a multipart upload (default 10000)
           --oos-namespace string                                Object storage namespace
           --oos-no-check-bucket                                 If set, don't attempt to check the bucket exists or create it
           --oos-provider string                                 Choose your Auth Provider (default "env_auth")
    @@ -9177,8 +10571,27 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --pikpak-trashed-only                                 Only show files that are in the trash
           --pikpak-use-trash                                    Send files to the trash instead of deleting permanently (default true)
           --pikpak-user string                                  Pikpak username
    +      --premiumizeme-auth-url string                        Auth server URL
    +      --premiumizeme-client-id string                       OAuth Client Id
    +      --premiumizeme-client-secret string                   OAuth Client Secret
           --premiumizeme-encoding MultiEncoder                  The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
    +      --premiumizeme-token string                           OAuth Access Token as a JSON blob
    +      --premiumizeme-token-url string                       Token server url
    +      --protondrive-2fa string                              The 2FA code
    +      --protondrive-app-version string                      The app version string (default "macos-drive@1.0.0-alpha.1+rclone")
    +      --protondrive-enable-caching                          Caches the files and folders metadata to reduce API calls (default true)
    +      --protondrive-encoding MultiEncoder                   The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot)
    +      --protondrive-mailbox-password string                 The mailbox password of your two-password proton account (obscured)
    +      --protondrive-original-file-size                      Return the file size before encryption (default true)
    +      --protondrive-password string                         The password of your proton account (obscured)
    +      --protondrive-replace-existing-draft                  Create a new revision when filename conflict is detected
    +      --protondrive-username string                         The username of your proton account
    +      --putio-auth-url string                               Auth server URL
    +      --putio-client-id string                              OAuth Client Id
    +      --putio-client-secret string                          OAuth Client Secret
           --putio-encoding MultiEncoder                         The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
    +      --putio-token string                                  OAuth Access Token as a JSON blob
    +      --putio-token-url string                              Token server url
           --qingstor-access-key-id string                       QingStor Access Key ID
           --qingstor-chunk-size SizeSuffix                      Chunk size to use for uploading (default 4Mi)
           --qingstor-connection-retries int                     Number of connection retries (default 3)
    @@ -9189,6 +10602,13 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --qingstor-upload-concurrency int                     Concurrency for multipart uploads (default 1)
           --qingstor-upload-cutoff SizeSuffix                   Cutoff for switching to chunked upload (default 200Mi)
           --qingstor-zone string                                Zone to connect to
    +      --quatrix-api-key string                              API key for accessing Quatrix account
    +      --quatrix-effective-upload-time string                Wanted upload time for one chunk (default "4s")
    +      --quatrix-encoding MultiEncoder                       The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
    +      --quatrix-hard-delete                                 Delete files permanently rather than putting them into the trash
    +      --quatrix-host string                                 Host name of Quatrix account
    +      --quatrix-maximal-summary-chunk-size SizeSuffix       The maximal summary for all chunks. It should not be less than 'transfers'*'minimal_chunk_size' (default 95.367Mi)
    +      --quatrix-minimal-chunk-size SizeSuffix               The minimal size for one chunk (default 9.537Mi)
           --s3-access-key-id string                             AWS Access Key ID
           --s3-acl string                                       Canned ACL used when creating buckets and storing or copying objects
           --s3-bucket-acl string                                Canned ACL used when creating buckets
    @@ -9209,8 +10629,6 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --s3-list-version int                                 Version of ListObjects to use: 1,2 or 0 for auto
           --s3-location-constraint string                       Location constraint - must be set to match the Region
           --s3-max-upload-parts int                             Maximum number of parts in a multipart upload (default 10000)
    -      --s3-memory-pool-flush-time Duration                  How often internal memory buffer pools will be flushed (default 1m0s)
    -      --s3-memory-pool-use-mmap                             Whether to use mmap buffers in internal memory pool
           --s3-might-gzip Tristate                              Set this if the backend might gzip objects (default unset)
           --s3-no-check-bucket                                  If set, don't attempt to check the bucket exists or create it
           --s3-no-head                                          If set, don't HEAD uploaded objects to check integrity
    @@ -9276,14 +10694,21 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --sftp-sha1sum-command string                         The command used to read sha1 hashes
           --sftp-shell-type string                              The type of SSH shell on remote server, if any
           --sftp-skip-links                                     Set to skip any symlinks and any other non regular files
    +      --sftp-socks-proxy string                             Socks 5 proxy host
    +      --sftp-ssh SpaceSepList                               Path and arguments to external ssh binary
           --sftp-subsystem string                               Specifies the SSH2 subsystem on the remote host (default "sftp")
           --sftp-use-fstat                                      If set use fstat instead of stat
           --sftp-use-insecure-cipher                            Enable the use of insecure ciphers and key exchange methods
           --sftp-user string                                    SSH username (default "$USER")
    +      --sharefile-auth-url string                           Auth server URL
           --sharefile-chunk-size SizeSuffix                     Upload chunk size (default 64Mi)
    +      --sharefile-client-id string                          OAuth Client Id
    +      --sharefile-client-secret string                      OAuth Client Secret
           --sharefile-encoding MultiEncoder                     The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
           --sharefile-endpoint string                           Endpoint for API calls
           --sharefile-root-folder-id string                     ID of the root folder
    +      --sharefile-token string                              OAuth Access Token as a JSON blob
    +      --sharefile-token-url string                          Token server url
           --sharefile-upload-cutoff SizeSuffix                  Cutoff for switching to multipart upload (default 128Mi)
           --sia-api-password string                             Sia Daemon API Password (obscured)
           --sia-api-url string                                  Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980")
    @@ -9627,10 +11052,16 @@ Optional Flags:
                                     If exceeded, the bisync run will abort. (default: 50%)
           --force                   Bypass `--max-delete` safety check and run the sync.
                                     Consider using with `--verbose`
    +      --create-empty-src-dirs   Sync creation and deletion of empty directories. 
    +                                  (Not compatible with --remove-empty-dirs)
           --remove-empty-dirs       Remove empty directories at the final cleanup step.
       -1, --resync                  Performs the resync run.
                                     Warning: Path1 files may overwrite Path2 versions.
                                     Consider using `--verbose` or `--dry-run` first.
    +      --ignore-listing-checksum Do not use checksums for listings 
    +                                  (add --ignore-checksum to additionally skip post-copy checksum checks)
    +      --resilient               Allow future runs to retry after certain less-serious errors, 
    +                                  instead of requiring --resync. Use at your own risk!
           --localtime               Use local time in listings (default: UTC)
           --no-cleanup              Retain working files (useful for troubleshooting and testing).
           --workdir PATH            Use custom working directory (useful for testing).
    @@ -9642,31 +11073,54 @@ Optional Flags:
     

Arbitrary rclone flags may be specified on the bisync command line, for example rclone bisync ./testdir/path1/ gdrive:testdir/path2/ --drive-skip-gdocs -v -v --timeout 10s. Note that the interactions of various rclone flags with the bisync process flow have not been fully tested yet.

    Paths

    Path1 and Path2 arguments may be references to any mix of local directory paths (absolute or relative), UNC paths (//server/share/path), Windows drive paths (with a drive letter and :) or configured remotes with optional subdirectory paths. Cloud references are distinguished by having a : in the argument (see Windows support below).

    -

    Path1 and Path2 are treated equally, in that neither has priority for file changes, and access efficiency does not change whether a remote is on Path1 or Path2.

    +

    Path1 and Path2 are treated equally, in that neither has priority for file changes (except during --resync), and access efficiency does not change whether a remote is on Path1 or Path2.

The listings in the bisync working directory (default: ~/.cache/rclone/bisync) are named based on the Path1 and Path2 arguments, so that separate syncs to individual directories within the tree may be set up, e.g.: path_to_local_tree..dropbox_subdir.lst.

    -

    Any empty directories after the sync on both the Path1 and Path2 filesystems are not deleted by default. If the --remove-empty-dirs flag is specified, then both paths will have any empty directories purged as the last step in the process.

    +

By default, any empty directories left on the Path1 and Path2 filesystems after the sync are not deleted, unless --create-empty-src-dirs is specified. If the --remove-empty-dirs flag is specified, then both paths will have ALL empty directories purged as the last step in the process.

    Command-line flags

    --resync

    -

    This will effectively make both Path1 and Path2 filesystems contain a matching superset of all files. Path2 files that do not exist in Path1 will be copied to Path1, and the process will then sync the Path1 tree to Path2.

    -

    The base directories on the both Path1 and Path2 filesystems must exist or bisync will fail. This is required for safety - that bisync can verify that both paths are valid.

    -

    When using --resync, a newer version of a file either on Path1 or Path2 filesystem, will overwrite the file on the other path (only the last version will be kept). Carefully evaluate deltas using --dry-run.

    +

    This will effectively make both Path1 and Path2 filesystems contain a matching superset of all files. Path2 files that do not exist in Path1 will be copied to Path1, and the process will then copy the Path1 tree to Path2.

    +

    The --resync sequence is roughly equivalent to:

    +
    rclone copy Path2 Path1 --ignore-existing
    +rclone copy Path1 Path2
    +

    Or, if using --create-empty-src-dirs:

    +
    rclone copy Path2 Path1 --ignore-existing
    +rclone copy Path1 Path2 --create-empty-src-dirs
    +rclone copy Path2 Path1 --create-empty-src-dirs
    +

The base directories on both the Path1 and Path2 filesystems must exist or bisync will fail. This is required for safety, so that bisync can verify that both paths are valid.

    +

    When using --resync, a newer version of a file on the Path2 filesystem will be overwritten by the Path1 filesystem version. (Note that this is NOT entirely symmetrical.) Carefully evaluate deltas using --dry-run.

    For a resync run, one of the paths may be empty (no files in the path tree). The resync run should result in files on both paths, else a normal non-resync run will fail.

For a non-resync run, either path being empty (no files in the tree) fails with Empty current PathN listing. Cannot sync to an empty directory: X.pathN.lst. This is a safety check that prevents an unexpected empty path from resulting in deleting everything in the other path.
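
Putting this together, a typical first-time setup might look like the following (the paths are placeholders):

    # preview the initial resync, then perform it
    rclone bisync /path/to/local remote:path --resync --dry-run
    rclone bisync /path/to/local remote:path --resync
    # subsequent runs no longer need --resync
    rclone bisync /path/to/local remote:path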

    --check-access

    -

    Access check files are an additional safety measure against data loss. bisync will ensure it can find matching RCLONE_TEST files in the same places in the Path1 and Path2 filesystems. RCLONE_TEST files are not generated automatically. For --check-accessto succeed, you must first either: A) Place one or more RCLONE_TEST files in the Path1 or Path2 filesystem and then do either a run without --check-access or a --resync to set matching files on both filesystems, or B) Set --check-filename to a filename already in use in various locations throughout your sync'd fileset. Time stamps and file contents are not important, just the names and locations. If you have symbolic links in your sync tree it is recommended to place RCLONE_TEST files in the linked-to directory tree to protect against bisync assuming a bunch of deleted files if the linked-to tree should not be accessible. See also the --check-filename flag.

    +

Access check files are an additional safety measure against data loss. bisync will ensure it can find matching RCLONE_TEST files in the same places in the Path1 and Path2 filesystems. RCLONE_TEST files are not generated automatically. For --check-access to succeed, you must first either: A) Place one or more RCLONE_TEST files in both filesystems, or B) Set --check-filename to a filename already in use in various locations throughout your sync'd fileset. Recommended methods for A) include:

• rclone touch Path1/RCLONE_TEST (create a new file)
• rclone copyto Path1/RCLONE_TEST Path2/RCLONE_TEST (copy an existing file)
• rclone copy Path1/RCLONE_TEST Path2/RCLONE_TEST --include "RCLONE_TEST" (copy multiple files at once, recursively)
• creating the files manually (outside of rclone)
• running bisync once without --check-access to set matching files on both filesystems (this also works, but is not preferred, due to the potential for user error, since you are temporarily disabling the safety feature)

    +

    Note that --check-access is still enforced on --resync, so bisync --resync --check-access will not work as a method of initially setting the files (this is to ensure that bisync can't inadvertently circumvent its own safety switch.)

    +

    Time stamps and file contents for RCLONE_TEST files are not important, just the names and locations. If you have symbolic links in your sync tree it is recommended to place RCLONE_TEST files in the linked-to directory tree to protect against bisync assuming a bunch of deleted files if the linked-to tree should not be accessible. See also the --check-filename flag.
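
Putting method A) together, a minimal sequence might be (the paths are placeholders):

    # create the check file on Path1, mirror it to Path2, then run with the check enabled
    rclone touch /path/to/local/RCLONE_TEST
    rclone copyto /path/to/local/RCLONE_TEST remote:path/RCLONE_TEST
    rclone bisync /path/to/local remote:path --check-access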

    --check-filename

    Name of the file(s) used in access health validation. The default --check-filename is RCLONE_TEST. One or more files having this filename must exist, synchronized between your source and destination filesets, in order for --check-access to succeed. See --check-access for additional details.

    --max-delete

    -

    As a safety check, if greater than the --max-delete percent of files were deleted on either the Path1 or Path2 filesystem, then bisync will abort with a warning message, without making any changes. The default --max-delete is 50%. One way to trigger this limit is to rename a directory that contains more than half of your files. This will appear to bisync as a bunch of deleted files and a bunch of new files. This safety check is intended to block bisync from deleting all of the files on both filesystems due to a temporary network access issue, or if the user had inadvertently deleted the files on one side or the other. To force the sync either set a different delete percentage limit, e.g. --max-delete 75 (allows up to 75% deletion), or use --force to bypass the check.

    +

    As a safety check, if greater than the --max-delete percent of files were deleted on either the Path1 or Path2 filesystem, then bisync will abort with a warning message, without making any changes. The default --max-delete is 50%. One way to trigger this limit is to rename a directory that contains more than half of your files. This will appear to bisync as a bunch of deleted files and a bunch of new files. This safety check is intended to block bisync from deleting all of the files on both filesystems due to a temporary network access issue, or if the user had inadvertently deleted the files on one side or the other. To force the sync, either set a different delete percentage limit, e.g. --max-delete 75 (allows up to 75% deletion), or use --force to bypass the check.
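
For example (the paths are placeholders):

    # allow up to 75% of files to be deleted in a single run
    rclone bisync /path/to/local remote:path --max-delete 75
    # or bypass the safety check entirely (use with care)
    rclone bisync /path/to/local remote:path --force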

    Also see the all files changed check.

    --filters-file

By using rclone filter features you can exclude file types or directory sub-trees from the sync. See the bisync filters section and generic --filter-from documentation. An example filters file contains filters for non-allowed files for syncing with Dropbox.

    -

    If you make changes to your filters file then bisync requires a run with --resync. This is a safety feature, which avoids existing files on the Path1 and/or Path2 side from seeming to disappear from view (since they are excluded in the new listings), which would fool bisync into seeing them as deleted (as compared to the prior run listings), and then bisync would proceed to delete them for real.

    -

    To block this from happening bisync calculates an MD5 hash of the filters file and stores the hash in a .md5 file in the same place as your filters file. On the next runs with --filters-file set, bisync re-calculates the MD5 hash of the current filters file and compares it to the hash stored in .md5 file. If they don't match the run aborts with a critical error and thus forces you to do a --resync, likely avoiding a disaster.

    +

    If you make changes to your filters file then bisync requires a run with --resync. This is a safety feature, which prevents existing files on the Path1 and/or Path2 side from seeming to disappear from view (since they are excluded in the new listings), which would fool bisync into seeing them as deleted (as compared to the prior run listings), and then bisync would proceed to delete them for real.

    +

    To block this from happening, bisync calculates an MD5 hash of the filters file and stores the hash in a .md5 file in the same place as your filters file. On the next run with --filters-file set, bisync re-calculates the MD5 hash of the current filters file and compares it to the hash stored in the .md5 file. If they don't match, the run aborts with a critical error and thus forces you to do a --resync, likely avoiding a disaster.
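
A minimal sketch (the filters file name and patterns are hypothetical; remember that editing the file forces a --resync):

    # contents of bisync-filters.txt:
    #   - .DS_Store
    #   - /Backups/**
    rclone bisync /path/to/local remote:path --filters-file bisync-filters.txt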

    --check-sync

Enabled by default, the check-sync function checks that all of the same files exist in both the Path1 and Path2 history listings. This check-sync integrity check is performed at the end of the sync run by default. Any untrapped failing copy/deletes between the two paths might result in differences between the two listings, as well as in untracked file content differences between the two paths. A resync run would correct the error.

Note that the default-enabled integrity check locally loads both the final Path1 and Path2 listings, and thus adds to the run time of a sync. Using --check-sync=false will disable it and may significantly reduce the sync run times for very large numbers of files.

The check may be run manually with --check-sync=only. It runs only the integrity check and terminates without actually syncing.
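
For example (the paths are placeholders):

    # run only the integrity check, without syncing anything
    rclone bisync /path/to/local remote:path --check-sync=only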

    +

    See also: Concurrent modifications

    +

    --ignore-listing-checksum

    +

    By default, bisync will retrieve (or generate) checksums (for backends that support them) when creating the listings for both paths, and store the checksums in the listing files. --ignore-listing-checksum will disable this behavior, which may speed things up considerably, especially on backends (such as local) where hashes must be computed on the fly instead of retrieved. Please note the following:

    +
+
• While checksums are (by default) generated and stored in the listing files, they are NOT currently used for determining diffs (deltas). It is anticipated that full checksum support will be added in a future version.
• --ignore-listing-checksum is NOT the same as --ignore-checksum, and you may wish to use one or the other, or both. In a nutshell: --ignore-listing-checksum controls whether checksums are considered when scanning for diffs, while --ignore-checksum controls whether checksums are considered during the copy/sync operations that follow, if there ARE diffs.
• Unless --ignore-listing-checksum is passed, bisync currently computes hashes for one path even when there's no common hash with the other path (for example, a crypt remote.)
• If both paths support checksums and have a common hash, AND --ignore-listing-checksum was not specified when creating the listings, --check-sync=only can be used to compare Path1 vs. Path2 checksums (as of the time the previous listings were created.) However, --check-sync=only will NOT include checksums if the previous listings were generated on a run using --ignore-listing-checksum. For a more robust integrity check of the current state, consider using check (or cryptcheck, if at least one path is a crypt remote.)
+

    --resilient

    +

    Caution: this is an experimental feature. Use at your own risk!

    +

    By default, most errors or interruptions will cause bisync to abort and require --resync to recover. This is a safety feature, to prevent bisync from running again until a user checks things out. However, in some cases, bisync can go too far and enforce a lockout when one isn't actually necessary, like for certain less-serious errors that might resolve themselves on the next run. When --resilient is specified, bisync tries its best to recover and self-correct, and only requires --resync as a last resort when a human's involvement is absolutely necessary. The intended use case is for running bisync as a background process (such as via scheduled cron).


    When using --resilient mode, bisync will still report the error and abort, however it will not lock out future runs -- allowing the possibility of retrying at the next normally scheduled time, without requiring a --resync first. Examples of such retryable errors include access test failures, missing listing files, and filter change detections. These safety features will still prevent the current run from proceeding -- the difference is that if conditions have improved by the time of the next run, that next run will be allowed to proceed. Certain more serious errors will still enforce a --resync lockout, even in --resilient mode, to prevent data loss.


    Behavior of --resilient may change in a future version.

    Operation

    Runtime flow details

    bisync retains the listings of the Path1 and Path2 filesystems from the prior run. On each successive run it will:

| Type | Description | Result | Implementation |
| --- | --- | --- | --- |
| Path1 new/changed AND Path2 new/changed AND Path1 == Path2 | File is new/changed on Path1 AND new/changed on Path2 AND Path1 version is currently identical to Path2 | No change | None |
| Path1 new AND Path2 new | File is new on Path1 AND new on Path2 (and Path1 version is NOT identical to Path2) | Files renamed to _Path1 and _Path2 | rclone copy _Path2 file to Path1, rclone copy _Path1 file to Path2 |
| Path2 newer AND Path1 changed | File is newer on Path2 AND also changed (newer/older/size) on Path1 (and Path1 version is NOT identical to Path2) | Files renamed to _Path1 and _Path2 | rclone copy _Path2 file to Path1, rclone copy _Path1 file to Path2 |
| Path2 newer AND Path1 deleted | File is newer on Path2 AND also deleted on Path1 | Path2 version survives | rclone copy Path2 to Path1 |
| Path2 deleted AND Path1 changed | File is deleted on Path2 AND changed (newer/older/size) on Path1 | Path1 version survives | rclone copy Path1 to Path2 |
| Path1 deleted AND Path2 changed | File is deleted on Path1 AND changed (newer/older/size) on Path2 | Path2 version survives | rclone copy Path2 to Path1 |

    As of rclone v1.64, bisync is now better at detecting false positive sync conflicts, which would previously have resulted in unnecessary renames and duplicates. Now, when bisync comes to a file that it wants to rename (because it is new/changed on both sides), it first checks whether the Path1 and Path2 versions are currently identical (using the same underlying function as check.) If bisync concludes that the files are identical, it will skip them and move on. Otherwise, it will create renamed ..Path1 and ..Path2 duplicates, as before. This behavior also improves the experience of renaming directories, as a --resync is no longer required, so long as the same change has been made on both sides.
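As an illustration, a hypothetical listing after an unresolved conflict on file.txt might show the two renamed duplicates (the exact suffix follows the convention described above):

$ rclone lsf Path1
file.txt..Path1
file.txt..Path2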

    All files changed check


    If all prior existing files on either of the filesystems have changed (e.g. timestamps have changed due to changing the system's timezone) then bisync will abort without making any changes. Any new files are not considered for this check. You could use --force to force the sync (whichever side has the changed timestamp files wins). Alternately, a --resync may be used (Path1 versions will be pushed to Path2). Consider the situation carefully and perhaps use --dry-run before you commit to the changes.
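In that situation a cautious sequence (paths are placeholders) would be a dry run first, then the forced sync only once you are sure:

rclone bisync Path1 Path2 --dry-run
rclone bisync Path1 Path2 --force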

    Modification time

Bisync relies on file timestamps to identify changed files and will refuse to operate if the backend lacks modification time support.

    If you or your application should change the content of a file without changing the modification time then bisync will not notice the change, and thus will not copy it to the other side.


    Error handling

    Certain bisync critical errors, such as file copy/move failing, will result in a bisync lockout of following runs. The lockout is asserted because the sync status and history of the Path1 and Path2 filesystems cannot be trusted, so it is safer to block any further changes until someone checks things out. The recovery is to do a --resync again.

    It is recommended to use --resync --dry-run --verbose initially and carefully review what changes will be made before running the --resync without --dry-run.


    Most of these events come up due to an error status from an internal call. On such a critical error the {...}.path1.lst and {...}.path2.lst listing files are renamed to extension .lst-err, which blocks any future bisync runs (since the normal .lst files are not found). Bisync keeps them under bisync subdirectory of the rclone cache directory, typically at ${HOME}/.cache/rclone/bisync/ on Linux.

Some errors are considered temporary and re-running bisync is not blocked. The critical return blocks further bisync runs.


    See also: --resilient

    Lock file

When bisync is running, a lock file is created in the bisync working directory, typically at ~/.cache/rclone/bisync/PATH1..PATH2.lck on Linux. If bisync should crash or hang, the lock file will remain in place and block any further runs of bisync for the same paths. Delete the lock file as part of debugging the situation. The lock file effectively blocks follow-on (e.g., scheduled by cron) runs when the prior invocation is taking a long time. The lock file contains the PID of the blocking process, which may help in debugging.
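For example, on Linux a stale lock could be inspected and then removed like this (the .lck filename shown is a placeholder derived from your actual paths):

cat ~/.cache/rclone/bisync/PATH1..PATH2.lck
rm ~/.cache/rclone/bisync/PATH1..PATH2.lck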

    Note that while concurrent bisync runs are allowed, be very cautious that there is no overlap in the trees being synched between concurrent runs, lest there be replicated files, deleted files and general mayhem.


    Supported backends

Bisync is considered BETA and has been tested with the following backends:

• Local filesystem
• Google Drive
• Dropbox
• OneDrive
• S3
• SFTP
• Yandex Disk

    It has not been fully tested with other services yet. If it works, or sorta works, please let us know and we'll update the list. Run the test suite to check for proper operation as described below.


    First release of rclone bisync requires that underlying backend supports the modification time feature and will refuse to run otherwise. This limitation will be lifted in a future rclone bisync release.

    Concurrent modifications

When using Local, FTP or SFTP remotes rclone does not create temporary files at the destination when copying, and thus if the connection is lost the created file may be corrupt, which will likely propagate back to the original path on the next sync, resulting in data loss. This will be solved in a future release; there is no workaround at the moment.


    Files that change during a bisync run may result in data loss. This has been seen in a highly dynamic environment, where the filesystem is getting hammered by running processes during the sync. The currently recommended solution is to sync at quiet times or filter out unnecessary directories and files.


    As an alternative approach, consider using --check-sync=false (and possibly --resilient) to make bisync more forgiving of filesystems that change during the sync. Be advised that this may cause bisync to miss events that occur during a bisync run, so it is a good idea to supplement this with a periodic independent integrity check, and corrective sync if diffs are found. For example, a possible sequence could look like this:

1. Normally scheduled bisync run:

rclone bisync Path1 Path2 -MPc --check-access --max-delete 10 --filters-file /path/to/filters.txt -v --check-sync=false --no-cleanup --ignore-listing-checksum --disable ListR --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient

2. Periodic independent integrity check (perhaps scheduled nightly or weekly):

rclone check -MvPc Path1 Path2 --filter-from /path/to/filters.txt

3. If diffs are found, you have some choices to correct them. If one side is more up-to-date and you want to make the other side match it, you could run:

rclone sync Path1 Path2 --filter-from /path/to/filters.txt --create-empty-src-dirs -MPc -v

    (or switch Path1 and Path2 to make Path2 the source-of-truth)


    Or, if neither side is totally up-to-date, you could run a --resync to bring them back into agreement (but remember that this could cause deleted files to re-appear.)


Note also that rclone check does not currently include empty directories, so if you want to know if any empty directories are out of sync, consider alternatively running the above rclone sync command with --dry-run added.

    Empty directories


By default, new/deleted empty directories on one path are not propagated to the other side. This is because bisync (and rclone) natively works on files, not directories. However, this can be changed with the --create-empty-src-dirs flag, which works in much the same way as in sync and copy. When used, empty directories created or deleted on one side will also be created or deleted on the other side. The following should be noted:

• --create-empty-src-dirs is not compatible with --remove-empty-dirs. Use only one or the other (or neither).
• It is not recommended to switch back and forth between --create-empty-src-dirs and the default (no --create-empty-src-dirs) without running --resync. This is because it may appear as though all directories (not just the empty ones) were created/deleted, when actually you've just toggled between making them visible/invisible to bisync. It looks scarier than it is, but it's still probably best to stick to one or the other, and use --resync when you need to switch.

    Renamed directories


Renaming a folder on the Path1 side results in deleting all files on the Path2 side and then copying all files again from Path1 to Path2. Bisync sees all files in the old directory as deleted and all files in the new directory as new. Currently, the most effective and efficient method of renaming a directory is to rename it to the same name on both sides. (As of rclone v1.64, a --resync is no longer required after doing so, as bisync will automatically detect that Path1 and Path2 are in agreement.)


    --fast-list used by default


    Unlike most other rclone commands, bisync uses --fast-list by default, for backends that support it. In many cases this is desirable, however, there are some scenarios in which bisync could be faster without --fast-list, and there is also a known issue concerning Google Drive users with many empty directories. For now, the recommended way to avoid using --fast-list is to add --disable ListR to all bisync commands. The default behavior may change in a future version.
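In other words, to opt out of --fast-list for now (paths are placeholders):

rclone bisync Path1 Path2 --disable ListR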


    Overridden Configs


    When rclone detects an overridden config, it adds a suffix like {ABCDE} on the fly to the internal name of the remote. Bisync follows suit by including this suffix in its listing filenames. However, this suffix does not necessarily persist from run to run, especially if different flags are provided. So if next time the suffix assigned is {FGHIJ}, bisync will get confused, because it's looking for a listing file with {FGHIJ}, when the file it wants has {ABCDE}. As a result, it throws Bisync critical error: cannot find prior Path1 or Path2 listings, likely due to critical error on prior run and refuses to run again until the user runs a --resync (unless using --resilient). The best workaround at the moment is to set any backend-specific flags in the config file instead of specifying them with command flags. (You can still override them as needed for other rclone commands.)
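As a sketch of that workaround, a backend flag such as --drive-pacer-min-sleep would move from the command line into the config file (the remote name and value here are illustrative):

[MyDrive]
type = drive
pacer_min_sleep = 10ms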

    Case sensitivity


    Synching with case-insensitive filesystems, such as Windows or Box, can result in file name conflicts. This will be fixed in a future release. The near-term workaround is to make sure that files on both sides don't have spelling case differences (Smile.jpg vs. smile.jpg).

    Windows support

    Bisync has been tested on Windows 8.1, Windows 10 Pro 64-bit and on Windows GitHub runners.

Drive letters are allowed, including drive letters mapped to network drives (rclone bisync J:\localsync GDrive:). If a drive letter is omitted, the shell current drive is the default. Drive letters are a single character followed by :, so cloud names must be more than one character long.

  • Excluding such dirs first will make rclone operations (much) faster.
  • Specific files may also be excluded, as with the Dropbox exclusions example below.
  • Decide if it's easier (or cleaner) to:

    Updating golden results

  • jwink3101/syncrclone
  • DavideRossi/upback

    Bisync adopts the differential synchronization technique, which is based on keeping history of changes performed by both synchronizing sides. See the Dual Shadow Method section in Neil Fraser's article.

    Also note a number of academic publications by Benjamin Pierce about Unison and synchronization in general.


    Changelog


    v1.64


    Release signing


    The hashes of the binary artefacts of the rclone release are signed with a public PGP/GPG key. This can be verified manually as described below.


    The same mechanism is also used by rclone selfupdate to verify that the release has not been tampered with before the new update is installed. This checks the SHA256 hash and the signature with a public key compiled into the rclone binary.
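So, assuming you installed a signed release build, updating in place runs the same verification automatically:

rclone selfupdate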


    Release signing key


    You may obtain the release signing key from:


    After importing the key, verify that the fingerprint of one of the keys matches: FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA as this key is used for signing.
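One way to display the fingerprint of a key you have already imported, for comparison against the value above:

gpg --fingerprint FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA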


    We recommend that you cross-check the fingerprint shown above through the domains listed below. By cross-checking the integrity of the fingerprint across multiple domains you can be confident that you obtained the correct key.


If you find anything that doesn't match, please contact the developers at once.


    How to verify the release


    In the release directory you will see the release files and some files called MD5SUMS, SHA1SUMS and SHA256SUMS.

$ rclone lsf --http-url https://downloads.rclone.org/v1.63.1 :http:
MD5SUMS
SHA1SUMS
SHA256SUMS
rclone-v1.63.1-freebsd-386.zip
rclone-v1.63.1-freebsd-amd64.zip
...
rclone-v1.63.1-windows-arm64.zip
rclone-v1.63.1.tar.gz
version.txt

    The MD5SUMS, SHA1SUMS and SHA256SUMS contain hashes of the binary files in the release directory along with a signature.


    For example:

$ rclone cat --http-url https://downloads.rclone.org/v1.63.1 :http:SHA256SUMS
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

f6d1b2d7477475ce681bdce8cb56f7870f174cb6b2a9ac5d7b3764296ea4a113  rclone-v1.63.1-freebsd-386.zip
7266febec1f01a25d6575de51c44ddf749071a4950a6384e4164954dff7ac37e  rclone-v1.63.1-freebsd-amd64.zip
...
66ca083757fb22198309b73879831ed2b42309892394bf193ff95c75dff69c73  rclone-v1.63.1-windows-amd64.zip
bbb47c16882b6c5f2e8c1b04229378e28f68734c613321ef0ea2263760f74cd0  rclone-v1.63.1-windows-arm64.zip
-----BEGIN PGP SIGNATURE-----

iF0EARECAB0WIQT79zfs6firGGBL0qyTk14C/ztU+gUCZLVKJQAKCRCTk14C/ztU
+pZuAJ0XJ+QWLP/3jCtkmgcgc4KAwd/rrwCcCRZQ7E+oye1FPY46HOVzCFU3L7g=
=8qrL
-----END PGP SIGNATURE-----

    Download the files


    The first step is to download the binary and SUMs file and verify that the SUMs you have downloaded match. Here we download rclone-v1.63.1-windows-amd64.zip - choose the binary (or binaries) appropriate to your architecture. We've also chosen the SHA256SUMS as these are the most secure. You could verify the other types of hash also for extra security. rclone selfupdate verifies just the SHA256SUMS.

$ mkdir /tmp/check
$ cd /tmp/check
$ rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:SHA256SUMS .
$ rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:rclone-v1.63.1-windows-amd64.zip .

    Verify the signatures


    First verify the signatures on the SHA256 file.


    Import the key. See above for ways to verify this key is correct.

$ gpg --keyserver keyserver.ubuntu.com --receive-keys FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA
gpg: key 93935E02FF3B54FA: public key "Nick Craig-Wood <nick@craig-wood.com>" imported
gpg: Total number processed: 1
gpg:               imported: 1

    Then check the signature:

$ gpg --verify SHA256SUMS
gpg: Signature made Mon 17 Jul 2023 15:03:17 BST
gpg:                using DSA key FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA
gpg: Good signature from "Nick Craig-Wood <nick@craig-wood.com>" [ultimate]

    Verify the signature was good and is using the fingerprint shown above.


    Repeat for MD5SUMS and SHA1SUMS if desired.


    Verify the hashes


    Now that we know the signatures on the hashes are OK we can verify the binaries match the hashes, completing the verification.

$ sha256sum -c SHA256SUMS 2>&1 | grep OK
rclone-v1.63.1-windows-amd64.zip: OK

    Or do the check with rclone

$ rclone hashsum sha256 -C SHA256SUMS rclone-v1.63.1-windows-amd64.zip
2023/09/11 10:53:58 NOTICE: SHA256SUMS: improperly formatted checksum line 0
2023/09/11 10:53:58 NOTICE: SHA256SUMS: improperly formatted checksum line 1
2023/09/11 10:53:58 NOTICE: SHA256SUMS: improperly formatted checksum line 49
2023/09/11 10:53:58 NOTICE: SHA256SUMS: 4 warning(s) suppressed...
= rclone-v1.63.1-windows-amd64.zip
2023/09/11 10:53:58 NOTICE: Local file system at /tmp/check: 0 differences found
2023/09/11 10:53:58 NOTICE: Local file system at /tmp/check: 1 matching files

    Verify signatures and hashes together


    You can verify the signatures and hashes in one command line like this:

$ gpg --decrypt SHA256SUMS | sha256sum -c --ignore-missing
gpg: Signature made Mon 17 Jul 2023 15:03:17 BST
gpg:                using DSA key FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA
gpg: Good signature from "Nick Craig-Wood <nick@craig-wood.com>" [ultimate]
gpg:                 aka "Nick Craig-Wood <nick@memset.com>" [unknown]
rclone-v1.63.1-windows-amd64.zip: OK

    1Fichier

    This is a backend for the 1fichier cloud storage service. Note that a Premium subscription is required to use the API.

    Paths are specified as remote:path

  • IBM COS S3
  • IDrive e2
  • IONOS Cloud
  • Leviia Object Storage
  • Liara Object Storage
  • Minio
  • Petabox
  • SeaweedFS
  • StackPath
  • Storj
  • Synology C2 Object Storage
  • Tencent Cloud Object Storage (COS)
  • Wasabi

    Versions naming caveat


When using the --s3-versions flag, rclone relies on the file name to work out whether the objects are versions or not. Version names are created by inserting a timestamp between the file name and its extension.

        9 file.txt
        8 file-v2023-07-17-161032-000.txt
       16 file-v2023-06-15-141003-000.txt

If there are real files present with the same names as versions, then the behaviour of --s3-versions can be unpredictable.

    Cleanup

If you run rclone cleanup s3:bucket then it will remove all pending multipart uploads older than 24 hours. You can use the --interactive/-i or --dry-run flag to see exactly what it will do. If you want more control over the expiry date then run rclone backend cleanup s3:bucket -o max-age=1h to expire all uploads older than one hour. You can use rclone backend list-multipart-uploads s3:bucket to see the pending multipart uploads.
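For instance, previewing and then expiring stale uploads (the bucket name is a placeholder):

rclone backend cleanup s3:bucket -o max-age=1h --dry-run
rclone backend cleanup s3:bucket -o max-age=1h
rclone backend list-multipart-uploads s3:bucket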

    Restricted filename characters


As mentioned in the Hashes section, small files that are not uploaded as multipart use a different tag, causing the upload to fail. A simple solution is to set --s3-upload-cutoff 0 and force all the files to be uploaded as multipart.
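A minimal sketch of that workaround (source path and bucket are placeholders):

rclone copy --s3-upload-cutoff 0 /path/to/files s3:bucket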

    Standard options


    Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Leviia, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi).

    --s3-provider

    Choose your S3 provider.

    Properties:

• "Leviia"
  • "Liara"

    --s3-region

    +

    Region where your data stored.

    +

    Properties:

    + +

    --s3-region

    Region to connect to.

    Leave blank if you are using an S3 clone and you don't have a region.

    Properties:

--s3-endpoint


    Endpoint for Leviia Object Storage API.


    Properties:


--s3-endpoint

Endpoint for Liara Object Storage API.

Properties:

--s3-endpoint

Endpoint for OSS API.

Properties:

--s3-endpoint

Endpoint for OBS API.

Properties:

--s3-endpoint

Endpoint for Scaleway Object Storage.

Properties:

--s3-endpoint

Endpoint for StackPath Object Storage.

Properties:

--s3-endpoint

Endpoint for Google Cloud Storage.

Properties:

--s3-endpoint

Endpoint for Storj Gateway.

Properties:

--s3-endpoint

Endpoint for Synology C2 Object Storage API.

Properties:

--s3-endpoint

Endpoint for Tencent COS API.

Properties:

--s3-endpoint

Endpoint for RackCorp Object Storage.

Properties:

--s3-endpoint

Endpoint for Qiniu Object Storage.

Properties:

--s3-endpoint

Endpoint for S3 API.

Required when using an S3 clone.

Properties:

    Advanced options


    Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Leviia, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi).

    --s3-bucket-acl

    Canned ACL used when creating buckets.

    For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

  • Default: Slash,InvalidUtf8,Dot
--s3-memory-pool-flush-time

How often internal memory buffer pools will be flushed. (no longer used)

Properties:

--s3-memory-pool-use-mmap

Whether to use mmap buffers in internal memory pool. (no longer used)

Properties:

Metadata

    User metadata is stored as x-amz-meta- keys. S3 metadata keys are case insensitive and are always returned in lower case.

    Here are the possible system metadata items for the s3 backend.

    rclone backend restore remote: [options] [<arguments>+]

    This command can be used to restore one or more objects from GLACIER to normal storage.

    Usage Examples:

rclone backend restore s3:bucket/path/to/object -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS

    This flag also obeys the filters. Test first with --interactive/-i or --dry-run flags

rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1

    All the objects shown will be marked for restore, then

rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1

    It returns a list of status dictionaries with Remote and Status keys. The Status will be OK if it was successful or an error message if not.

    [
         {
             "Status": "OK",
    -        "Path": "test.txt"
    +        "Remote": "test.txt"
         },
         {
             "Status": "OK",
    -        "Path": "test/file4.txt"
    +        "Remote": "test/file4.txt"
         }
     ]

    Options:

• "lifetime": Lifetime of the active copy in days
  • "priority": Priority of restore: Standard|Expedited|Bulk

    restore-status


    Show the restore status for objects being restored from GLACIER to normal storage

rclone backend restore-status remote: [options] [<arguments>+]

    This command can be used to show the status for objects being restored from GLACIER to normal storage.


    Usage Examples:

rclone backend restore-status s3:bucket/path/to/object
rclone backend restore-status s3:bucket/path/to/directory
rclone backend restore-status -o all s3:bucket/path/to/directory

    This command does not obey the filters.


    It returns a list of status dictionaries.

[
    {
        "Remote": "file.txt",
        "VersionID": null,
        "RestoreStatus": {
            "IsRestoreInProgress": true,
            "RestoreExpiryDate": "2023-09-06T12:29:19+01:00"
        },
        "StorageClass": "GLACIER"
    },
    {
        "Remote": "test.pdf",
        "VersionID": null,
        "RestoreStatus": {
            "IsRestoreInProgress": false,
            "RestoreExpiryDate": "2023-09-06T12:29:19+01:00"
        },
        "StorageClass": "DEEP_ARCHIVE"
    }
]

    Options:


    list-multipart-uploads

    List the unfinished multipart uploads

    rclone backend list-multipart-uploads remote: [options] [<arguments>+]
rclone backend versioning s3:bucket Enabled
rclone backend versioning s3:bucket Suspended

    It may return "Enabled", "Suspended" or "Unversioned". Note that once versioning has been enabled the status can't be set back to "Unversioned".


    set


    Set command for updating the config parameters.

rclone backend set remote: [options] [<arguments>+]

    This set command can be used to update the config parameters for a running s3 backend.


    Usage Examples:

rclone backend set s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=s3: -o session_token=X -o access_key_id=X -o secret_access_key=X

    The option keys are named as they are in the config file.


    This rebuilds the connection to the s3 backend when it is called with the new parameters. Only new parameters need be passed as the values will default to those currently in use.


    It doesn't return anything.

    Anonymous access to public buckets

    If you want to use rclone to access a public bucket, configure with a blank access_key_id and secret_access_key. Your config should end up looking like this:

    [anons3]

    Leviia Cloud Object Storage


Leviia Object Storage, backup and secure your data in a 100% French cloud, independent of GAFAM.


    To configure access to Leviia, follow the steps below:

1. Run rclone config and select n for a new remote.

rclone config
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n

2. Give the name of the configuration. For example, name it 'leviia'.

name> leviia

3. Select s3 storage.

Choose a number from below, or type in your own value
 1 / 1Fichier
   \ (fichier)
 2 / Akamai NetStorage
   \ (netstorage)
 3 / Alias for an existing remote
   \ (alias)
 4 / Amazon Drive
   \ (amazon cloud drive)
 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi
   \ (s3)
[snip]
Storage> s3

4. Select Leviia provider.

Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
   \ "AWS"
[snip]
15 / Leviia Object Storage
   \ (Leviia)
[snip]
provider> Leviia

5. Enter your SecretId and SecretKey of Leviia.

Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> ZnIx.xxxxxxxxxxxxxxx
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> xxxxxxxxxxx

6. Select endpoint for Leviia.

   / The default endpoint
 1 | Leviia.
   \ (s3.leviia.com)
[snip]
endpoint> 1

7. Choose acl.

Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
   / Owner gets FULL_CONTROL.
 2 | The AllUsers group gets READ access.
   \ (public-read)
[snip]
acl> 1
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
--------------------
[leviia]
- type: s3
- provider: Leviia
- access_key_id: ZnIx.xxxxxxx
- secret_access_key: xxxxxxxx
- endpoint: s3.leviia.com
- acl: private
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
leviia                s3

    Liara

    Here is an example of making a Liara Object Storage configuration. First run:

    rclone config

    For Netease NOS configure as per the configurator rclone config setting the provider Netease. This will automatically set force_path_style = false which is necessary for it to run properly.

    Petabox

    Here is an example of making a Petabox configuration. First run:

    rclone config

    This will guide you through an interactive setup process.

    No remotes found, make a new one?
     n) New remote

    Limitations

    rclone about is not supported by the S3 backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    See List of backends that do not support rclone about and rclone about

Synology C2 Object Storage

Synology C2 Object Storage provides a secure, S3-compatible, and cost-effective cloud storage solution without API request, download fees, and deletion penalty.

The S3 compatible gateway is configured using rclone config with a type of s3 and with a provider name of Synology. Here is an example run of the configurator.

First run:

rclone config

Backblaze B2

B2 is Backblaze's cloud storage system.

Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.

Configuration

Here is an example of making a b2 configuration. First run

rclone config

This will guide you through an interactive setup process. To authenticate you will either need your Account ID (a short hex number) and Master Application Key (a long hex number) OR an Application Key, which is the recommended method. See below for further details on generating and using an Application Key.

No remotes found, make a new one?
n) New remote
q) Quit config
n/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Backblaze B2
   \ "b2"
[snip]
Storage> b2
Account ID or Application Key ID
account> 123456789abc
Application Key
key> 0123456789abcdef0123456789abcdef0123456789
Endpoint for the service - leave blank normally.
endpoint>
Remote config
--------------------
[remote]
account = 123456789abc
key = 0123456789abcdef0123456789abcdef0123456789
endpoint =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

    This remote is called remote and can now be used like this

See all buckets

rclone lsd remote:

Create a new bucket

rclone mkdir remote:bucket

List the contents of a bucket

rclone ls remote:bucket

Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

rclone sync --interactive /home/local/directory remote:bucket

Application Keys

B2 supports multiple Application Keys for different access permission to B2 Buckets.

You can use these with rclone too; you will need to use rclone version 1.43 or later.

Follow Backblaze's docs to create an Application Key with the required permission and add the applicationKeyId as the account and the Application Key itself as the key.

Note that you must put the applicationKeyId as the account – you can't use the master Account ID. If you try then B2 will return 401 errors.
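A minimal sketch of the resulting config (the values are placeholders): the applicationKeyId goes in account and the Application Key itself in key:

[remote]
type = b2
account = <applicationKeyId>
key = <applicationKey>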

--fast-list

This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

Modified time

The modified time is stored as metadata on the object as X-Bz-Info-src_last_modified_millis as milliseconds since 1970-01-01 in the Backblaze standard. Other tools should be able to use this as a modified time.

Modified times are used in syncing and are fully supported. Note that if a modification time needs to be updated on an object then it will create a new version of the object.

Restricted filename characters

In addition to the default restricted characters set the following characters are also replaced:

| Character | Value | Replacement |
| --------- | ----- | ----------- |
| \         | 0x5C  | ＼           |

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Note that in 2020-05 Backblaze started allowing ＼ characters in file names. Rclone hasn't changed its encoding as this could cause syncs to re-transfer files. If you want rclone not to replace ＼ then see the --b2-encoding flag below and remove the BackSlash from the string. This can be set in the config.

SHA1 checksums

The SHA1 checksums of the files are checked on upload and download and will be used in the syncing process.

Large files (bigger than the limit in --b2-upload-cutoff) which are uploaded in chunks will store their SHA1 on the object as X-Bz-Info-large_file_sha1 as recommended by Backblaze.

For a large file to be uploaded with an SHA1 checksum, the source needs to support SHA1 checksums. The local disk supports SHA1 checksums so large file transfers from local disk will have an SHA1. See the overview for exactly which remotes support SHA1.

Sources which don't support SHA1, in particular crypt will upload large files without SHA1 checksums. This may be fixed in the future (see #1767).

File sizes below --b2-upload-cutoff will always have an SHA1 regardless of the source.

Transfers

Backblaze recommends that you do lots of transfers simultaneously for maximum speed. In tests from my SSD equipped laptop the optimum setting is about --transfers 32 though higher numbers may be used for a slight speed improvement. The optimum number for you may vary depending on your hardware, how big the files are, how much you want to load your computer, etc. The default of --transfers 4 is definitely too low for Backblaze B2 though.

Note that uploading big files (bigger than 200 MiB by default) will use a 96 MiB RAM buffer by default. There can be at most --transfers of these in use at any moment, so this sets the upper limit on the memory used.
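For example, raising the concurrency for a large upload to B2 (the paths are placeholders):

rclone copy --transfers 32 /path/to/files remote:bucket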

Versions

When rclone uploads a new version of a file it creates a new version of it. Likewise when you delete a file, the old version will be marked hidden and still be available. Conversely, you may opt in to a "hard delete" of files with the --b2-hard-delete flag which would permanently remove the file instead of hiding it.

Old versions of files, where available, are visible using the --b2-versions flag.

It is also possible to view a bucket as it was at a certain point in time, using the --b2-version-at flag. This will show the file versions as they were at that time, showing files that have been deleted afterwards, and hiding files that were created since.

If you wish to remove all the old versions then you can use the rclone cleanup remote:bucket command which will delete all the old versions of files, leaving the current ones intact. You can also supply a path and only old versions under that path will be deleted, e.g. rclone cleanup remote:bucket/path/to/stuff.

Note that cleanup will remove partially uploaded files from the bucket if they are more than a day old.

When you purge a bucket, the current and the old versions will be deleted then the bucket will be deleted.

However delete will cause the current versions of the files to become hidden old versions.

Here is a session showing the listing and retrieval of an old version followed by a cleanup of the old versions.

Show current version and all the versions with --b2-versions flag.

$ rclone -q ls b2:cleanup-test
        9 one.txt

$ rclone -q --b2-versions ls b2:cleanup-test
        9 one.txt
        8 one-v2016-07-04-141032-000.txt
       16 one-v2016-07-04-141003-000.txt
       15 one-v2016-07-02-155621-000.txt

Retrieve an old version

$ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp

$ ls -l /tmp/one-v2016-07-04-141003-000.txt
-rw-rw-r-- 1 ncw ncw 16 Jul  2 17:46 /tmp/one-v2016-07-04-141003-000.txt

Clean up all the old versions and show that they've gone.

$ rclone -q cleanup b2:cleanup-test

$ rclone -q ls b2:cleanup-test
        9 one.txt

$ rclone -q --b2-versions ls b2:cleanup-test
        9 one.txt

Data usage

It is useful to know how many requests are sent to the server in different scenarios.

All copy commands send the following 4 requests:

/b2api/v1/b2_authorize_account
/b2api/v1/b2_create_bucket
/b2api/v1/b2_list_buckets
/b2api/v1/b2_list_file_names

The b2_list_file_names request will be sent once for every 1k files in the remote path, providing the checksum and modification time of the listed files. As of version 1.33 issue #818 causes extra requests to be sent when using B2 with Crypt. When a copy operation does not require any files to be uploaded, no more requests will be sent.

Uploading files that do not require chunking will send 2 requests per file upload:

/b2api/v1/b2_get_upload_url
/b2api/v1/b2_upload_file/

Uploading files requiring chunking will send 2 requests (one each to start and finish the upload) and another 2 requests for each chunk:

/b2api/v1/b2_start_large_file
/b2api/v1/b2_get_upload_part_url
/b2api/v1/b2_upload_part/
/b2api/v1/b2_finish_large_file

Versions

Versions can be viewed with the --b2-versions flag. When it is set rclone will show and act on older versions of files. For example

Listing without --b2-versions

$ rclone -q ls b2:cleanup-test
        9 one.txt

And with

$ rclone -q --b2-versions ls b2:cleanup-test
        9 one.txt
        8 one-v2016-07-04-141032-000.txt
       16 one-v2016-07-04-141003-000.txt
       15 one-v2016-07-02-155621-000.txt

Showing that the current version is unchanged but older versions can be seen. These have the UTC date that they were uploaded to the server to the nearest millisecond appended to them.

Note that when using --b2-versions no file write operations are permitted, so you can't upload files or delete them.

Rclone supports generating file share links for private B2 buckets. They can either be for a file for example:

./rclone link B2:bucket/path/to/file.txt
https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx

or if run on a directory you will get:

./rclone link B2:bucket/path
https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx

you can then use the authorization token (the part of the url from the ?Authorization= on) on any file path under that directory. For example:

https://f002.backblazeb2.com/file/bucket/path/to/file1?Authorization=xxxxxxxx
https://f002.backblazeb2.com/file/bucket/path/file2?Authorization=xxxxxxxx
https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxxx

Standard options

Here are the Standard options specific to b2 (Backblaze B2).

--b2-account

Account ID or Application Key ID.

Properties:

--b2-key

Application Key.

Properties:

--b2-hard-delete

Permanently delete files on remote removal, otherwise hide files.

Properties:

    Advanced options

    -

    Here are the Advanced options specific to b2 (Backblaze B2).

    -

    --b2-endpoint

    -

    Endpoint for the service.

    -

    Leave blank normally.

    -

    Properties:

    - -

    --b2-test-mode

    -

    A flag string for X-Bz-Test-Mode header for debugging.

    -

    This is for debugging purposes only. Setting it to one of the strings below will cause b2 to return specific errors:

    - -

    These will be set in the "X-Bz-Test-Mode" header which is documented in the b2 integrations checklist.

    -

    Properties:

    - -

    --b2-versions

    -

    Include old versions in directory listings.

    -

    Note that when using this no file write operations are permitted, so you can't upload files or delete them.

    -

    Properties:

    - -

    --b2-version-at

    -

    Show file versions as they were at the specified time.

    -

    Note that when using this no file write operations are permitted, so you can't upload files or delete them.

    -

    Properties:

    - -

    --b2-upload-cutoff

    -

    Cutoff for switching to chunked upload.

    -

    Files above this size will be uploaded in chunks of "--b2-chunk-size".

    -

    This value should be set no larger than 4.657 GiB (== 5 GB).

    -

    Properties:

    - -

    --b2-copy-cutoff

    -

    Cutoff for switching to multipart copy.

    -

    Any files larger than this that need to be server-side copied will be copied in chunks of this size.

    -

    The minimum is 0 and the maximum is 4.6 GiB.

    -

    Properties:

    - -

    --b2-chunk-size

    -

    Upload chunk size.

    -

    When uploading large files, chunk the file into this size.

    -

    Must fit in memory. These chunks are buffered in memory and there might a maximum of "--transfers" chunks in progress at once.

    -

    5,000,000 Bytes is the minimum size.

    -

    Properties:

    - -

--b2-disable-checksum

Disable checksums for large (> upload cutoff) files.

Normally rclone will calculate the SHA1 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading.

Properties:

--b2-download-url

Custom endpoint for downloads.

This is usually set to a Cloudflare CDN URL as Backblaze offers free egress for data downloaded through the Cloudflare network. Rclone works with private buckets by sending an "Authorization" header. If the custom endpoint rewrites the requests for authentication, e.g., in Cloudflare Workers, this header needs to be handled properly. Leave blank if you want to use the endpoint provided by Backblaze.

The URL provided here SHOULD have the protocol and SHOULD NOT have a trailing slash or specify the /file/bucket subpath as rclone will request files with "{download_url}/file/{bucket_name}/{path}".

Example:

    https://mysubdomain.mydomain.tld

(No trailing "/", "file" or "bucket")

Properties:
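
A minimal config sketch setting a custom download endpoint (the domain is hypothetical; account and key reuse the example values from this chapter):

    [remote]
    type = b2
    account = 123456789abc
    key = 0123456789abcdef0123456789abcdef0123456789
    download_url = https://mysubdomain.mydomain.tld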

--b2-download-auth-duration

Time before the authorization token will expire in s or suffix ms|s|m|h|d.

The duration before the download authorization token will expire. The minimum value is 1 second. The maximum value is one week.

Properties:

--b2-memory-pool-flush-time

How often internal memory buffer pools will be flushed. Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations. This option controls how often unused buffers will be removed from the pool.

Properties:

--b2-memory-pool-use-mmap

Whether to use mmap buffers in internal memory pool.

Properties:

--b2-encoding

The encoding for the backend.

See the encoding section in the overview for more info.

Properties:

Limitations

rclone about is not supported by the B2 backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

See List of backends that do not support rclone about and rclone about

Box

Paths are specified as remote:path

Paths may be as deep as required, e.g. remote:directory/subdirectory.

The initial setup for Box involves getting a token from Box which you can do either in your browser, or with a config.json downloaded from Box to use JWT authentication. rclone config walks you through it.

Configuration

Here is an example of how to make a remote called remote. First run:

    rclone config

This will guide you through an interactive setup process:

    No remotes found, make a new one?
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
    name> remote
    Type of storage to configure.
    Choose a number from below, or type in your own value
    [snip]
    XX / Box
       \ "box"
    [snip]
    Storage> box
    Box App Client Id - leave blank normally.
    client_id>
    Box App Client Secret - leave blank normally.
    client_secret>
    Box App config.json location
    Leave blank normally.
    Enter a string value. Press Enter for the default ("").
    box_config_file>
    Box App Primary Access Token
    Leave blank normally.
    Enter a string value. Press Enter for the default ("").
    access_token>

    Enter a string value. Press Enter for the default ("user").
    Choose a number from below, or type in your own value
     1 / Rclone should act on behalf of a user
       \ "user"
     2 / Rclone should act on behalf of a service account
       \ "enterprise"
    box_sub_type>
    Remote config
    Use web browser to automatically authenticate rclone with remote?
     * Say Y if the machine running rclone has a web browser you can use
     * Say N if running rclone on a (remote) machine without web browser access
    If not sure try Y. If Y failed, try N.
    y) Yes
    n) No
    y/n> y
    If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    Log in and authorize rclone for access
    Waiting for code...
    Got code
    --------------------
    [remote]
    client_id =
    client_secret =
    token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"XXX"}
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y

See the remote setup docs for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from Box. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

Once configured you can then use rclone like this,

List directories in top level of your Box

    rclone lsd remote:

List all the files in your Box

    rclone ls remote:

To copy a local directory to a Box directory called backup

    rclone copy /home/source remote:backup

Using rclone with an Enterprise account with SSO

If you have an "Enterprise" account type with Box with single sign on (SSO), you need to create a password to use Box with rclone. This can be done at your Enterprise Box account by going to Settings, "Account" Tab, and then set the password in the "Authentication" field.

Once you have done this, you can set up your Enterprise Box account using the same procedure detailed above, using the password you have just set.

Invalid refresh token

According to the box docs:

    Each refresh_token is valid for one use in 60 days.

This means that if you

- Don't use the box remote for 60 days
- Copy the config file with a box refresh token in and use it in two places at once
- Get an error on a token refresh

then rclone will return an error which includes the text Invalid refresh token.

To fix this you will need to use oauth2 again to update the refresh token. You can use the methods in the remote setup docs, bearing in mind that if you use the copy the config file method, you should not use that remote on the computer you did the authentication on.

Here is how to do it.

    $ rclone config
    Current remotes:

    Name                 Type
    ====                 ====
    remote               box

    e) Edit existing remote
    n) New remote
    d) Delete remote
    r) Rename remote
    c) Copy remote
    s) Set configuration password
    q) Quit config
    e/n/d/r/c/s/q> e
    Choose a number from below, or type in an existing value
     1 > remote
    remote> remote
    --------------------
    [remote]
    type = box
    token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2017-07-08T23:40:08.059167677+01:00"}
    --------------------
    Edit remote
    Value "client_id" = ""
    Edit? (y/n)>
    y) Yes
    n) No
    y/n> n
    Value "client_secret" = ""
    Edit? (y/n)>
    y) Yes
    n) No
    y/n> n
    Remote config
    Already have a token - refresh?
    y) Yes
    n) No
    y/n> y
    Use web browser to automatically authenticate rclone with remote?
     * Say Y if the machine running rclone has a web browser you can use
     * Say N if running rclone on a (remote) machine without web browser access
    If not sure try Y. If Y failed, try N.
    y) Yes
    n) No
    y/n> y
    If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    Log in and authorize rclone for access
    Waiting for code...
    Got code
    --------------------
    [remote]
    type = box
    token = {"access_token":"YYY","token_type":"bearer","refresh_token":"YYY","expiry":"2017-07-23T12:22:29.259137901+01:00"}
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y

Modified time and hashes

Box allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

Box supports SHA1 type hashes, so you can use the --checksum flag.

Restricted filename characters

In addition to the default restricted characters set the following characters are also replaced:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| \         | 0x5C  | ＼           |

File names can also not end with the following characters. These only get replaced if they are the last character in the name:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| SP        | 0x20  | ␠           |

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Transfers

For files above 50 MiB rclone will use a chunked transfer. Rclone will upload up to --transfers chunks at the same time (shared among all the multipart uploads). Chunks are buffered in memory and are normally 8 MiB so increasing --transfers will increase memory use.

Deleting files

Depending on the enterprise settings for your user, the item will either be actually deleted from Box or moved to the trash.

Emptying the trash is supported via the rclone cleanup command, however this deletes every trashed file and folder individually so it may take a very long time. Emptying the trash via the WebUI does not have this limitation so it is advised to empty the trash via the WebUI.
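
For example (the remote name is illustrative), the trash can be emptied with:

    rclone cleanup remote: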

Root folder ID

You can set the root_folder_id for rclone. This is the directory (identified by its Folder ID) that rclone considers to be the root of your Box drive.

Normally you will leave this blank and rclone will determine the correct root to use itself.

However you can set this to restrict rclone to a specific folder hierarchy.

In order to do this you will have to find the Folder ID of the directory you wish rclone to display. This will be the last segment of the URL when you open the relevant folder in the Box web interface.

So if the folder you want rclone to use has a URL which looks like https://app.box.com/folder/11xxxxxxxxx8 in the browser, then you use 11xxxxxxxxx8 as the root_folder_id in the config.
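
A config sketch using the example Folder ID from above (the token line is omitted):

    [remote]
    type = box
    root_folder_id = 11xxxxxxxxx8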

Standard options

Here are the Standard options specific to box (Box).

--box-client-id

OAuth Client Id.

Leave blank normally.

Properties:

--box-client-secret

OAuth Client Secret.

Leave blank normally.

Properties:

--box-box-config-file

Box App config.json location

Leave blank normally.

Leading ~ will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}.

Properties:

--box-access-token

Box App Primary Access Token

Leave blank normally.

Properties:

--box-box-sub-type

Properties:

Advanced options

Here are the Advanced options specific to box (Box).

--box-token

OAuth Access Token as a JSON blob.

Properties:

--box-auth-url

Auth server URL.

Leave blank to use the provider defaults.

Properties:

--box-token-url

Token server URL.

Leave blank to use the provider defaults.

Properties:

--box-root-folder-id

Fill in for rclone to use a non root folder as its starting point.

Properties:

--box-upload-cutoff

Cutoff for switching to multipart upload (>= 50 MiB).

Properties:

--box-commit-retries

Max number of times to try committing a multipart file.

Properties:

--box-list-chunk

Size of listing chunk 1-1000.

Properties:

--box-owned-by

Only show items owned by the login (email address) passed in.

Properties:

--box-encoding

The encoding for the backend.

See the encoding section in the overview for more info.

Properties:

Limitations

Note that Box is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

Box file names can't have the \ character in. rclone maps this to and from an identical looking unicode equivalent ＼ (U+FF3C Fullwidth Reverse Solidus).

Box only supports filenames up to 255 characters in length.

Box has API rate limits that sometimes reduce the speed of rclone.

rclone about is not supported by the Box backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

See List of backends that do not support rclone about and rclone about

Cache

The cache remote wraps another existing remote and stores file structure and its data for long running tasks like rclone mount.

Status

The cache backend code is working but it currently doesn't have a maintainer so there are outstanding bugs which aren't getting fixed.

The cache backend is due to be phased out in favour of the VFS caching layer eventually which is more tightly integrated into rclone.

Until this happens we recommend only using the cache backend if you find you can't work without it. There are many docs online describing the use of the cache backend to minimize API hits and by-and-large these are out of date and the cache backend isn't needed in those scenarios any more.

Configuration

To get started you just need to have an existing remote which can be configured with cache.

Here is an example of how to make a remote called test-cache. First run:

    rclone config

This will guide you through an interactive setup process:

    No remotes found, make a new one?
    n) New remote
    r) Rename remote
    c) Copy remote
    s) Set configuration password
    q) Quit config
    n/r/c/s/q> n
    name> test-cache
    Type of storage to configure.
    Choose a number from below, or type in your own value
    [snip]
    XX / Cache a remote
       \ "cache"
    [snip]
    Storage> cache
    Remote to cache.
    Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
    "myremote:bucket" or maybe "myremote:" (not recommended).
    remote> local:/test
    Optional: The URL of the Plex server
    plex_url> http://127.0.0.1:32400
    Optional: The username of the Plex user
    plex_username> dummyusername
    Optional: The password of the Plex user
    y) Yes type in my own password
    g) Generate random password
    n) No leave this optional password blank
    y/g/n> y
    Enter the password:
    password:
    Confirm the password:
    password:
    The size of a chunk. Lower value good for slow connections but can affect seamless reading.
    Default: 5M
    Choose a number from below, or type in your own value
     1 / 1 MiB
       \ "1M"
     2 / 5 MiB
       \ "5M"
     3 / 10 MiB
       \ "10M"
    chunk_size> 2
    How much time should object info (file size, file hashes, etc.) be stored in cache. Use a very high value if you don't plan on changing the source FS from outside the cache.
    Accepted units are: "s", "m", "h".
    Default: 5m
    Choose a number from below, or type in your own value
     1 / 1 hour
       \ "1h"
     2 / 24 hours
       \ "24h"
     3 / 48 hours
       \ "48h"
    info_age> 2
    The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.
    Default: 10G
    Choose a number from below, or type in your own value
     1 / 500 MiB
       \ "500M"
     2 / 1 GiB
       \ "1G"
     3 / 10 GiB
       \ "10G"
    chunk_total_size> 3
    Remote config
    --------------------
    [test-cache]
    remote = local:/test
    plex_url = http://127.0.0.1:32400
    plex_username = dummyusername
    plex_password = *** ENCRYPTED ***
    chunk_size = 5M
    info_age = 48h
    chunk_total_size = 10G

You can then use it like this,

List directories in top level of your drive

    rclone lsd test-cache:

List all the files in your drive

    rclone ls test-cache:

To start a cached mount

    rclone mount --allow-other test-cache: /var/tmp/test-cache

Write Features

Offline uploading

In an effort to make writing through cache more reliable, the backend now supports this feature which can be activated by specifying a cache-tmp-upload-path.

A file goes through these states when using this feature:

1. An upload is started (usually by copying a file on the cache remote)
2. When the copy to the temporary location is complete the file is part of the cached remote and looks and behaves like any other file (reading included)
3. After cache-tmp-wait-time passes and the file is next in line, rclone move is used to move the file to the cloud provider
4. Reading the file still works during the upload but most modifications on it will be prohibited
5. Once the move is complete the file is unlocked for modifications as it becomes like any other regular file
6. If the file is being read through cache when it's actually deleted from the temporary path then cache will simply swap the source to the cloud provider without interrupting the reading (small blip can happen though)

Files are uploaded in sequence and only one file is uploaded at a time. Uploads will be stored in a queue and be processed based on the order they were added. The queue and the temporary storage is persistent across restarts but can be cleared on startup with the --cache-db-purge flag.
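
For example (mount point and temporary path are illustrative), offline uploading can be enabled on a mount like this:

    rclone mount --allow-other test-cache: /var/tmp/test-cache \
        --cache-tmp-upload-path /var/tmp/cache-upload \
        --cache-tmp-wait-time 15m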

Write Support

Writes are supported through cache. One caveat is that a mounted cache remote does not add any retry or fallback mechanism to the upload operation. This will depend on the implementation of the wrapped remote. Consider using Offline uploading for reliable writes.

One special case is covered with cache-writes which, when enabled, will cache the file data at the same time as the upload, making it available from the cache store immediately once the upload is finished.

Read Features

Multiple connections

To counter the high latency between a local PC where rclone is running and cloud providers, the cache remote can split a read into multiple smaller requests to the cloud provider for file chunks and combine them locally, making the data available almost immediately, before the reader usually needs it.

This is similar to buffering when media files are played online. Rclone will stay around the current marker but always try its best to stay ahead and prepare the data before.

Plex Integration

There is a direct integration with Plex which allows cache to detect during reading if the file is in playback or not. This helps cache to adapt how it queries the cloud provider depending on what is needed.

Scans will have a minimum amount of workers (1) while in a confirmed playback cache will deploy the configured number of workers.

This integration opens the doorway to additional performance improvements which will be explored in the near future.

Note: If Plex options are not configured, cache will function with its configured options without adapting any of its settings.

How to enable? Run rclone config and add all the Plex options (endpoint, username and password) in your remote and it will be automatically enabled.

Affected settings:

- cache-workers: Configured value during confirmed playback or 1 all the other times

Certificate Validation

When the Plex server is configured to only accept secure connections, it is possible to use .plex.direct URLs to ensure certificate validation succeeds. These URLs are used by Plex internally to connect to the Plex server securely.

The format for these URLs is the following:

    https://ip-with-dots-replaced.server-hash.plex.direct:32400/

The ip-with-dots-replaced part can be any IPv4 address, where the dots have been replaced with dashes, e.g. 127.0.0.1 becomes 127-0-0-1.

To get the server-hash part, the easiest way is to visit

    https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token

This page will list all the available Plex servers for your account with at least one .plex.direct link for each. Copy one URL and replace the IP address with the desired address. This can be used as the plex_url value.
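
A config sketch (the server hash shown is a placeholder):

    [test-cache]
    plex_url = https://127-0-0-1.0123456789abcdef.plex.direct:32400/
    plex_username = dummyusername
    plex_password = *** ENCRYPTED ***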

Known issues

Mount and --dir-cache-time

--dir-cache-time controls the first layer of directory caching which works at the mount layer. Being an independent caching mechanism from the cache backend, it will manage its own entries based on the configured time.

To avoid getting in a scenario where dir cache has obsolete data and cache would have the correct one, try to set --dir-cache-time to a lower time than --cache-info-age. Default values are already configured in this way.

Windows support - Experimental

There are a couple of issues with Windows mount functionality that still require some investigation. It should be considered experimental for this OS until fixes arrive.

Most of the issues seem to be related to the difference between filesystems on Linux flavors and Windows, as cache is heavily dependent on them.

Any reports or feedback on how cache behaves on this OS are greatly appreciated.

Risk of throttling

Future iterations of the cache backend will make use of the pooling functionality of the cloud provider to synchronize and at the same time make writing through it more tolerant to failures.

There are a couple of enhancements planned to add these, but in the meantime there is a valid concern that the expiring cache listings can lead to cloud provider throttles or bans due to repeated queries on it for very large mounts.

Some recommendations:

- don't use a very small interval for entry information (--cache-info-age)
- while writes aren't yet optimised, you can still write through cache which gives you the advantage of adding the file in the cache at the same time if configured to do so.

Future enhancements:

cache and crypt

One common scenario is to keep your data encrypted in the cloud provider using the crypt remote. crypt uses a similar technique to wrap around an existing remote and handles this translation in a seamless way.

There is an issue with wrapping the remotes in this order: cloud remote -> crypt -> cache

During testing, I experienced a lot of bans with the remotes in this order. I suspect it might be related to how crypt opens files on the cloud provider which makes it think we're downloading the full file instead of small chunks. Organizing the remotes in this order yields better results: cloud remote -> cache -> crypt

absolute remote paths

cache can not differentiate between relative and absolute paths for the wrapped remote. Any path given in the remote config setting and on the command line will be passed to the wrapped remote as is, but for storing the chunks on disk the path will be made relative by removing any leading / character.

This behavior is irrelevant for most backend types, but there are backends where a leading / changes the effective directory, e.g. in the sftp backend paths starting with a / are relative to the root of the SSH server and paths without are relative to the user home directory. As a result sftp:bin and sftp:/bin will share the same cache folder, even if they represent a different directory on the SSH server.

Cache and Remote Control (--rc)

Cache supports the new --rc mode in rclone and can be remote controlled through the following end points. By default, the listener is disabled if you do not add the flag.

rc cache/expire

Purge a remote from the cache backend. Supports either a directory or a file. It supports both encrypted and unencrypted file names if cache is wrapped by crypt.

Params:

- remote = path to remote (required)
- withData = true/false to delete cached data (chunks) as well (optional, false by default)
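
For example (the path is illustrative), with the remote control listener running you could expire a directory with:

    rclone rc cache/expire remote=path/to/sub/folder/ withData=true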

Standard options

Here are the Standard options specific to cache (Cache a remote).

--cache-remote

Remote to cache.

Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).

Properties:

--cache-plex-url

The URL of the Plex server.

Properties:

--cache-plex-username

The username of the Plex user.

Properties:

--cache-plex-password

The password of the Plex user.

NB Input to this must be obscured - see rclone obscure.

Properties:

--cache-chunk-size

The size of a chunk (partial file data).

Use lower numbers for slower connections. If the chunk size is changed, any downloaded chunks will be invalid and cache-chunk-path will need to be cleared or unexpected EOF errors will occur.

Properties:

--cache-info-age

How long to cache file structure information (directory listings, file size, times, etc.). If all write operations are done through the cache then you can safely make this value very large as the cache store will also be updated in real time.

Properties:

--cache-chunk-total-size

The total size that the chunks can take up on the local disk.

If the cache exceeds this value then it will start to delete the oldest chunks until it goes under this value.

Properties:

Advanced options

Here are the Advanced options specific to cache (Cache a remote).

--cache-plex-token

The plex token for authentication - auto set normally.

Properties:

--cache-plex-insecure

Skip all certificate verification when connecting to the Plex server.

Properties:

--cache-db-path

Directory to store file structure metadata DB.

The remote name is used as the DB file name.

Properties:

--cache-chunk-path

Directory to cache chunk files.

Path to where partial file data (chunks) are stored locally. The remote name is appended to the final path.

This config follows the "--cache-db-path". If you specify a custom location for "--cache-db-path" and don't specify one for "--cache-chunk-path" then "--cache-chunk-path" will use the same path as "--cache-db-path".

Properties:

--cache-db-purge

Clear all the cached data for this remote on start.

Properties:

--cache-chunk-clean-interval

How often should the cache perform cleanups of the chunk storage.

The default value should be ok for most people. If you find that the cache goes over "cache-chunk-total-size" too often then try to lower this value to force it to perform cleanups more often.

Properties:

--cache-read-retries

How many times to retry a read from a cache storage.

Since reading from a cache stream is independent from downloading file data, readers can get to a point where there's no more data in the cache. Most of the time this can indicate a connectivity issue if cache isn't able to provide file data anymore.

For really slow connections, increase this to a point where the stream is able to provide data but your experience will be very stuttery.

Properties:

--cache-workers

How many workers should run in parallel to download chunks.

Higher values will mean more parallel processing (better CPU needed) and more concurrent requests on the cloud provider. This impacts several aspects like the cloud provider API limits and puts more stress on the hardware that rclone runs on, but it also means that streams will be more fluid and data will be available much faster to readers.

Note: If the optional Plex integration is enabled then this setting will adapt to the type of reading performed and the value specified here will be used as a maximum number of workers to use.

Properties:

--cache-chunk-no-memory

Disable the in-memory cache for storing chunks during streaming.

By default, cache will keep file data during streaming in RAM as well to provide it to readers as fast as possible.

This transient data is evicted as soon as it is read and the number of chunks stored doesn't exceed the number of workers. However, depending on other settings like "cache-chunk-size" and "cache-workers" this footprint can increase if there are parallel streams too (multiple files being read at the same time).

If the hardware permits it, use this feature to provide an overall better performance during streaming, but it can also be disabled if RAM is not available on the local machine.

Properties:

--cache-rps

Limits the number of requests per second to the source FS (-1 to disable).

This setting places a hard limit on the number of requests per second that cache will be doing to the cloud provider remote and try to respect that value by setting waits between reads.

If you find that you're getting banned or limited on the cloud provider through cache and know that a smaller number of requests per second will allow you to work with it then you can use this setting for that.

A good balance of all the other settings should make this setting useless but it is available to set for more special cases.

NOTE: This will limit the number of requests during streams but other API calls to the cloud provider like directory listings will still pass.

Properties:

--cache-writes

Cache file data on writes through the FS.

If you need to read files immediately after you upload them through cache you can enable this flag to have their data stored in the cache store at the same time during upload.

Properties:

--cache-tmp-upload-path

Directory to keep temporary files until they are uploaded.

This is the path that cache will use as temporary storage for new files that need to be uploaded to the cloud provider.

Specifying a value will enable this feature. Without it, it is completely disabled and files will be uploaded directly to the cloud provider.

Properties:

--cache-tmp-wait-time

How long should files be stored in local cache before being uploaded.

This is the duration that a file must wait in the temporary location cache-tmp-upload-path before it is selected for upload.

Note that only one file is uploaded at a time and it can take longer to start the upload if a queue has formed for this purpose.

Properties:

--cache-db-wait-time

How long to wait for the DB to be available - 0 is unlimited.

Only one process can have the DB open at any one time, so rclone waits for this duration for the DB to become available before it gives an error.

If you set it to 0 then it will wait forever.

Properties:

Backend commands

Here are the commands specific to the cache backend.

Run them with

    rclone backend COMMAND remote:

The help below will explain what arguments each command takes.

See the backend command for more info on how to pass options and arguments.

These can be run on a running backend using the rc command backend/command.

stats

Print stats on the cache backend in JSON format.

    rclone backend stats remote: [options] [<arguments>+]

Chunker

The chunker overlay transparently splits large files into smaller chunks during upload to the wrapped remote and transparently assembles them back when the file is downloaded. This makes it possible to effectively overcome size limits imposed by storage providers.

Configuration

To use it, first set up the underlying remote following the configuration instructions for that remote. You can also use a local pathname instead of a remote.

First check your chosen remote is working - we'll call it remote:path here. Note that anything inside remote:path will be chunked and anything outside won't. This means that if you are using a bucket-based remote (e.g. S3, B2, swift) then you should probably put the bucket in the remote s3:bucket.

Now configure chunker using rclone config. We will call this one overlay to separate it from the remote itself.

    No remotes found, make a new one?
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
    name> overlay
    Type of storage to configure.
    Choose a number from below, or type in your own value
    [snip]
    XX / Transparently chunk/split large files
       \ "chunker"
    [snip]
    Storage> chunker
    Remote to chunk/unchunk.
    Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
    "myremote:bucket" or maybe "myremote:" (not recommended).
    Enter a string value. Press Enter for the default ("").
    remote> remote:path
    Files larger than chunk size will be split in chunks.
    Enter a size with suffix K,M,G,T. Press Enter for the default ("2G").
    chunk_size> 100M
    Choose how chunker handles hash sums. All modes but "none" require metadata.
    Enter a string value. Press Enter for the default ("md5").
    Choose a number from below, or type in your own value
     1 / Pass any hash supported by wrapped remote for non-chunked files, return nothing otherwise
       \ "none"
     2 / MD5 for composite files
       \ "md5"
     3 / SHA1 for composite files
       \ "sha1"
     4 / MD5 for all files
       \ "md5all"
     5 / SHA1 for all files
       \ "sha1all"
     6 / Copying a file to chunker will request MD5 from the source falling back to SHA1 if unsupported
       \ "md5quick"
     7 / Similar to "md5quick" but prefers SHA1 over MD5
       \ "sha1quick"
    hash_type> md5
    Edit advanced config? (y/n)
    y) Yes
    n) No
    y/n> n
    Remote config
    --------------------
    [overlay]
    type = chunker
    remote = remote:bucket
    chunk_size = 100M
    hash_type = md5
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y

Specifying the remote

In normal use, make sure the remote has a : in. If you specify the remote without a : then rclone will use a local directory of that name. So if you use a remote of /path/to/secret/files then rclone will chunk stuff in that directory. If you use a remote of name then rclone will put files in a directory called name in the current directory.
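
As an illustration (names and paths are hypothetical), these two config values behave differently:

    remote = myremote:path/to/dir
    remote = /path/to/secret/files

The first chunks data on the wrapped cloud remote; the second, having no :, chunks data in a local directory of that name.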

Chunking

When rclone starts a file upload, chunker checks the file size. If it doesn't exceed the configured chunk size, chunker will just pass the file to the wrapped remote. If a file is large, chunker will transparently cut data in pieces with temporary names and stream them one by one, on the fly. Each data chunk will contain the specified number of bytes, except for the last one which may have less data. If file size is unknown in advance (this is called a streaming upload), chunker will internally create a temporary copy, record its size and repeat the above process.

When upload completes, temporary chunk files are finally renamed. This scheme guarantees that operations can be run in parallel and look from outside as atomic. A similar method with hidden temporary chunks is used for other operations (copy/move/rename, etc.). If an operation fails, hidden chunks are normally destroyed, and the target composite file stays intact.

When a composite file download is requested, chunker transparently assembles it by concatenating data chunks in order. As the split is trivial one could even manually concatenate data chunks together to obtain the original content.

When the list rclone command scans a directory on the wrapped remote, the potential chunk files are accounted for, grouped and assembled into composite directory entries. Any temporary chunks are hidden.

List and other commands can sometimes come across composite files with missing or invalid chunks, e.g. shadowed by a like-named directory or another file. This usually means that the wrapped file system has been directly tampered with or damaged. If chunker detects a missing chunk it will by default print a warning, skip the whole incomplete group of chunks but proceed with the current command. You can set the --chunker-fail-hard flag to have commands abort with an error message in such cases.

Chunk names

The default chunk name format is *.rclone_chunk.###, hence by default chunk names are BIG_FILE_NAME.rclone_chunk.001, BIG_FILE_NAME.rclone_chunk.002 etc. You can configure another name format using the name_format configuration file option. The format uses asterisk * as a placeholder for the base file name and one or more consecutive hash characters # as a placeholder for sequential chunk number. There must be one and only one asterisk. The number of consecutive hash characters defines the minimum length of a string representing a chunk number. If decimal chunk number has less digits than the number of hashes, it is left-padded by zeros. If the decimal string is longer, it is left intact. By default numbering starts from 1 but there is another option that allows user to start from 0, e.g. for compatibility with legacy software.

For example, if name format is big_*-##.part and original file name is data.txt and numbering starts from 0, then the first chunk will be named big_data.txt-00.part, the 99th chunk will be big_data.txt-98.part and the 302nd chunk will become big_data.txt-301.part.
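
A config sketch for the naming scheme from the example above (the wrapped remote is hypothetical; name_format and start_from are the chunker options discussed here):

    [overlay]
    type = chunker
    remote = remote:path
    name_format = big_*-##.part
    start_from = 0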

Note that list assembles composite directory entries only when chunk names match the configured format and treats non-conforming file names as normal non-chunked files.

When using norename transactions, chunk names will additionally have a unique file version suffix. For example, BIG_FILE_NAME.rclone_chunk.001_bp562k.

Metadata

Besides data chunks chunker will by default create a metadata object for a composite file. The object is named after the original file. Chunker allows user to disable metadata completely (the none format). Note that metadata is normally not created for files smaller than the configured chunk size. This may change in future rclone releases.

Simple JSON metadata format

This is the default format. It supports hash sums and chunk validation for composite files. Meta objects carry the following fields:

- ver - version of the meta format
- size - total size of the composite file
- nchunks - number of data chunks in the file
- md5 - MD5 hashsum of the composite file (if present)
- sha1 - SHA1 hashsum (if present)
- txn - identifies the current version of the file

There is no field for composite file name as it's simply equal to the name of meta object on the wrapped remote. Please refer to respective sections for details on hashsums and modified time handling.

No metadata

You can disable meta objects by setting the meta format option to none. In this mode chunker will scan directory for all files that follow configured chunk name format, group them by detecting chunks with the same base name and show group names as virtual composite files. This method is more prone to missing chunk errors (especially missing last chunk) than format with metadata enabled.

Hashsums

Chunker supports hashsums only when a compatible metadata is present. Hence, if you choose metadata format of none, chunker will report hashsum as UNSUPPORTED.

Please note that by default metadata is stored only for composite files. If a file is smaller than configured chunk size, chunker will transparently redirect hash requests to wrapped remote, so support depends on that. You will see the empty string as a hashsum of requested type for small files if the wrapped remote doesn't support it.

Many storage backends support MD5 and SHA1 hash types, so does chunker. With chunker you can choose one or another but not both. MD5 is set by default as the most supported type. Since chunker keeps hashes for composite files and falls back to the wrapped remote hash for non-chunked ones, we advise you to choose the same hash type as supported by wrapped remote so that your file listings look coherent.

If your storage backend does not support MD5 or SHA1 but you need consistent file hashing, configure chunker with md5all or sha1all. These two modes guarantee given hash for all files. If wrapped remote doesn't support it, chunker will then add metadata to all files, even small. However, this can double the amount of small files in storage and incur additional service charges. You can even use chunker to force md5/sha1 support in any other remote at expense of sidecar meta objects by setting e.g. hash_type=sha1all to force hashsums and chunk_size=1P to effectively disable chunking.
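
A config sketch for that trick (the wrapped remote is hypothetical), forcing SHA1 support without real chunking:

    [overlay]
    type = chunker
    remote = remote:path
    hash_type = sha1all
    chunk_size = 1P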

Normally, when a file is copied to a chunker controlled remote, chunker will ask the file source for a compatible file hash and revert to on-the-fly calculation if none is found. This involves some CPU overhead but provides a guarantee that the given hashsum is available. Also, chunker will reject a server-side copy or move operation if source and destination hashsum types are different, resulting in extra network bandwidth, too. In some rare cases this may be undesired, so chunker provides two optional choices: sha1quick and md5quick. If the source does not support the primary hash type and the quick mode is enabled, chunker will try to fall back to the secondary type. This will save CPU and bandwidth but can result in empty hashsums at destination. Beware of consequences: the sync command will revert (sometimes silently) to time/size comparison if compatible hashsums between source and target are not found.

Modified time

Chunker stores modification times using the wrapped remote so support depends on that. For a small non-chunked file the chunker overlay simply manipulates the modification time of the wrapped remote file. For a composite file with metadata chunker will get and set the modification time of the metadata object on the wrapped remote. If a file is chunked but the metadata format is none then chunker will use the modification time of the first data chunk.

Migrations

The idiomatic way to migrate to a different chunk size, hash type, transaction style or chunk naming scheme is to configure a new remote with the desired settings and use rclone sync to copy the data across, so that composite files are transparently converted to the new format; afterwards the old remote configuration can be removed.

If rclone gets killed during a long operation on a big composite file, hidden temporary chunks may stay in the directory. They will not be shown by the list command but will eat up your account quota. Please note that the deletefile command deletes only active chunks of a file. As a workaround, you can use the remote of the wrapped file system to see them. An easy way to get rid of hidden garbage is to copy the littered directory somewhere using the chunker remote and purge the original directory. The copy command will copy only active chunks while the purge will remove everything including garbage.

Caveats and Limitations

Chunker requires the wrapped remote to support server-side move (or copy + delete) operations, otherwise it will explicitly refuse to start. This is because it internally renames temporary chunk files to their final names when an operation completes successfully.

Chunker encodes the chunk number in the file name, so with the default name_format setting it adds 17 characters. Also chunker adds 7 characters of temporary suffix during operations. Many file systems limit base file name without path by 255 characters. Using rclone's crypt remote as a base file system limits file name by 143 characters. Thus, maximum name length is 231 for most files and 119 for chunker-over-crypt. A user in need can change name format to e.g. *.rcc## and save 10 characters (provided at most 99 chunks per file).

Note that a move implemented using the copy-and-delete method may incur double charging with some cloud storage providers.

Chunker will not automatically rename existing chunks when you run rclone config on a live remote and change the chunk name format. Beware that as a result of this some files which have been treated as chunks before the change can pop up in directory listings as normal files and vice versa. The same warning holds for the chunk size. If you desperately need to change critical chunking settings, you should run data migration as described above.

If the wrapped remote is case insensitive, the chunker overlay will inherit that property (so you can't have a file called "Hello.doc" and "hello.doc" in the same directory).

Chunker included in rclone releases up to v1.54 can sometimes fail to detect metadata produced by recent versions of rclone. We recommend users to keep rclone up-to-date to avoid data corruption.

Changing transactions is dangerous and requires explicit migration.

Standard options

Here are the Standard options specific to chunker (Transparently chunk/split large files).

--chunker-remote

Remote to chunk/unchunk.

Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).

Properties:

--chunker-chunk-size

Files larger than chunk size will be split in chunks.

Properties:

--chunker-hash-type

Choose how chunker handles hash sums.

All modes but "none" require metadata.

Properties:

Advanced options

Here are the Advanced options specific to chunker (Transparently chunk/split large files).

--chunker-name-format

String format of chunk file names.

The two placeholders are: base file name (*) and chunk number (#...). There must be one and only one asterisk and one or more consecutive hash characters. If chunk number has less digits than the number of hashes, it is left-padded by zeros. If there are more digits in the number, they are left as is. Possible chunk files are ignored if their name does not match given format.

Properties:

--chunker-start-from

Minimum valid chunk number. Usually 0 or 1.

By default chunk numbers start from 1.

Properties:

--chunker-meta-format

Format of the metadata object or "none".

By default "simplejson". Metadata is a small JSON file named after the composite file.

Properties:

--chunker-fail-hard

Choose how chunker should handle files with missing or invalid chunks.

Properties:

--chunker-transactions

Choose how chunker should handle temporary files during transactions.

Properties:

Citrix ShareFile

Citrix ShareFile is a secure file sharing and transfer service aimed at business.

Configuration

The initial setup for Citrix ShareFile involves getting a token from Citrix ShareFile which you can do in your browser. rclone config walks you through it.

Here is an example of how to make a remote called remote. First run:

    rclone config

This will guide you through an interactive setup process:

    No remotes found, make a new one?
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
    name> remote
    Type of storage to configure.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
    XX / Citrix Sharefile
       \ "sharefile"
    Storage> sharefile
    ** See help for sharefile backend at: https://rclone.org/sharefile/ **

    ID of the root folder
# Backblaze B2

B2 is [Backblaze's cloud storage system](https://www.backblaze.com/b2/).

Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.

## Configuration

Here is an example of making a b2 configuration. First run

    rclone config

This will guide you through an interactive setup process. To authenticate you will either need your Account ID (a short hex number) and Master Application Key (a long hex number) OR an Application Key, which is the recommended method. See below for further details on generating and using an Application Key.
```
No remotes found, make a new one?
n) New remote
q) Quit config
n/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Backblaze B2
   \ "b2"
[snip]
Storage> b2
Account ID or Application Key ID
account> 123456789abc
Application Key
key> 0123456789abcdef0123456789abcdef0123456789
Endpoint for the service - leave blank normally.
endpoint>
Remote config
--------------------
[remote]
account = 123456789abc
key = 0123456789abcdef0123456789abcdef0123456789
endpoint =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```
This remote is called `remote` and can now be used like this

See all buckets

    rclone lsd remote:

Create a new bucket

    rclone mkdir remote:bucket

List the contents of a bucket

    rclone ls remote:bucket

Sync `/home/local/directory` to the remote bucket, deleting any excess files in the bucket.

    rclone sync --interactive /home/local/directory remote:bucket

### Application Keys

B2 supports multiple [Application Keys for different access permission to B2 Buckets](https://www.backblaze.com/b2/docs/application_keys.html).

You can use these with rclone too; you will need to use rclone version 1.43 or later.

Follow Backblaze's docs to create an Application Key with the required permission and add the `applicationKeyId` as the `account` and the `Application Key` itself as the `key`.

Note that you must put the _applicationKeyId_ as the `account` – you can't use the master Account ID. If you try then B2 will return 401 errors.
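
As a sketch (the values are placeholders), a remote configured with an Application Key would look like:

    [remote]
    type = b2
    account = <applicationKeyId>
    key = <applicationKey>
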
### --fast-list

This remote supports `--fast-list` which allows you to use fewer transactions in exchange for more memory. See the [rclone docs](https://rclone.org/docs/#fast-list) for more details.
    +
### Modified time

The modified time is stored as metadata on the object as `X-Bz-Info-src_last_modified_millis` as milliseconds since 1970-01-01 in the Backblaze standard. Other tools should be able to use this as a modified time.

Modified times are used in syncing and are fully supported. Note that if a modification time needs to be updated on an object then it will create a new version of the object.

### Restricted filename characters

In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) the following characters are also replaced:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| \         | 0x5C  | ＼           |

Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings.

Note that in 2020-05 Backblaze started allowing \ characters in file names. Rclone hasn't changed its encoding as this could cause syncs to re-transfer files. If you want rclone not to replace \ then see the `--b2-encoding` flag below and remove the `BackSlash` from the string. This can be set in the config.
    +
### SHA1 checksums

The SHA1 checksums of the files are checked on upload and download and will be used in the syncing process.

Large files (bigger than the limit in `--b2-upload-cutoff`) which are uploaded in chunks will store their SHA1 on the object as `X-Bz-Info-large_file_sha1` as recommended by Backblaze.

For a large file to be uploaded with an SHA1 checksum, the source needs to support SHA1 checksums. The local disk supports SHA1 checksums so large file transfers from local disk will have an SHA1. See [the overview](https://rclone.org/overview/#features) for exactly which remotes support SHA1.

Sources which don't support SHA1, in particular `crypt` will upload large files without SHA1 checksums. This may be fixed in the future (see [#1767](https://github.com/rclone/rclone/issues/1767)).

Files sizes below `--b2-upload-cutoff` will always have an SHA1 regardless of the source.
    +
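+You can inspect or verify the stored SHA1s explicitly, for example:
+
+    rclone sha1sum remote:bucket
+    rclone check /home/local/directory remote:bucket
+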
    +### Transfers
    +
    +Backblaze recommends that you do lots of transfers simultaneously for
+maximum speed.  In tests from my SSD-equipped laptop the optimum
    +setting is about `--transfers 32` though higher numbers may be used
    +for a slight speed improvement. The optimum number for you may vary
    +depending on your hardware, how big the files are, how much you want
    +to load your computer, etc.  The default of `--transfers 4` is
    +definitely too low for Backblaze B2 though.
    +
    +Note that uploading big files (bigger than 200 MiB by default) will use
    +a 96 MiB RAM buffer by default.  There can be at most `--transfers` of
    +these in use at any moment, so this sets the upper limit on the memory
    +used.
    +
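+For example (with these settings up to 32 × 96 MiB = 3 GiB of upload
+buffers may be in use at once):
+
+    rclone copy --transfers 32 /path/to/files remote:bucket
+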
    +### Versions
    +
    +When rclone uploads a new version of a file it creates a [new version
    +of it](https://www.backblaze.com/b2/docs/file_versions.html).
    +Likewise when you delete a file, the old version will be marked hidden
    +and still be available.  Conversely, you may opt in to a "hard delete"
    +of files with the `--b2-hard-delete` flag which would permanently remove
    +the file instead of hiding it.
    +
    +Old versions of files, where available, are visible using the 
    +`--b2-versions` flag.
    +
    +It is also possible to view a bucket as it was at a certain point in time,
    +using the `--b2-version-at` flag. This will show the file versions as they
    +were at that time, showing files that have been deleted afterwards, and
    +hiding files that were created since.
    +
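+For example, to list the bucket as it was on a given date (a sketch;
+the flag accepts rclone's usual time formats):
+
+    rclone ls --b2-version-at 2023-07-01 b2:bucket
+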
    +If you wish to remove all the old versions then you can use the
    +`rclone cleanup remote:bucket` command which will delete all the old
    +versions of files, leaving the current ones intact.  You can also
    +supply a path and only old versions under that path will be deleted,
    +e.g. `rclone cleanup remote:bucket/path/to/stuff`.
    +
    +Note that `cleanup` will remove partially uploaded files from the bucket
    +if they are more than a day old.
    +
    +When you `purge` a bucket, the current and the old versions will be
    +deleted then the bucket will be deleted.
    +
    +However `delete` will cause the current versions of the files to
    +become hidden old versions.
    +
    +Here is a session showing the listing and retrieval of an old
    +version followed by a `cleanup` of the old versions.
    +
    +Show current version and all the versions with `--b2-versions` flag.
    +
+    $ rclone -q ls b2:cleanup-test
+            9 one.txt
+
+    $ rclone -q --b2-versions ls b2:cleanup-test
+            9 one.txt
+            8 one-v2016-07-04-141032-000.txt
+           16 one-v2016-07-04-141003-000.txt
+           15 one-v2016-07-02-155621-000.txt
+
    +Retrieve an old version
    +
+    $ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
+
+    $ ls -l /tmp/one-v2016-07-04-141003-000.txt
+    -rw-rw-r-- 1 ncw ncw 16 Jul  2 17:46 /tmp/one-v2016-07-04-141003-000.txt
+
    +Clean up all the old versions and show that they've gone.
    +
+    $ rclone -q cleanup b2:cleanup-test
+
+    $ rclone -q ls b2:cleanup-test
+            9 one.txt
+
+    $ rclone -q --b2-versions ls b2:cleanup-test
+            9 one.txt
+
    +#### Versions naming caveat
    +
+When using the `--b2-versions` flag rclone relies on the file name to
+work out whether objects are versions or not. Version names are
+created by inserting a timestamp between the file name and its extension.
+
+        9 file.txt
+        8 file-v2023-07-17-161032-000.txt
+       16 file-v2023-06-15-141003-000.txt
+
+If there are real files present with the same names as versions, then
+the behaviour of `--b2-versions` can be unpredictable.
    +
    +### Data usage
    +
    +It is useful to know how many requests are sent to the server in different scenarios.
    +
    +All copy commands send the following 4 requests:
    +
+    /b2api/v1/b2_authorize_account
+    /b2api/v1/b2_create_bucket
+    /b2api/v1/b2_list_buckets
+    /b2api/v1/b2_list_file_names
+
    +The `b2_list_file_names` request will be sent once for every 1k files
    +in the remote path, providing the checksum and modification time of
    +the listed files. As of version 1.33 issue
    +[#818](https://github.com/rclone/rclone/issues/818) causes extra requests
    +to be sent when using B2 with Crypt. When a copy operation does not
    +require any files to be uploaded, no more requests will be sent.
    +
+Uploading files that do not require chunking will send 2 requests per
    +file upload:
    +
+    /b2api/v1/b2_get_upload_url
+    /b2api/v1/b2_upload_file/
+
+Uploading files requiring chunking will send 2 requests (one each to
    +start and finish the upload) and another 2 requests for each chunk:
    +
+    /b2api/v1/b2_start_large_file
+    /b2api/v1/b2_get_upload_part_url
+    /b2api/v1/b2_upload_part/
+    /b2api/v1/b2_finish_large_file
+
    +#### Versions
    +
    +Versions can be viewed with the `--b2-versions` flag. When it is set
    +rclone will show and act on older versions of files.  For example
    +
    +Listing without `--b2-versions`
    +
+    $ rclone -q ls b2:cleanup-test
+            9 one.txt
+
    +And with
    +
+    $ rclone -q --b2-versions ls b2:cleanup-test
+            9 one.txt
+            8 one-v2016-07-04-141032-000.txt
+           16 one-v2016-07-04-141003-000.txt
+           15 one-v2016-07-02-155621-000.txt
+
    +Showing that the current version is unchanged but older versions can
    +be seen.  These have the UTC date that they were uploaded to the
    +server to the nearest millisecond appended to them.
    +
    +Note that when using `--b2-versions` no file write operations are
    +permitted, so you can't upload files or delete them.
    +
    +### B2 and rclone link
    +
    +Rclone supports generating file share links for private B2 buckets.
    +They can either be for a file for example:
    +
+    ./rclone link B2:bucket/path/to/file.txt
+    https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx
+
    +or if run on a directory you will get:
    +
+    ./rclone link B2:bucket/path
+    https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx
+
+You can then use the authorization token (the part of the URL from
+`?Authorization=` on) on any file path under that directory. For example:
    +
+    https://f002.backblazeb2.com/file/bucket/path/to/file1?Authorization=xxxxxxxx
+    https://f002.backblazeb2.com/file/bucket/path/file2?Authorization=xxxxxxxx
+    https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxxx
+
    +
    +### Standard options
    +
    +Here are the Standard options specific to b2 (Backblaze B2).
    +
    +#### --b2-account
    +
    +Account ID or Application Key ID.
    +
    +Properties:
    +
    +- Config:      account
    +- Env Var:     RCLONE_B2_ACCOUNT
    +- Type:        string
    +- Required:    true
    +
    +#### --b2-key
    +
    +Application Key.
    +
    +Properties:
    +
    +- Config:      key
    +- Env Var:     RCLONE_B2_KEY
    +- Type:        string
    +- Required:    true
    +
    +#### --b2-hard-delete
    +
    +Permanently delete files on remote removal, otherwise hide files.
    +
    +Properties:
    +
    +- Config:      hard_delete
    +- Env Var:     RCLONE_B2_HARD_DELETE
    +- Type:        bool
    +- Default:     false
    +
    +### Advanced options
    +
    +Here are the Advanced options specific to b2 (Backblaze B2).
    +
    +#### --b2-endpoint
    +
    +Endpoint for the service.
    +
    +Leave blank normally.
    +
    +Properties:
    +
    +- Config:      endpoint
    +- Env Var:     RCLONE_B2_ENDPOINT
    +- Type:        string
    +- Required:    false
    +
    +#### --b2-test-mode
    +
    +A flag string for X-Bz-Test-Mode header for debugging.
    +
    +This is for debugging purposes only. Setting it to one of the strings
    +below will cause b2 to return specific errors:
    +
    +  * "fail_some_uploads"
    +  * "expire_some_account_authorization_tokens"
    +  * "force_cap_exceeded"
    +
    +These will be set in the "X-Bz-Test-Mode" header which is documented
    +in the [b2 integrations checklist](https://www.backblaze.com/b2/docs/integration_checklist.html).
    +
    +Properties:
    +
    +- Config:      test_mode
    +- Env Var:     RCLONE_B2_TEST_MODE
    +- Type:        string
    +- Required:    false
    +
    +#### --b2-versions
    +
    +Include old versions in directory listings.
    +
    +Note that when using this no file write operations are permitted,
    +so you can't upload files or delete them.
    +
    +Properties:
    +
    +- Config:      versions
    +- Env Var:     RCLONE_B2_VERSIONS
    +- Type:        bool
    +- Default:     false
    +
    +#### --b2-version-at
    +
    +Show file versions as they were at the specified time.
    +
    +Note that when using this no file write operations are permitted,
    +so you can't upload files or delete them.
    +
    +Properties:
    +
    +- Config:      version_at
    +- Env Var:     RCLONE_B2_VERSION_AT
    +- Type:        Time
    +- Default:     off
    +
    +#### --b2-upload-cutoff
    +
    +Cutoff for switching to chunked upload.
    +
    +Files above this size will be uploaded in chunks of "--b2-chunk-size".
    +
    +This value should be set no larger than 4.657 GiB (== 5 GB).
    +
    +Properties:
    +
    +- Config:      upload_cutoff
    +- Env Var:     RCLONE_B2_UPLOAD_CUTOFF
    +- Type:        SizeSuffix
    +- Default:     200Mi
    +
    +#### --b2-copy-cutoff
    +
    +Cutoff for switching to multipart copy.
    +
    +Any files larger than this that need to be server-side copied will be
    +copied in chunks of this size.
    +
    +The minimum is 0 and the maximum is 4.6 GiB.
    +
    +Properties:
    +
    +- Config:      copy_cutoff
    +- Env Var:     RCLONE_B2_COPY_CUTOFF
    +- Type:        SizeSuffix
    +- Default:     4Gi
    +
    +#### --b2-chunk-size
    +
    +Upload chunk size.
    +
    +When uploading large files, chunk the file into this size.
    +
    +Must fit in memory. These chunks are buffered in memory and there
+might be a maximum of "--transfers" chunks in progress at once.
    +
    +5,000,000 Bytes is the minimum size.
    +
    +Properties:
    +
    +- Config:      chunk_size
    +- Env Var:     RCLONE_B2_CHUNK_SIZE
    +- Type:        SizeSuffix
    +- Default:     96Mi
    +
    +#### --b2-upload-concurrency
    +
    +Concurrency for multipart uploads.
    +
    +This is the number of chunks of the same file that are uploaded
    +concurrently.
    +
    +Note that chunks are stored in memory and there may be up to
    +"--transfers" * "--b2-upload-concurrency" chunks stored at once
    +in memory.
    +
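+For example, with the defaults (`--transfers 4`, `--b2-upload-concurrency 16`
+and `--b2-chunk-size 96M`) that is up to 4 × 16 × 96 MiB = 6 GiB in memory.
+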
    +Properties:
    +
    +- Config:      upload_concurrency
    +- Env Var:     RCLONE_B2_UPLOAD_CONCURRENCY
    +- Type:        int
    +- Default:     16
    +
    +#### --b2-disable-checksum
    +
    +Disable checksums for large (> upload cutoff) files.
    +
    +Normally rclone will calculate the SHA1 checksum of the input before
    +uploading it so it can add it to metadata on the object. This is great
    +for data integrity checking but can cause long delays for large files
    +to start uploading.
    +
    +Properties:
    +
    +- Config:      disable_checksum
    +- Env Var:     RCLONE_B2_DISABLE_CHECKSUM
    +- Type:        bool
    +- Default:     false
    +
    +#### --b2-download-url
    +
    +Custom endpoint for downloads.
    +
    +This is usually set to a Cloudflare CDN URL as Backblaze offers
    +free egress for data downloaded through the Cloudflare network.
    +Rclone works with private buckets by sending an "Authorization" header.
    +If the custom endpoint rewrites the requests for authentication,
    +e.g., in Cloudflare Workers, this header needs to be handled properly.
    +Leave blank if you want to use the endpoint provided by Backblaze.
    +
    +The URL provided here SHOULD have the protocol and SHOULD NOT have
    +a trailing slash or specify the /file/bucket subpath as rclone will
    +request files with "{download_url}/file/{bucket_name}/{path}".
    +
    +Example:
    +> https://mysubdomain.mydomain.tld
    +(No trailing "/", "file" or "bucket")
    +
    +Properties:
    +
    +- Config:      download_url
    +- Env Var:     RCLONE_B2_DOWNLOAD_URL
    +- Type:        string
    +- Required:    false
    +
    +#### --b2-download-auth-duration
    +
    +Time before the authorization token will expire in s or suffix ms|s|m|h|d.
    +
    +The duration before the download authorization token will expire.
    +The minimum value is 1 second. The maximum value is one week.
    +
    +Properties:
    +
    +- Config:      download_auth_duration
    +- Env Var:     RCLONE_B2_DOWNLOAD_AUTH_DURATION
    +- Type:        Duration
    +- Default:     1w
    +
    +#### --b2-memory-pool-flush-time
    +
    +How often internal memory buffer pools will be flushed. (no longer used)
    +
    +Properties:
    +
    +- Config:      memory_pool_flush_time
    +- Env Var:     RCLONE_B2_MEMORY_POOL_FLUSH_TIME
    +- Type:        Duration
    +- Default:     1m0s
    +
    +#### --b2-memory-pool-use-mmap
    +
    +Whether to use mmap buffers in internal memory pool. (no longer used)
    +
    +Properties:
    +
    +- Config:      memory_pool_use_mmap
    +- Env Var:     RCLONE_B2_MEMORY_POOL_USE_MMAP
    +- Type:        bool
    +- Default:     false
    +
    +#### --b2-encoding
    +
    +The encoding for the backend.
    +
    +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
    +
    +Properties:
    +
    +- Config:      encoding
    +- Env Var:     RCLONE_B2_ENCODING
    +- Type:        MultiEncoder
    +- Default:     Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
    +
    +
    +
    +## Limitations
    +
    +`rclone about` is not supported by the B2 backend. Backends without
    +this capability cannot determine free space for an rclone mount or
    +use policy `mfs` (most free space) as a member of an rclone union
    +remote.
    +
    +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
    +
    +#  Box
    +
    +Paths are specified as `remote:path`
    +
    +Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
    +
    +The initial setup for Box involves getting a token from Box which you
    +can do either in your browser, or with a config.json downloaded from Box
    +to use JWT authentication.  `rclone config` walks you through it.
    +
    +## Configuration
    +
    +Here is an example of how to make a remote called `remote`.  First run:
    +
    +     rclone config
    +
    +This will guide you through an interactive setup process:
    +
+    No remotes found, make a new one?
+    n) New remote
+    s) Set configuration password
+    q) Quit config
+    n/s/q> n
+    name> remote
+    Type of storage to configure.
+    Choose a number from below, or type in your own value
+    [snip]
+    XX / Box
+       \ "box"
+    [snip]
+    Storage> box
+    Box App Client Id - leave blank normally.
+    client_id>
+    Box App Client Secret - leave blank normally.
+    client_secret>
+    Box App config.json location
+    Leave blank normally.
+    Enter a string value. Press Enter for the default ("").
+    box_config_file>
+    Box App Primary Access Token
+    Leave blank normally.
+    Enter a string value. Press Enter for the default ("").
+    access_token>
+
+    Enter a string value. Press Enter for the default ("user").
+    Choose a number from below, or type in your own value
+     1 / Rclone should act on behalf of a user
+       \ "user"
+     2 / Rclone should act on behalf of a service account
+       \ "enterprise"
+    box_sub_type>
+    Remote config
+    Use web browser to automatically authenticate rclone with remote?
+     * Say Y if the machine running rclone has a web browser you can use
+     * Say N if running rclone on a (remote) machine without web browser access
+    If not sure try Y. If Y failed, try N.
+    y) Yes
+    n) No
+    y/n> y
+    If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+    Log in and authorize rclone for access
+    Waiting for code...
+    Got code
+    --------------------
+    [remote]
+    client_id =
+    client_secret =
+    token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"XXX"}
+    --------------------
+    y) Yes this is OK
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+
    +See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
    +machine with no Internet browser available.
    +
    +Note that rclone runs a webserver on your local machine to collect the
    +token as returned from Box. This only runs from the moment it opens
    +your browser to the moment you get back the verification code.  This
+is on `http://127.0.0.1:53682/` and it may require you to unblock
    +it temporarily if you are running a host firewall.
    +
    +Once configured you can then use `rclone` like this,
    +
    +List directories in top level of your Box
    +
    +    rclone lsd remote:
    +
    +List all the files in your Box
    +
    +    rclone ls remote:
    +
+To copy a local directory to a Box directory called backup
    +
    +    rclone copy /home/source remote:backup
    +
    +### Using rclone with an Enterprise account with SSO
    +
    +If you have an "Enterprise" account type with Box with single sign on
    +(SSO), you need to create a password to use Box with rclone. This can
    +be done at your Enterprise Box account by going to Settings, "Account"
    +Tab, and then set the password in the "Authentication" field.
    +
+Once you have done this, you can set up your Enterprise Box account
+using the same procedure detailed above, using the password you have
+just set.
    +
    +### Invalid refresh token
    +
    +According to the [box docs](https://developer.box.com/v2.0/docs/oauth-20#section-6-using-the-access-and-refresh-tokens):
    +
    +> Each refresh_token is valid for one use in 60 days.
    +
    +This means that if you
    +
    +  * Don't use the box remote for 60 days
    +  * Copy the config file with a box refresh token in and use it in two places
    +  * Get an error on a token refresh
    +
    +then rclone will return an error which includes the text `Invalid
    +refresh token`.
    +
    +To fix this you will need to use oauth2 again to update the refresh
    +token.  You can use the methods in [the remote setup
    +docs](https://rclone.org/remote_setup/), bearing in mind that if you use the copy the
    +config file method, you should not use that remote on the computer you
    +did the authentication on.
    +
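+Alternatively, recent versions of rclone can redo just the OAuth step
+for an existing remote (a shortcut sketch for the same fix):
+
+    rclone config reconnect remote:
+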
+Here is how to do it with the interactive config editor.
    +
+    $ rclone config
+    Current remotes:
+
+    Name                 Type
+    ====                 ====
+    remote               box
+
+    e) Edit existing remote
+    n) New remote
+    d) Delete remote
+    r) Rename remote
+    c) Copy remote
+    s) Set configuration password
+    q) Quit config
+    e/n/d/r/c/s/q> e
+    Choose a number from below, or type in an existing value
+     1 > remote
+    remote> remote
+    --------------------
+    [remote]
+    type = box
+    token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2017-07-08T23:40:08.059167677+01:00"}
+    --------------------
+    Edit remote
+    Value "client_id" = ""
+    Edit? (y/n)>
+    y) Yes
+    n) No
+    y/n> n
+    Value "client_secret" = ""
+    Edit? (y/n)>
+    y) Yes
+    n) No
+    y/n> n
+    Remote config
+    Already have a token - refresh?
+    y) Yes
+    n) No
+    y/n> y
+    Use web browser to automatically authenticate rclone with remote?
+     * Say Y if the machine running rclone has a web browser you can use
+     * Say N if running rclone on a (remote) machine without web browser access
+    If not sure try Y. If Y failed, try N.
+    y) Yes
+    n) No
+    y/n> y
+    If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+    Log in and authorize rclone for access
+    Waiting for code...
+    Got code
+    --------------------
+    [remote]
+    type = box
+    token = {"access_token":"YYY","token_type":"bearer","refresh_token":"YYY","expiry":"2017-07-23T12:22:29.259137901+01:00"}
+    --------------------
+    y) Yes this is OK
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+
    +### Modified time and hashes
    +
    +Box allows modification times to be set on objects accurate to 1
    +second.  These will be used to detect whether objects need syncing or
    +not.
    +
    +Box supports SHA1 type hashes, so you can use the `--checksum`
    +flag.
    +
    +### Restricted filename characters
    +
    +In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
    +the following characters are also replaced:
    +
    +| Character | Value | Replacement |
    +| --------- |:-----:|:-----------:|
+| \         | 0x5C  | ＼           |
    +
    +File names can also not end with the following characters.
    +These only get replaced if they are the last character in the name:
    +
    +| Character | Value | Replacement |
    +| --------- |:-----:|:-----------:|
    +| SP        | 0x20  | ␠           |
    +
    +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
    +as they can't be used in JSON strings.
    +
    +### Transfers
    +
    +For files above 50 MiB rclone will use a chunked transfer.  Rclone will
    +upload up to `--transfers` chunks at the same time (shared among all
    +the multipart uploads).  Chunks are buffered in memory and are
    +normally 8 MiB so increasing `--transfers` will increase memory use.
    +
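+For example, with `--transfers 8` roughly 8 × 8 MiB = 64 MiB of chunk
+buffers may be in use at once.
+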
    +### Deleting files
    +
    +Depending on the enterprise settings for your user, the item will
    +either be actually deleted from Box or moved to the trash.
    +
+Emptying the trash is supported via the rclone cleanup command,
+however this deletes every trashed file and folder individually, so it
+may take a very long time.
+Emptying the trash via the WebUI does not have this limitation,
+so it is advised to empty the trash via the WebUI.
    +
    +### Root folder ID
    +
    +You can set the `root_folder_id` for rclone.  This is the directory
    +(identified by its `Folder ID`) that rclone considers to be the root
    +of your Box drive.
    +
    +Normally you will leave this blank and rclone will determine the
    +correct root to use itself.
    +
    +However you can set this to restrict rclone to a specific folder
    +hierarchy.
    +
    +In order to do this you will have to find the `Folder ID` of the
    +directory you wish rclone to display.  This will be the last segment
    +of the URL when you open the relevant folder in the Box web
    +interface.
    +
    +So if the folder you want rclone to use has a URL which looks like
    +`https://app.box.com/folder/11xxxxxxxxx8`
    +in the browser, then you use `11xxxxxxxxx8` as
    +the `root_folder_id` in the config.
    +
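+For example, you can set it on an existing remote from the command
+line (reusing the placeholder ID above):
+
+    rclone config update remote root_folder_id 11xxxxxxxxx8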
    +
    +### Standard options
    +
    +Here are the Standard options specific to box (Box).
    +
    +#### --box-client-id
    +
    +OAuth Client Id.
    +
    +Leave blank normally.
    +
    +Properties:
    +
    +- Config:      client_id
    +- Env Var:     RCLONE_BOX_CLIENT_ID
    +- Type:        string
    +- Required:    false
    +
    +#### --box-client-secret
    +
    +OAuth Client Secret.
    +
    +Leave blank normally.
    +
    +Properties:
    +
    +- Config:      client_secret
    +- Env Var:     RCLONE_BOX_CLIENT_SECRET
    +- Type:        string
    +- Required:    false
    +
    +#### --box-box-config-file
    +
    +Box App config.json location
    +
    +Leave blank normally.
    +
    +Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
    +
    +Properties:
    +
    +- Config:      box_config_file
    +- Env Var:     RCLONE_BOX_BOX_CONFIG_FILE
    +- Type:        string
    +- Required:    false
    +
    +#### --box-access-token
    +
    +Box App Primary Access Token
    +
    +Leave blank normally.
    +
    +Properties:
    +
    +- Config:      access_token
    +- Env Var:     RCLONE_BOX_ACCESS_TOKEN
    +- Type:        string
    +- Required:    false
    +
    +#### --box-box-sub-type
    +
    +
    +
    +Properties:
    +
    +- Config:      box_sub_type
    +- Env Var:     RCLONE_BOX_BOX_SUB_TYPE
    +- Type:        string
    +- Default:     "user"
    +- Examples:
    +    - "user"
    +        - Rclone should act on behalf of a user.
    +    - "enterprise"
    +        - Rclone should act on behalf of a service account.
    +
    +### Advanced options
    +
    +Here are the Advanced options specific to box (Box).
    +
    +#### --box-token
    +
    +OAuth Access Token as a JSON blob.
    +
    +Properties:
    +
    +- Config:      token
    +- Env Var:     RCLONE_BOX_TOKEN
    +- Type:        string
    +- Required:    false
    +
    +#### --box-auth-url
    +
    +Auth server URL.
    +
    +Leave blank to use the provider defaults.
    +
    +Properties:
    +
    +- Config:      auth_url
    +- Env Var:     RCLONE_BOX_AUTH_URL
    +- Type:        string
    +- Required:    false
    +
    +#### --box-token-url
    +
    +Token server url.
    +
    +Leave blank to use the provider defaults.
    +
    +Properties:
    +
    +- Config:      token_url
    +- Env Var:     RCLONE_BOX_TOKEN_URL
    +- Type:        string
    +- Required:    false
    +
    +#### --box-root-folder-id
    +
+Fill in for rclone to use a non-root folder as its starting point.
    +
    +Properties:
    +
    +- Config:      root_folder_id
    +- Env Var:     RCLONE_BOX_ROOT_FOLDER_ID
    +- Type:        string
    +- Default:     "0"
    +
    +#### --box-upload-cutoff
    +
    +Cutoff for switching to multipart upload (>= 50 MiB).
    +
    +Properties:
    +
    +- Config:      upload_cutoff
    +- Env Var:     RCLONE_BOX_UPLOAD_CUTOFF
    +- Type:        SizeSuffix
    +- Default:     50Mi
    +
    +#### --box-commit-retries
    +
    +Max number of times to try committing a multipart file.
    +
    +Properties:
    +
    +- Config:      commit_retries
    +- Env Var:     RCLONE_BOX_COMMIT_RETRIES
    +- Type:        int
    +- Default:     100
    +
    +#### --box-list-chunk
    +
    +Size of listing chunk 1-1000.
    +
    +Properties:
    +
    +- Config:      list_chunk
    +- Env Var:     RCLONE_BOX_LIST_CHUNK
    +- Type:        int
    +- Default:     1000
    +
    +#### --box-owned-by
    +
    +Only show items owned by the login (email address) passed in.
    +
    +Properties:
    +
    +- Config:      owned_by
    +- Env Var:     RCLONE_BOX_OWNED_BY
    +- Type:        string
    +- Required:    false
    +
    +#### --box-impersonate
    +
    +Impersonate this user ID when using a service account.
    +
+Setting this flag allows rclone, when using a JWT service account, to
    +act on behalf of another user by setting the as-user header.
    +
+The user ID is the Box identifier for a user. User IDs can be found for
    +any user via the GET /users endpoint, which is only available to
    +admins, or by calling the GET /users/me endpoint with an authenticated
    +user session.
    +
    +See: https://developer.box.com/guides/authentication/jwt/as-user/
    +
    +
    +Properties:
    +
    +- Config:      impersonate
    +- Env Var:     RCLONE_BOX_IMPERSONATE
    +- Type:        string
    +- Required:    false
    +
    +#### --box-encoding
    +
    +The encoding for the backend.
    +
    +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
    +
    +Properties:
    +
    +- Config:      encoding
    +- Env Var:     RCLONE_BOX_ENCODING
    +- Type:        MultiEncoder
    +- Default:     Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot
    +
    +
    +
    +## Limitations
    +
    +Note that Box is case insensitive so you can't have a file called
    +"Hello.doc" and one called "hello.doc".
    +
    +Box file names can't have the `\` character in.  rclone maps this to
+and from an identical looking unicode equivalent `＼` (U+FF3C Fullwidth
    +Reverse Solidus).
    +
    +Box only supports filenames up to 255 characters in length.
    +
    +Box has [API rate limits](https://developer.box.com/guides/api-calls/permissions-and-errors/rate-limits/) that sometimes reduce the speed of rclone.
    +
    +`rclone about` is not supported by the Box backend. Backends without
    +this capability cannot determine free space for an rclone mount or
    +use policy `mfs` (most free space) as a member of an rclone union
    +remote.
    +
    +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
    +
    +## Get your own Box App ID
    +
    +Here is how to create your own Box App ID for rclone:
    +
    +1. Go to the [Box Developer Console](https://app.box.com/developers/console)
    +and login, then click `My Apps` on the sidebar. Click `Create New App`
    +and select `Custom App`.
    +
+2. In the first screen of the dialog that pops up, the `App Name` can be
+anything you like. For `Purpose` choose automation to avoid having to
+fill out anything else. Click `Next`.
    +
    +3. In the second screen of the creation screen, select
    +`User Authentication (OAuth 2.0)`. Then click `Create App`.
    +
    +4. You should now be on the `Configuration` tab of your new app. If not,
    +click on it at the top of the webpage. Copy down `Client ID`
    +and `Client Secret`, you'll need those for rclone.
    +
    +5. Under "OAuth 2.0 Redirect URI", add `http://127.0.0.1:53682/`
    +
    +6. For `Application Scopes`, select `Read all files and folders stored in Box`
    +and `Write all files and folders stored in box` (assuming you want to do both).
    +Leave others unchecked. Click `Save Changes` at the top right.
    +
    +#  Cache
    +
    +The `cache` remote wraps another existing remote and stores file structure
    +and its data for long running tasks like `rclone mount`.
    +
    +## Status
    +
    +The cache backend code is working but it currently doesn't
    +have a maintainer so there are [outstanding bugs](https://github.com/rclone/rclone/issues?q=is%3Aopen+is%3Aissue+label%3Abug+label%3A%22Remote%3A+Cache%22) which aren't getting fixed.
    +
    +The cache backend is due to be phased out in favour of the VFS caching
    +layer eventually which is more tightly integrated into rclone.
    +
    +Until this happens we recommend only using the cache backend if you
    +find you can't work without it. There are many docs online describing
    +the use of the cache backend to minimize API hits and by-and-large
    +these are out of date and the cache backend isn't needed in those
    +scenarios any more.
    +
    +## Configuration
    +
    +To get started you just need to have an existing remote which can be configured
    +with `cache`.
    +
    +Here is an example of how to make a remote called `test-cache`.  First run:
    +
    +     rclone config
    +
    +This will guide you through an interactive setup process:
    +
+
+    No remotes found, make a new one?
+    n) New remote
+    r) Rename remote
+    c) Copy remote
+    s) Set configuration password
+    q) Quit config
+    n/r/c/s/q> n
+    name> test-cache
+    Type of storage to configure.
+    Choose a number from below, or type in your own value
+    [snip]
+    XX / Cache a remote
+       \ "cache"
+    [snip]
+    Storage> cache
+    Remote to cache.
+    Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
+    "myremote:bucket" or maybe "myremote:" (not recommended).
+    remote> local:/test
+    Optional: The URL of the Plex server
+    plex_url> http://127.0.0.1:32400
+    Optional: The username of the Plex user
+    plex_username> dummyusername
+    Optional: The password of the Plex user
+    y) Yes type in my own password
+    g) Generate random password
+    n) No leave this optional password blank
+    y/g/n> y
+    Enter the password:
+    password:
+    Confirm the password:
+    password:
+    The size of a chunk. Lower value good for slow connections but can affect seamless reading.
+    Default: 5M
+    Choose a number from below, or type in your own value
+     1 / 1 MiB
+       \ "1M"
+     2 / 5 MiB
+       \ "5M"
+     3 / 10 MiB
+       \ "10M"
+    chunk_size> 2
+    How much time should object info (file size, file hashes, etc.) be stored in cache.
+    Use a very high value if you don't plan on changing the source FS from outside the cache.
+    Accepted units are: "s", "m", "h".
+    Default: 5m
+    Choose a number from below, or type in your own value
+     1 / 1 hour
+       \ "1h"
+     2 / 24 hours
+       \ "24h"
+     3 / 48 hours
+       \ "48h"
+    info_age> 2
+    The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.
+    Default: 10G
+    Choose a number from below, or type in your own value
+     1 / 500 MiB
+       \ "500M"
+     2 / 1 GiB
+       \ "1G"
+     3 / 10 GiB
+       \ "10G"
+    chunk_total_size> 3
+    Remote config
+    --------------------
+    [test-cache]
+    remote = local:/test
+    plex_url = http://127.0.0.1:32400
+    plex_username = dummyusername
+    plex_password = *** ENCRYPTED ***
+    chunk_size = 5M
+    info_age = 48h
+    chunk_total_size = 10G
+
    +You can then use it like this,
    +
    +List directories in top level of your drive
    +
    +    rclone lsd test-cache:
    +
    +List all the files in your drive
    +
    +    rclone ls test-cache:
    +
    +To start a cached mount
    +
    +    rclone mount --allow-other test-cache: /var/tmp/test-cache
    +
    +### Write Features ###
    +
    +### Offline uploading ###
    +
    +In an effort to make writing through cache more reliable, the backend 
    +now supports this feature which can be activated by specifying a
    +`cache-tmp-upload-path`.
    +
+A file goes through these states when using this feature:
    +
    +1. An upload is started (usually by copying a file on the cache remote)
    +2. When the copy to the temporary location is complete the file is part 
    +of the cached remote and looks and behaves like any other file (reading included)
    +3. After `cache-tmp-wait-time` passes and the file is next in line, `rclone move` 
    +is used to move the file to the cloud provider
    +4. Reading the file still works during the upload but most modifications on it will be prohibited
    +5. Once the move is complete the file is unlocked for modifications as it
    +becomes as any other regular file
    +6. If the file is being read through `cache` when it's actually
    +deleted from the temporary path then `cache` will simply swap the source
    +to the cloud provider without interrupting the reading (small blip can happen though)
    +
    +Files are uploaded in sequence and only one file is uploaded at a time.
    +Uploads will be stored in a queue and be processed based on the order they were added.
+The queue and the temporary storage are persistent across restarts but
+can be cleared on startup with the `--cache-db-purge` flag.
    +
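+A minimal sketch enabling the feature on a mount (the paths and wait
+time here are illustrative):
+
+    rclone mount --allow-other \
+        --cache-tmp-upload-path /var/tmp/cache-upload \
+        --cache-tmp-wait-time 60s \
+        test-cache: /var/tmp/test-cache
+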
    +### Write Support ###
    +
    +Writes are supported through `cache`.
    +One caveat is that a mounted cache remote does not add any retry or fallback
    +mechanism to the upload operation. This will depend on the implementation
    +of the wrapped remote. Consider using `Offline uploading` for reliable writes.
    +
+One special case is covered with `cache-writes` which, when enabled,
+will cache the file data at the same time as the upload, making it
+available from the cache store immediately once the upload is finished.
    +
    +### Read Features ###
    +
    +#### Multiple connections ####
    +
+To counter the high latency between a local PC where rclone is running
+and cloud providers, the cache remote can split a read into multiple
+requests for smaller file chunks and combine them together locally so
+that the data is available almost immediately, before the reader
+usually needs it.
    +
+This is similar to buffering when media files are played online. Rclone
+will stay around the current read marker but always try its best to stay
+ahead and prepare the data before it is needed.
    +
    +#### Plex Integration ####
    +
+There is a direct integration with Plex which allows cache to detect
+during reading if the file is in playback or not. This helps cache to
+adapt how it queries the cloud provider depending on what the data is
+needed for.
    +
+Scans will use a minimum number of workers (1) while during a confirmed
+playback cache will deploy the configured number of workers.
    +
    +This integration opens the doorway to additional performance improvements
    +which will be explored in the near future.
    +
    +**Note:** If Plex options are not configured, `cache` will function with its
    +configured options without adapting any of its settings.
    +
    +How to enable? Run `rclone config` and add all the Plex options (endpoint, username
    +and password) in your remote and it will be automatically enabled.
    +
    +Affected settings:
    +- `cache-workers`: _Configured value_ during confirmed playback or _1_ all the other times
    +
    +##### Certificate Validation #####
    +
    +When the Plex server is configured to only accept secure connections, it is
    +possible to use `.plex.direct` URLs to ensure certificate validation succeeds.
    +These URLs are used by Plex internally to connect to the Plex server securely.
    +
    +The format for these URLs is the following:
    +
    +`https://ip-with-dots-replaced.server-hash.plex.direct:32400/`
    +
    +The `ip-with-dots-replaced` part can be any IPv4 address, where the dots
    +have been replaced with dashes, e.g. `127.0.0.1` becomes `127-0-0-1`.
    +
    +To get the `server-hash` part, the easiest way is to visit
    +
    +https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token
    +
    +This page will list all the available Plex servers for your account
    +with at least one `.plex.direct` link for each. Copy one URL and replace
    +the IP address with the desired address. This can be used as the
    +`plex_url` value.
    +
    +### Known issues ###
    +
    +#### Mount and --dir-cache-time ####
    +
    +--dir-cache-time controls the first layer of directory caching which works at the mount layer.
    +Being an independent caching mechanism from the `cache` backend, it will manage its own entries
    +based on the configured time.
    +
+To avoid getting into a scenario where dir cache has obsolete data and cache would have the correct
    +one, try to set `--dir-cache-time` to a lower time than `--cache-info-age`. Default values are
    +already configured in this way. 
    +
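+For example, keeping the recommended relationship explicitly (values
+are illustrative):
+
+    rclone mount --dir-cache-time 1m --cache-info-age 6h test-cache: /var/tmp/test-cache
+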
    +#### Windows support - Experimental ####
    +
+There are a couple of issues with Windows `mount` functionality that still require some investigation.
+It should be considered experimental for now while fixes come in for this OS.
    +
    +Most of the issues seem to be related to the difference between filesystems
    +on Linux flavors and Windows as cache is heavily dependent on them.
    +
    +Any reports or feedback on how cache behaves on this OS is greatly appreciated.
    + 
    +- https://github.com/rclone/rclone/issues/1935
    +- https://github.com/rclone/rclone/issues/1907
    +- https://github.com/rclone/rclone/issues/1834 
    +
    +#### Risk of throttling ####
    +
    +Future iterations of the cache backend will make use of the pooling functionality
    +of the cloud provider to synchronize and at the same time make writing through it
    +more tolerant to failures. 
    +
+There are a couple of enhancements on track to add these but in the meantime
    +there is a valid concern that the expiring cache listings can lead to cloud provider
    +throttles or bans due to repeated queries on it for very large mounts.
    +
    +Some recommendations:
    +- don't use a very small interval for entry information (`--cache-info-age`)
    +- while writes aren't yet optimised, you can still write through `cache` which gives you the advantage
    +of adding the file in the cache at the same time if configured to do so.
    +
    +Future enhancements:
    +
    +- https://github.com/rclone/rclone/issues/1937
    +- https://github.com/rclone/rclone/issues/1936 
    +
    +#### cache and crypt ####
    +
    +One common scenario is to keep your data encrypted in the cloud provider
    +using the `crypt` remote. `crypt` uses a similar technique to wrap around
    +an existing remote and handles this translation in a seamless way.
    +
    +There is an issue with wrapping the remotes in this order:
    +**cloud remote** -> **crypt** -> **cache**
    +
    +During testing, I experienced a lot of bans with the remotes in this order.
    +I suspect it might be related to how crypt opens files on the cloud provider
    +which makes it think we're downloading the full file instead of small chunks.
    +Organizing the remotes in this order yields better results:
    +**cloud remote** -> **cache** -> **crypt**
    +
    +#### absolute remote paths ####
    +
    +`cache` can not differentiate between relative and absolute paths for the wrapped remote.
    +Any path given in the `remote` config setting and on the command line will be passed to
    +the wrapped remote as is, but for storing the chunks on disk the path will be made
    +relative by removing any leading `/` character.
    +
    +This behavior is irrelevant for most backend types, but there are backends where a leading `/`
    +changes the effective directory, e.g. in the `sftp` backend paths starting with a `/` are
    +relative to the root of the SSH server and paths without are relative to the user home directory.
    +As a result `sftp:bin` and `sftp:/bin` will share the same cache folder, even if they represent
    +a different directory on the SSH server.
    +
+### Cache and Remote Control (--rc) ###
+
+Cache supports the new `--rc` mode in rclone and can be remote
+controlled through the following end points. By default, the listener
+is disabled if you do not add the flag.
    +
    +### rc cache/expire
    +Purge a remote from the cache backend. Supports either a directory or a file.
    +It supports both encrypted and unencrypted file names if cache is wrapped by crypt.
    +
    +Params:
    +  - **remote** = path to remote **(required)**
    +  - **withData** = true/false to delete cached data (chunks) as well _(optional, false by default)_
    +
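+For example, assuming rclone was started with `--rc`:
+
+    rclone rc cache/expire remote=path/to/sub/folder/
+    rclone rc cache/expire remote=/ withData=true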
    +
    +### Standard options
    +
    +Here are the Standard options specific to cache (Cache a remote).
    +
    +#### --cache-remote
    +
    +Remote to cache.
    +
    +Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
    +"myremote:bucket" or maybe "myremote:" (not recommended).
    +
    +Properties:
    +
    +- Config:      remote
    +- Env Var:     RCLONE_CACHE_REMOTE
    +- Type:        string
    +- Required:    true
    +
    +#### --cache-plex-url
    +
    +The URL of the Plex server.
    +
    +Properties:
    +
    +- Config:      plex_url
    +- Env Var:     RCLONE_CACHE_PLEX_URL
    +- Type:        string
    +- Required:    false
    +
    +#### --cache-plex-username
    +
    +The username of the Plex user.
    +
    +Properties:
    +
    +- Config:      plex_username
    +- Env Var:     RCLONE_CACHE_PLEX_USERNAME
    +- Type:        string
    +- Required:    false
    +
    +#### --cache-plex-password
    +
    +The password of the Plex user.
    +
    +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
    +
    +Properties:
    +
    +- Config:      plex_password
    +- Env Var:     RCLONE_CACHE_PLEX_PASSWORD
    +- Type:        string
    +- Required:    false
    +
    +#### --cache-chunk-size
    +
    +The size of a chunk (partial file data).
    +
    +Use lower numbers for slower connections. If the chunk size is
    +changed, any downloaded chunks will be invalid and cache-chunk-path
    +will need to be cleared or unexpected EOF errors will occur.
    +
    +Properties:
    +
    +- Config:      chunk_size
    +- Env Var:     RCLONE_CACHE_CHUNK_SIZE
    +- Type:        SizeSuffix
    +- Default:     5Mi
    +- Examples:
    +    - "1M"
    +        - 1 MiB
    +    - "5M"
    +        - 5 MiB
    +    - "10M"
    +        - 10 MiB
    +
    +#### --cache-info-age
    +
    +How long to cache file structure information (directory listings, file size, times, etc.). 
    +If all write operations are done through the cache then you can safely make
    +this value very large as the cache store will also be updated in real time.
    +
    +Properties:
    +
    +- Config:      info_age
    +- Env Var:     RCLONE_CACHE_INFO_AGE
    +- Type:        Duration
    +- Default:     6h0m0s
    +- Examples:
    +    - "1h"
    +        - 1 hour
    +    - "24h"
    +        - 24 hours
    +    - "48h"
    +        - 48 hours
    +
    +#### --cache-chunk-total-size
    +
    +The total size that the chunks can take up on the local disk.
    +
    +If the cache exceeds this value then it will start to delete the
    +oldest chunks until it goes under this value.
    +
    +Properties:
    +
    +- Config:      chunk_total_size
    +- Env Var:     RCLONE_CACHE_CHUNK_TOTAL_SIZE
    +- Type:        SizeSuffix
    +- Default:     10Gi
    +- Examples:
    +    - "500M"
    +        - 500 MiB
    +    - "1G"
    +        - 1 GiB
    +    - "10G"
    +        - 10 GiB
    +
    +### Advanced options
    +
    +Here are the Advanced options specific to cache (Cache a remote).
    +
    +#### --cache-plex-token
    +
    +The plex token for authentication - auto set normally.
    +
    +Properties:
    +
    +- Config:      plex_token
    +- Env Var:     RCLONE_CACHE_PLEX_TOKEN
    +- Type:        string
    +- Required:    false
    +
    +#### --cache-plex-insecure
    +
    +Skip all certificate verification when connecting to the Plex server.
    +
    +Properties:
    +
    +- Config:      plex_insecure
    +- Env Var:     RCLONE_CACHE_PLEX_INSECURE
    +- Type:        string
    +- Required:    false
    +
    +#### --cache-db-path
    +
    +Directory to store file structure metadata DB.
    +
    +The remote name is used as the DB file name.
    +
    +Properties:
    +
    +- Config:      db_path
    +- Env Var:     RCLONE_CACHE_DB_PATH
    +- Type:        string
    +- Default:     "$HOME/.cache/rclone/cache-backend"
    +
    +#### --cache-chunk-path
    +
    +Directory to cache chunk files.
    +
    +Path to where partial file data (chunks) are stored locally. The remote
    +name is appended to the final path.
    +
    +This config follows the "--cache-db-path". If you specify a custom
    +location for "--cache-db-path" and don't specify one for "--cache-chunk-path"
    +then "--cache-chunk-path" will use the same path as "--cache-db-path".
    +
    +Properties:
    +
    +- Config:      chunk_path
    +- Env Var:     RCLONE_CACHE_CHUNK_PATH
    +- Type:        string
    +- Default:     "$HOME/.cache/rclone/cache-backend"
    +
    +#### --cache-db-purge
    +
    +Clear all the cached data for this remote on start.
    +
    +Properties:
    +
    +- Config:      db_purge
    +- Env Var:     RCLONE_CACHE_DB_PURGE
    +- Type:        bool
    +- Default:     false
    +
    +#### --cache-chunk-clean-interval
    +
    +How often should the cache perform cleanups of the chunk storage.
    +
    +The default value should be ok for most people. If you find that the
    +cache goes over "cache-chunk-total-size" too often then try to lower
    +this value to force it to perform cleanups more often.
    +
    +Properties:
    +
    +- Config:      chunk_clean_interval
    +- Env Var:     RCLONE_CACHE_CHUNK_CLEAN_INTERVAL
    +- Type:        Duration
    +- Default:     1m0s
    +
    +#### --cache-read-retries
    +
    +How many times to retry a read from a cache storage.
    +
    +Since reading from a cache stream is independent from downloading file
    +data, readers can get to a point where there's no more data in the
+cache.  Most of the time this can indicate a connectivity issue if
    +cache isn't able to provide file data anymore.
    +
+For really slow connections, increase this to a point where the stream is
+able to provide data but expect your experience to be very stuttery.
    +
    +Properties:
    +
    +- Config:      read_retries
    +- Env Var:     RCLONE_CACHE_READ_RETRIES
    +- Type:        int
    +- Default:     10
    +
    +#### --cache-workers
    +
    +How many workers should run in parallel to download chunks.
    +
+Higher values will mean more parallel processing (better CPU needed)
+and more concurrent requests on the cloud provider.  This impacts
+several aspects like the cloud provider API limits and stress on the
+hardware that rclone runs on, but it also means that streams will be
+more fluid and data will be available much faster to readers.
    +
    +**Note**: If the optional Plex integration is enabled then this
    +setting will adapt to the type of reading performed and the value
    +specified here will be used as a maximum number of workers to use.
    +
    +Properties:
    +
    +- Config:      workers
    +- Env Var:     RCLONE_CACHE_WORKERS
    +- Type:        int
    +- Default:     4
    +
    +#### --cache-chunk-no-memory
    +
    +Disable the in-memory cache for storing chunks during streaming.
    +
    +By default, cache will keep file data during streaming in RAM as well
    +to provide it to readers as fast as possible.
    +
    +This transient data is evicted as soon as it is read and the number of
    +chunks stored doesn't exceed the number of workers. However, depending
    +on other settings like "cache-chunk-size" and "cache-workers" this footprint
    +can increase if there are parallel streams too (multiple files being read
    +at the same time).
    +
    +If the hardware permits it, use this feature to provide an overall better
    +performance during streaming but it can also be disabled if RAM is not
    +available on the local machine.
    +
    +Properties:
    +
    +- Config:      chunk_no_memory
    +- Env Var:     RCLONE_CACHE_CHUNK_NO_MEMORY
    +- Type:        bool
    +- Default:     false
    +
    +#### --cache-rps
    +
    +Limits the number of requests per second to the source FS (-1 to disable).
    +
    +This setting places a hard limit on the number of requests per second
+that cache will be doing to the cloud provider remote and will try to
+respect that value by setting waits between reads.
    +
    +If you find that you're getting banned or limited on the cloud
    +provider through cache and know that a smaller number of requests per
    +second will allow you to work with it then you can use this setting
    +for that.
    +
    +A good balance of all the other settings should make this setting
    +useless but it is available to set for more special cases.
    +
    +**NOTE**: This will limit the number of requests during streams but
    +other API calls to the cloud provider like directory listings will
    +still pass.
    +
    +Properties:
    +
    +- Config:      rps
    +- Env Var:     RCLONE_CACHE_RPS
    +- Type:        int
    +- Default:     -1
    +
    +#### --cache-writes
    +
    +Cache file data on writes through the FS.
    +
    +If you need to read files immediately after you upload them through
    +cache you can enable this flag to have their data stored in the
    +cache store at the same time during upload.
    +
    +Properties:
    +
    +- Config:      writes
    +- Env Var:     RCLONE_CACHE_WRITES
    +- Type:        bool
    +- Default:     false
    +
    +#### --cache-tmp-upload-path
    +
    +Directory to keep temporary files until they are uploaded.
    +
+This is the path that cache will use as temporary storage for new
+files that need to be uploaded to the cloud provider.
    +
+Specifying a value will enable this feature. Without it, it is
+completely disabled and files will be uploaded directly to the cloud
+provider.
    +
    +Properties:
    +
    +- Config:      tmp_upload_path
    +- Env Var:     RCLONE_CACHE_TMP_UPLOAD_PATH
    +- Type:        string
    +- Required:    false
    +
    +#### --cache-tmp-wait-time
    +
    +How long should files be stored in local cache before being uploaded.
    +
    +This is the duration that a file must wait in the temporary location
    +_cache-tmp-upload-path_ before it is selected for upload.
    +
+Note that only one file is uploaded at a time and it can take longer
+to start the upload if a queue has formed for this purpose.
    +
    +Properties:
    +
    +- Config:      tmp_wait_time
    +- Env Var:     RCLONE_CACHE_TMP_WAIT_TIME
    +- Type:        Duration
    +- Default:     15s
    +
    +#### --cache-db-wait-time
    +
    +How long to wait for the DB to be available - 0 is unlimited.
    +
    +Only one process can have the DB open at any one time, so rclone waits
    +for this duration for the DB to become available before it gives an
    +error.
    +
    +If you set it to 0 then it will wait forever.
    +
    +Properties:
    +
    +- Config:      db_wait_time
    +- Env Var:     RCLONE_CACHE_DB_WAIT_TIME
    +- Type:        Duration
    +- Default:     1s
    +
    +## Backend commands
    +
    +Here are the commands specific to the cache backend.
    +
    +Run them with
    +
    +    rclone backend COMMAND remote:
    +
    +The help below will explain what arguments each command takes.
    +
    +See the [backend](https://rclone.org/commands/rclone_backend/) command for more
    +info on how to pass options and arguments.
    +
    +These can be run on a running backend using the rc command
    +[backend/command](https://rclone.org/rc/#backend-command).
    +
    +### stats
    +
    +Print stats on the cache backend in JSON format.
    +
    +    rclone backend stats remote: [options] [<arguments>+]
    +
    +
    +
    +#  Chunker
    +
    +The `chunker` overlay transparently splits large files into smaller chunks
    +during upload to wrapped remote and transparently assembles them back
+when the file is downloaded. This allows you to effectively overcome size limits
    +imposed by storage providers.
    +
    +## Configuration
    +
    +To use it, first set up the underlying remote following the configuration
    +instructions for that remote. You can also use a local pathname instead of
    +a remote.
    +
    +First check your chosen remote is working - we'll call it `remote:path` here.
    +Note that anything inside `remote:path` will be chunked and anything outside
    +won't. This means that if you are using a bucket-based remote (e.g. S3, B2, swift)
    +then you should probably put the bucket in the remote `s3:bucket`.
    +
    +Now configure `chunker` using `rclone config`. We will call this one `overlay`
    +to separate it from the `remote` itself.
    +
+
+    No remotes found, make a new one?
+    n) New remote
+    s) Set configuration password
+    q) Quit config
+    n/s/q> n
+    name> overlay
+    Type of storage to configure.
+    Choose a number from below, or type in your own value
+    [snip]
+    XX / Transparently chunk/split large files
+       \ "chunker"
+    [snip]
+    Storage> chunker
+    Remote to chunk/unchunk.
+    Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
+    "myremote:bucket" or maybe "myremote:" (not recommended).
+    Enter a string value. Press Enter for the default ("").
+    remote> remote:path
+    Files larger than chunk size will be split in chunks.
+    Enter a size with suffix K,M,G,T. Press Enter for the default ("2G").
+    chunk_size> 100M
+    Choose how chunker handles hash sums. All modes but "none" require metadata.
+    Enter a string value. Press Enter for the default ("md5").
+    Choose a number from below, or type in your own value
+     1 / Pass any hash supported by wrapped remote for non-chunked files, return nothing otherwise
+       \ "none"
+     2 / MD5 for composite files
+       \ "md5"
+     3 / SHA1 for composite files
+       \ "sha1"
+     4 / MD5 for all files
+       \ "md5all"
+     5 / SHA1 for all files
+       \ "sha1all"
+     6 / Copying a file to chunker will request MD5 from the source falling back to SHA1 if unsupported
+       \ "md5quick"
+     7 / Similar to "md5quick" but prefers SHA1 over MD5
+       \ "sha1quick"
+    hash_type> md5
+    Edit advanced config? (y/n)
+    y) Yes
+    n) No
+    y/n> n
+    Remote config
+    --------------------
+    [overlay]
+    type = chunker
+    remote = remote:bucket
+    chunk_size = 100M
+    hash_type = md5
+    --------------------
+    y) Yes this is OK
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+
    +### Specifying the remote
    +
    +In normal use, make sure the remote has a `:` in it. If you specify the remote
    +without a `:` then rclone will use a local directory of that name.
    +So if you use a remote of `/path/to/secret/files` then rclone will
    +chunk stuff in that directory. If you use a remote of `name` then rclone
    +will put files in a directory called `name` in the current directory.
    +
    +
    +### Chunking
    +
    +When rclone starts a file upload, chunker checks the file size. If it
    +doesn't exceed the configured chunk size, chunker will just pass the file
    +to the wrapped remote (however, see the caveat below). If a file is large,
    +chunker will transparently cut the data into pieces with temporary names
    +and stream them one by one, on the fly.
    +Each data chunk will contain the specified number of bytes, except for the
    +last one which may have less data. If the file size is unknown in advance
    +(this is called a streaming upload), chunker will internally create
    +a temporary copy, record its size and repeat the above process.
    +
    +When upload completes, temporary chunk files are finally renamed.
    +This scheme guarantees that operations can be run in parallel and look
    +from outside as atomic.
    +A similar method with hidden temporary chunks is used for other operations
    +(copy/move/rename, etc.). If an operation fails, hidden chunks are normally
    +destroyed, and the target composite file stays intact.
    +
    +When a composite file download is requested, chunker transparently
    +assembles it by concatenating data chunks in order. As the split is trivial
    +one could even manually concatenate data chunks together to obtain the
    +original content.
    +
    +When the `list` rclone command scans a directory on wrapped remote,
    +the potential chunk files are accounted for, grouped and assembled into
    +composite directory entries. Any temporary chunks are hidden.
    +
    +List and other commands can sometimes come across composite files with
    +missing or invalid chunks, e.g. shadowed by a like-named directory or
    +another file. This usually means that the wrapped file system has been
    +directly tampered with or damaged. If chunker detects a missing chunk it
    +will by default print a warning and skip the whole incomplete group of
    +chunks, but proceed with the current command.
    +You can set the `--chunker-fail-hard` flag to have commands abort with an
    +error message in such cases.
    +
    +**Caveat**: As it is now, chunker will always create a temporary file in the 
    +backend and then rename it, even if the file is below the chunk threshold.
    +This will result in unnecessary API calls and can severely restrict throughput
    +when handling transfers primarily composed of small files on some backends (e.g. Box).
    +A workaround to this issue is to use chunker only for files above the chunk threshold
    +via `--min-size` and then perform a separate call without chunker on the remaining
    +files. 
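    +
    +With the default 2G chunk size, a sketch of this workaround might look
    +like the following (paths and remote names are hypothetical; note that
    +files of exactly 2G match both filters):
    +
    +    # big files go through the chunker overlay
    +    rclone copy --min-size 2G /data overlay:backup
    +    # small files go straight to the wrapped remote
    +    rclone copy --max-size 2G /data remote:path/backup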
    +
    +
    +#### Chunk names
    +
    +The default chunk name format is `*.rclone_chunk.###`, hence by default
    +chunk names are `BIG_FILE_NAME.rclone_chunk.001`,
    +`BIG_FILE_NAME.rclone_chunk.002` etc. You can configure another name format
    +using the `name_format` configuration file option. The format uses asterisk
    +`*` as a placeholder for the base file name and one or more consecutive
    +hash characters `#` as a placeholder for sequential chunk number.
    +There must be one and only one asterisk. The number of consecutive hash
    +characters defines the minimum length of a string representing a chunk number.
    +If the decimal chunk number has fewer digits than the number of hashes, it is
    +left-padded by zeros. If the decimal string is longer, it is left intact.
    +By default numbering starts from 1 but there is another option that allows
    +the user to start from 0, e.g. for compatibility with legacy software.
    +
    +For example, if name format is `big_*-##.part` and original file name is
    +`data.txt` and numbering starts from 0, then the first chunk will be named
    +`big_data.txt-00.part`, the 99th chunk will be `big_data.txt-98.part`
    +and the 302nd chunk will become `big_data.txt-301.part`.
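    +
    +That example corresponds to a chunker configuration like this minimal
    +sketch (the remote name and values are illustrative):
    +
    +```
    +[overlay]
    +type = chunker
    +remote = remote:path
    +name_format = big_*-##.part
    +start_from = 0
    +```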
    +
    +Note that `list` assembles composite directory entries only when chunk names
    +match the configured format and treats non-conforming file names as normal
    +non-chunked files.
    +
    +When using `norename` transactions, chunk names will additionally have a unique
    +file version suffix. For example, `BIG_FILE_NAME.rclone_chunk.001_bp562k`.
    +
    +
    +### Metadata
    +
    +Besides data chunks, chunker will by default create a metadata object for
    +a composite file. The object is named after the original file.
    +Chunker allows the user to disable metadata completely (the `none` format).
    +Note that metadata is normally not created for files smaller than the
    +configured chunk size. This may change in future rclone releases.
    +
    +#### Simple JSON metadata format
    +
    +This is the default format. It supports hash sums and chunk validation
    +for composite files. Meta objects carry the following fields:
    +
    +- `ver`     - version of format, currently `1`
    +- `size`    - total size of composite file
    +- `nchunks` - number of data chunks in file
    +- `md5`     - MD5 hashsum of composite file (if present)
    +- `sha1`    - SHA1 hashsum (if present)
    +- `txn`     - identifies current version of the file
    +
    +There is no field for composite file name as it's simply equal to the name
    +of meta object on the wrapped remote. Please refer to respective sections
    +for details on hashsums and modified time handling.
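    +
    +For illustration, the meta object for a three-chunk file could look
    +something like this (all values made up; the exact layout may differ
    +between rclone versions):
    +
    +```
    +{"ver":1,"size":5242880000,"nchunks":3,"md5":"9e107d9d372bb6826bd81d3542a419d6","txn":"bp562k"}
    +```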
    +
    +#### No metadata
    +
    +You can disable meta objects by setting the meta format option to `none`.
    +In this mode chunker will scan directory for all files that follow
    +configured chunk name format, group them by detecting chunks with the same
    +base name and show group names as virtual composite files.
    +This method is more prone to missing chunk errors (especially missing
    +last chunk) than format with metadata enabled.
    +
    +
    +### Hashsums
    +
    +Chunker supports hashsums only when compatible metadata is present.
    +Hence, if you choose a metadata format of `none`, chunker will report
    +the hashsum as `UNSUPPORTED`.
    +
    +Please note that by default metadata is stored only for composite files.
    +If a file is smaller than configured chunk size, chunker will transparently
    +redirect hash requests to wrapped remote, so support depends on that.
    +You will see the empty string as a hashsum of requested type for small
    +files if the wrapped remote doesn't support it.
    +
    +Many storage backends support MD5 and SHA1 hash types, as does chunker.
    +With chunker you can choose one or the other but not both.
    +MD5 is set by default as the most widely supported type.
    +Since chunker keeps hashes for composite files and falls back to the
    +wrapped remote hash for non-chunked ones, we advise you to choose the same
    +hash type as supported by the wrapped remote so that your file listings
    +look coherent.
    +
    +If your storage backend does not support MD5 or SHA1 but you need consistent
    +file hashing, configure chunker with `md5all` or `sha1all`. These two modes
    +guarantee the chosen hash for all files. If the wrapped remote doesn't
    +support it, chunker will then add metadata to all files, even small ones.
    +However, this can double the number of small files in storage and incur
    +additional service charges.
    +You can even use chunker to force md5/sha1 support in any other remote
    +at expense of sidecar meta objects by setting e.g. `hash_type=sha1all`
    +to force hashsums and `chunk_size=1P` to effectively disable chunking.
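    +
    +As a concrete sketch of that trick, a configuration like the following
    +(hypothetical remote name) forces SHA1 sums on any wrapped remote:
    +
    +```
    +[sha1er]
    +type = chunker
    +remote = remote:path
    +hash_type = sha1all
    +chunk_size = 1P
    +```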
    +
    +Normally, when a file is copied to a chunker controlled remote, chunker
    +will ask the file source for a compatible file hash and revert to on-the-fly
    +calculation if none is found. This involves some CPU overhead but provides
    +a guarantee that the given hashsum is available. Also, chunker will reject
    +a server-side copy or move operation if the source and destination hashsum
    +types are different, forcing a normal transfer that costs extra network
    +bandwidth, too.
    +In some rare cases this may be undesired, so chunker provides two optional
    +choices: `sha1quick` and `md5quick`. If the source does not support primary
    +hash type and the quick mode is enabled, chunker will try to fall back to
    +the secondary type. This will save CPU and bandwidth but can result in empty
    +hashsums at destination. Beware of consequences: the `sync` command will
    +revert (sometimes silently) to time/size comparison if compatible hashsums
    +between source and target are not found.
    +
    +
    +### Modified time
    +
    +Chunker stores modification times using the wrapped remote so support
    +depends on that. For a small non-chunked file the chunker overlay simply
    +manipulates modification time of the wrapped remote file.
    +For a composite file with metadata chunker will get and set
    +modification time of the metadata object on the wrapped remote.
    +If a file is chunked but the metadata format is `none` then chunker will
    +use modification time of the first data chunk.
    +
    +
    +### Migrations
    +
    +The idiomatic way to migrate to a different chunk size, hash type, transaction
    +style or chunk naming scheme is to:
    +
    +- Collect all your chunked files under a directory and have your
    +  chunker remote point to it.
    +- Create another directory (most probably on the same cloud storage)
    +  and configure a new remote with desired metadata format,
    +  hash type, chunk naming etc.
    +- Now run `rclone sync --interactive oldchunks: newchunks:` and all your data
    +  will be transparently converted in transfer.
    +  This may take some time, yet chunker will try server-side
    +  copy if possible.
    +- After checking data integrity you may remove configuration section
    +  of the old remote.
    +
    +If rclone gets killed during a long operation on a big composite file,
    +hidden temporary chunks may stay in the directory. They will not be
    +shown by the `list` command but will eat up your account quota.
    +Please note that the `deletefile` command deletes only active
    +chunks of a file. As a workaround, you can use remote of the wrapped
    +file system to see them.
    +An easy way to get rid of hidden garbage is to copy littered directory
    +somewhere using the chunker remote and purge the original directory.
    +The `copy` command will copy only active chunks while the `purge` will
    +remove everything including garbage.
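    +
    +A minimal sketch of that clean-up, assuming the chunker remote is called
    +`overlay:` (directory names are illustrative):
    +
    +    rclone copy overlay:dir overlay:dir.clean   # copies only active chunks
    +    rclone purge overlay:dir                    # removes everything, garbage included
    +    rclone move overlay:dir.clean overlay:dir   # put the clean copy back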
    +
    +
    +### Caveats and Limitations
    +
    +Chunker requires wrapped remote to support server-side `move` (or `copy` +
    +`delete`) operations, otherwise it will explicitly refuse to start.
    +This is because it internally renames temporary chunk files to their final
    +names when an operation completes successfully.
    +
    +Chunker encodes the chunk number in the file name, so with the default
    +`name_format` setting it adds 17 characters. Chunker also adds a 7 character
    +temporary suffix during operations. Many file systems limit the base file
    +name (without path) to 255 characters. Using rclone's crypt remote as a base
    +file system limits the file name to 143 characters. Thus, the maximum name
    +length is 231 (255 - 17 - 7) for most files and 119 (143 - 17 - 7) for
    +chunker-over-crypt. A user in need can change the name format to
    +e.g. `*.rcc##` and save 10 characters (provided at most 99 chunks per file).
    +
    +Note that a move implemented using the copy-and-delete method may incur
    +double charging with some cloud storage providers.
    +
    +Chunker will not automatically rename existing chunks when you run
    +`rclone config` on a live remote and change the chunk name format.
    +Beware that, as a result of this, some files which have been treated as
    +chunks before the change can pop up in directory listings as normal files
    +and vice versa. The same warning holds for the chunk size.
    +If you desperately need to change critical chunking settings, you should
    +run data migration as described above.
    +
    +If wrapped remote is case insensitive, the chunker overlay will inherit
    +that property (so you can't have a file called "Hello.doc" and "hello.doc"
    +in the same directory).
    +
    +Chunker included in rclone releases up to `v1.54` can sometimes fail to
    +detect metadata produced by recent versions of rclone. We recommend that
    +users keep rclone up-to-date to avoid data corruption.
    +
    +Changing `transactions` is dangerous and requires explicit migration.
    +
    +
    +### Standard options
    +
    +Here are the Standard options specific to chunker (Transparently chunk/split large files).
    +
    +#### --chunker-remote
    +
    +Remote to chunk/unchunk.
    +
    +Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
    +"myremote:bucket" or maybe "myremote:" (not recommended).
    +
    +Properties:
    +
    +- Config:      remote
    +- Env Var:     RCLONE_CHUNKER_REMOTE
    +- Type:        string
    +- Required:    true
    +
    +#### --chunker-chunk-size
    +
    +Files larger than chunk size will be split in chunks.
    +
    +Properties:
    +
    +- Config:      chunk_size
    +- Env Var:     RCLONE_CHUNKER_CHUNK_SIZE
    +- Type:        SizeSuffix
    +- Default:     2Gi
    +
    +#### --chunker-hash-type
    +
    +Choose how chunker handles hash sums.
    +
    +All modes but "none" require metadata.
    +
    +Properties:
    +
    +- Config:      hash_type
    +- Env Var:     RCLONE_CHUNKER_HASH_TYPE
    +- Type:        string
    +- Default:     "md5"
    +- Examples:
    +    - "none"
    +        - Pass any hash supported by wrapped remote for non-chunked files.
    +        - Return nothing otherwise.
    +    - "md5"
    +        - MD5 for composite files.
    +    - "sha1"
    +        - SHA1 for composite files.
    +    - "md5all"
    +        - MD5 for all files.
    +    - "sha1all"
    +        - SHA1 for all files.
    +    - "md5quick"
    +        - Copying a file to chunker will request MD5 from the source.
    +        - Falling back to SHA1 if unsupported.
    +    - "sha1quick"
    +        - Similar to "md5quick" but prefers SHA1 over MD5.
    +
    +### Advanced options
    +
    +Here are the Advanced options specific to chunker (Transparently chunk/split large files).
    +
    +#### --chunker-name-format
    +
    +String format of chunk file names.
    +
    +The two placeholders are: base file name (*) and chunk number (#...).
    +There must be one and only one asterisk and one or more consecutive hash characters.
    +If chunk number has less digits than the number of hashes, it is left-padded by zeros.
    +If there are more digits in the number, they are left as is.
    +Possible chunk files are ignored if their name does not match given format.
    +
    +Properties:
    +
    +- Config:      name_format
    +- Env Var:     RCLONE_CHUNKER_NAME_FORMAT
    +- Type:        string
    +- Default:     "*.rclone_chunk.###"
    +
    +#### --chunker-start-from
    +
    +Minimum valid chunk number. Usually 0 or 1.
    +
    +By default chunk numbers start from 1.
    +
    +Properties:
    +
    +- Config:      start_from
    +- Env Var:     RCLONE_CHUNKER_START_FROM
    +- Type:        int
    +- Default:     1
    +
    +#### --chunker-meta-format
    +
    +Format of the metadata object or "none".
    +
    +By default "simplejson".
    +Metadata is a small JSON file named after the composite file.
    +
    +Properties:
    +
    +- Config:      meta_format
    +- Env Var:     RCLONE_CHUNKER_META_FORMAT
    +- Type:        string
    +- Default:     "simplejson"
    +- Examples:
    +    - "none"
    +        - Do not use metadata files at all.
    +        - Requires hash type "none".
    +    - "simplejson"
    +        - Simple JSON supports hash sums and chunk validation.
    +        - 
    +        - It has the following fields: ver, size, nchunks, md5, sha1.
    +
    +#### --chunker-fail-hard
    +
    +Choose how chunker should handle files with missing or invalid chunks.
    +
    +Properties:
    +
    +- Config:      fail_hard
    +- Env Var:     RCLONE_CHUNKER_FAIL_HARD
    +- Type:        bool
    +- Default:     false
    +- Examples:
    +    - "true"
    +        - Report errors and abort current command.
    +    - "false"
    +        - Warn user, skip incomplete file and proceed.
    +
    +#### --chunker-transactions
    +
    +Choose how chunker should handle temporary files during transactions.
    +
    +Properties:
    +
    +- Config:      transactions
    +- Env Var:     RCLONE_CHUNKER_TRANSACTIONS
    +- Type:        string
    +- Default:     "rename"
    +- Examples:
    +    - "rename"
    +        - Rename temporary files after a successful transaction.
    +    - "norename"
    +        - Leave temporary file names and write transaction ID to metadata file.
    +        - Metadata is required for no rename transactions (meta format cannot be "none").
    +        - If you are using norename transactions you should be careful not to downgrade Rclone
    +        - as older versions of Rclone don't support this transaction style and will misinterpret
    +        - files manipulated by norename transactions.
    +        - This method is EXPERIMENTAL, don't use on production systems.
    +    - "auto"
    +        - Rename or norename will be used depending on capabilities of the backend.
    +        - If meta format is set to "none", rename transactions will always be used.
    +        - This method is EXPERIMENTAL, don't use on production systems.
    +
    +
    +
    +#  Citrix ShareFile
    +
    +[Citrix ShareFile](https://sharefile.com) is a secure file sharing and transfer service aimed at businesses.
    +
    +## Configuration
    +
    +The initial setup for Citrix ShareFile involves getting a token from
    +Citrix ShareFile which you can do in your browser.  `rclone config` walks you
    +through it.
    +
    +Here is an example of how to make a remote called `remote`.  First run:
    +
    +     rclone config
    +
    +This will guide you through an interactive setup process:
    +
    +```
    +No remotes found, make a new one?
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +name> remote
    +Type of storage to configure.
    +Enter a string value. Press Enter for the default ("").
    +Choose a number from below, or type in your own value
    +XX / Citrix Sharefile
    +   \ "sharefile"
    +Storage> sharefile
    +** See help for sharefile backend at: https://rclone.org/sharefile/ **
    +
    +ID of the root folder
    +
    +Leave blank to access "Personal Folders".  You can use one of the
    +standard values here or any folder ID (long hex number ID).
    +Enter a string value. Press Enter for the default ("").
    +Choose a number from below, or type in your own value
    + 1 / Access the Personal Folders. (Default)
    +   \ ""
    + 2 / Access the Favorites folder.
    +   \ "favorites"
    + 3 / Access all the shared folders.
    +   \ "allshared"
    + 4 / Access all the individual connectors.
    +   \ "connectors"
    + 5 / Access the home, favorites, and shared folders as well as the connectors.
    +   \ "top"
    +root_folder_id>
    +Edit advanced config? (y/n)
    +y) Yes
    +n) No
    +y/n> n
    +Remote config
    +Use web browser to automatically authenticate rclone with remote?
    + * Say Y if the machine running rclone has a web browser you can use
    + * Say N if running rclone on a (remote) machine without web browser access
    +If not sure try Y. If Y failed, try N.
    +y) Yes
    +n) No
    +y/n> y
    +If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=XXX
    +Log in and authorize rclone for access
    +Waiting for code...
    +Got code
    +--------------------
    +[remote]
    +type = sharefile
    +endpoint = https://XXX.sharefile.com
    +token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2019-09-30T19:41:45.878561877+01:00"}
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +```
    +
    +See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
    +machine with no Internet browser available.
    +
    +Note that rclone runs a webserver on your local machine to collect the
    +token as returned from Citrix ShareFile. This only runs from the moment it opens
    +your browser to the moment you get back the verification code.  This
    +is on `http://127.0.0.1:53682/` and it may require you to unblock
    +it temporarily if you are running a host firewall.
    +
    +Once configured you can then use `rclone` like this,
    +
    +List directories in top level of your ShareFile
    +
    +    rclone lsd remote:
    +
    +List all the files in your ShareFile
    +
    +    rclone ls remote:
    +
    +To copy a local directory to a ShareFile directory called backup
    +
    +    rclone copy /home/source remote:backup
    +
    +Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
    +
    +### Modified time and hashes
    +
    +ShareFile allows modification times to be set on objects accurate to 1
    +second.  These will be used to detect whether objects need syncing or
    +not.
    +
    +ShareFile supports MD5 type hashes, so you can use the `--checksum`
    +flag.
    +
    +### Transfers
    +
    +For files above 128 MiB rclone will use a chunked transfer.  Rclone will
    +upload up to `--transfers` chunks at the same time (shared among all
    +the multipart uploads).  Chunks are buffered in memory and are
    +normally 64 MiB so increasing `--transfers` will increase memory use.
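    +
    +For example, halving the chunk size roughly halves the memory buffered
    +per transfer, at the cost of more API calls (illustrative values):
    +
    +    rclone copy --sharefile-chunk-size 32M /home/source remote:backup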
    +
    +### Restricted filename characters
    +
    +In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
    +the following characters are also replaced:
    +
    +| Character | Value | Replacement |
    +| --------- |:-----:|:-----------:|
    +| \\        | 0x5C  | \           |
    +| *         | 0x2A  | *           |
    +| <         | 0x3C  | <           |
    +| >         | 0x3E  | >           |
    +| ?         | 0x3F  | ?           |
    +| :         | 0x3A  | :           |
    +| \|        | 0x7C  | |           |
    +| "         | 0x22  | "           |
    +
    +File names can also not start or end with the following characters.
    +These only get replaced if they are the first or last character in the
    +name:
    +
    +| Character | Value | Replacement |
    +| --------- |:-----:|:-----------:|
    +| SP        | 0x20  | ␠           |
    +| .         | 0x2E  | .           |
    +
    +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
    +as they can't be used in JSON strings.
    +
    +
    +### Standard options
    +
    +Here are the Standard options specific to sharefile (Citrix Sharefile).
    +
    +#### --sharefile-client-id
    +
    +OAuth Client Id.
    +
    +Leave blank normally.
    +
    +Properties:
    +
    +- Config:      client_id
    +- Env Var:     RCLONE_SHAREFILE_CLIENT_ID
    +- Type:        string
    +- Required:    false
    +
    +#### --sharefile-client-secret
    +
    +OAuth Client Secret.
    +
    +Leave blank normally.
    +
    +Properties:
    +
    +- Config:      client_secret
    +- Env Var:     RCLONE_SHAREFILE_CLIENT_SECRET
    +- Type:        string
    +- Required:    false
    +
    +#### --sharefile-root-folder-id
    +
    +ID of the root folder.
     
     Leave blank to access "Personal Folders".  You can use one of the
     standard values here or any folder ID (long hex number ID).
    -Enter a string value. Press Enter for the default ("").
    -Choose a number from below, or type in your own value
    - 1 / Access the Personal Folders. (Default)
    -   \ ""
    - 2 / Access the Favorites folder.
    -   \ "favorites"
    - 3 / Access all the shared folders.
    -   \ "allshared"
    - 4 / Access all the individual connectors.
    -   \ "connectors"
    - 5 / Access the home, favorites, and shared folders as well as the connectors.
    -   \ "top"
    -root_folder_id> 
    -Edit advanced config? (y/n)
    -y) Yes
    -n) No
    -y/n> n
    -Remote config
    -Use web browser to automatically authenticate rclone with remote?
    - * Say Y if the machine running rclone has a web browser you can use
    - * Say N if running rclone on a (remote) machine without web browser access
    -If not sure try Y. If Y failed, try N.
    -y) Yes
    -n) No
    -y/n> y
    -If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=XXX
    -Log in and authorize rclone for access
    -Waiting for code...
    -Got code
    ---------------------
    -[remote]
    -type = sharefile
    -endpoint = https://XXX.sharefile.com
    -token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2019-09-30T19:41:45.878561877+01:00"}
    ---------------------
    -y) Yes this is OK
    -e) Edit this remote
    -d) Delete this remote
    -y/e/d> y
    -


    -
    No remotes found, make a new one?
    -n) New remote
    -s) Set configuration password
    -q) Quit config
    -n/s/q> n
    -name> secret
    -Type of storage to configure.
    -Enter a string value. Press Enter for the default ("").
    -Choose a number from below, or type in your own value
    -[snip]
    -XX / Encrypt/Decrypt a remote
    -   \ "crypt"
    -[snip]
    -Storage> crypt
    -** See help for crypt backend at: https://rclone.org/crypt/ **
    +
    +Properties:
    +
    +- Config:      root_folder_id
    +- Env Var:     RCLONE_SHAREFILE_ROOT_FOLDER_ID
    +- Type:        string
    +- Required:    false
    +- Examples:
    +    - ""
    +        - Access the Personal Folders (default).
    +    - "favorites"
    +        - Access the Favorites folder.
    +    - "allshared"
    +        - Access all the shared folders.
    +    - "connectors"
    +        - Access all the individual connectors.
    +    - "top"
    +        - Access the home, favorites, and shared folders as well as the connectors.
    +
    +### Advanced options
    +
    +Here are the Advanced options specific to sharefile (Citrix Sharefile).
    +
    +#### --sharefile-token
    +
    +OAuth Access Token as a JSON blob.
    +
    +Properties:
    +
    +- Config:      token
    +- Env Var:     RCLONE_SHAREFILE_TOKEN
    +- Type:        string
    +- Required:    false
    +
    +#### --sharefile-auth-url
    +
    +Auth server URL.
    +
    +Leave blank to use the provider defaults.
    +
    +Properties:
    +
    +- Config:      auth_url
    +- Env Var:     RCLONE_SHAREFILE_AUTH_URL
    +- Type:        string
    +- Required:    false
    +
    +#### --sharefile-token-url
    +
    +Token server url.
    +
    +Leave blank to use the provider defaults.
    +
    +Properties:
    +
    +- Config:      token_url
    +- Env Var:     RCLONE_SHAREFILE_TOKEN_URL
    +- Type:        string
    +- Required:    false
    +
    +#### --sharefile-upload-cutoff
    +
    +Cutoff for switching to multipart upload.
    +
    +Properties:
    +
    +- Config:      upload_cutoff
    +- Env Var:     RCLONE_SHAREFILE_UPLOAD_CUTOFF
    +- Type:        SizeSuffix
    +- Default:     128Mi
    +
    +#### --sharefile-chunk-size
    +
    +Upload chunk size.
    +
    +Must be a power of 2 >= 256k.
    +
    +Making this larger will improve performance, but note that each chunk
    +is buffered in memory, one per transfer.
    +
    +Reducing this will reduce memory usage but decrease performance.
    +
    +Properties:
    +
    +- Config:      chunk_size
    +- Env Var:     RCLONE_SHAREFILE_CHUNK_SIZE
    +- Type:        SizeSuffix
    +- Default:     64Mi
    +
    +#### --sharefile-endpoint
    +
    +Endpoint for API calls.
    +
    +This is usually auto discovered as part of the oauth process, but can
    +be set manually to something like: https://XXX.sharefile.com
    +
    +
    +Properties:
    +
    +- Config:      endpoint
    +- Env Var:     RCLONE_SHAREFILE_ENDPOINT
    +- Type:        string
    +- Required:    false
    +
    +#### --sharefile-encoding
    +
    +The encoding for the backend.
    +
    +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
    +
    +Properties:
    +
    +- Config:      encoding
    +- Env Var:     RCLONE_SHAREFILE_ENCODING
    +- Type:        MultiEncoder
    +- Default:     Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot
    +
    +
    +## Limitations
    +
    +Note that ShareFile is case insensitive so you can't have a file called
    +"Hello.doc" and one called "hello.doc".
    +
    +ShareFile only supports filenames up to 256 characters in length.
    +
    +`rclone about` is not supported by the Citrix ShareFile backend. Backends without
    +this capability cannot determine free space for an rclone mount or
    +use policy `mfs` (most free space) as a member of an rclone union
    +remote.
    +
    +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
    +
    +#  Crypt
    +
    +Rclone `crypt` remotes encrypt and decrypt other remotes.
    +
    +A remote of type `crypt` does not access a [storage system](https://rclone.org/overview/)
    +directly, but instead wraps another remote, which in turn accesses
    +the storage system. This is similar to how [alias](https://rclone.org/alias/),
    +[union](https://rclone.org/union/), [chunker](https://rclone.org/chunker/)
    +and a few others work. It makes the usage very flexible, as you can
    +add a layer, in this case an encryption layer, on top of any other
    +backend, even in multiple layers. Rclone's functionality
    +can be used as with any other remote, for example you can
    +[mount](https://rclone.org/commands/rclone_mount/) a crypt remote.
    +
    +Accessing a storage system through a crypt remote realizes client-side
    +encryption, which makes it safe to keep your data in a location you
    +cannot trust to remain uncompromised.
    +When working against the `crypt` remote, rclone will automatically
    +encrypt (before uploading) and decrypt (after downloading) on your local
    +system as needed on the fly, leaving the data encrypted at rest in the
    +wrapped remote. If you access the storage system using an application
    +other than rclone, or access the wrapped remote directly using rclone,
    +there will not be any encryption/decryption: Downloading existing content
    +will just give you the encrypted (scrambled) format, and anything you
    +upload will *not* become encrypted.
    +
    +The encryption is a secret-key encryption (also called symmetric key encryption)
    +algorithm, where a password (or pass phrase) is used to generate the real
    +encryption key. The password can be supplied by the user, or you may choose to
    +let rclone generate one. It will be stored in the configuration file, in a
    +lightly obscured form.
    +If you are in an environment where you are not able to keep your configuration
    +secured, you should add
    +[configuration encryption](https://rclone.org/docs/#configuration-encryption)
    +as protection. As long as you have this configuration file, you will be able to
    +decrypt your data. Without the configuration file, as long as you remember
    +the password (or keep it in a safe place), you can re-create the configuration
    +and gain access to the existing data. You may also configure a corresponding
    +remote in a different installation to access the same data.
    +See below for guidance to [changing password](#changing-password).
    +
    +Encryption uses [cryptographic salt](https://en.wikipedia.org/wiki/Salt_(cryptography)),
    +to permute the encryption key so that the same string may be encrypted in
    +different ways. When configuring the crypt remote it is optional to enter a salt,
    +or to let rclone generate a unique salt. If omitted, rclone uses a built-in unique string.
    +Normally in cryptography, the salt is stored together with the encrypted content,
    +and does not have to be memorized by the user. This is not the case in rclone,
    +because rclone does not store any additional information on the remotes. Use of
    +custom salt is effectively a second password that must be memorized.
    +
    +[File content](#file-encryption) encryption is performed using
    +[NaCl SecretBox](https://godoc.org/golang.org/x/crypto/nacl/secretbox),
    +based on XSalsa20 cipher and Poly1305 for integrity.
    +[Names](#name-encryption) (file- and directory names) are also encrypted
    +by default, but this has some implications and is therefore
    +possible to be turned off.
    +
    +## Configuration
    +
    +Here is an example of how to make a remote called `secret`.
    +
    +To use `crypt`, first set up the underlying remote. Follow the
    +`rclone config` instructions for the specific backend.
    +
    +Before configuring the crypt remote, check the underlying remote is
    +working. In this example the underlying remote is called `remote`.
    +We will configure a path `path` within this remote to contain the
    +encrypted content. Anything inside `remote:path` will be encrypted
    +and anything outside will not.
    +
    +Configure `crypt` using `rclone config`. In this example the `crypt`
    +remote is called `secret`, to differentiate it from the underlying
    +`remote`.
    +
    +When you are done you can use the crypt remote named `secret` just
    +as you would with any other remote, e.g. `rclone copy D:\docs secret:\docs`,
    +and rclone will encrypt and decrypt as needed on the fly.
    +If you access the wrapped remote `remote:path` directly you will bypass
    +the encryption, and anything you read will be in encrypted form, and
    +anything you write will be unencrypted. To avoid issues it is best to
    +configure a dedicated path for encrypted content, and access it
    +exclusively through a crypt remote.
    +
    +```
    +No remotes found, make a new one?
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +name> secret
    +Type of storage to configure.
    +Enter a string value. Press Enter for the default ("").
    +Choose a number from below, or type in your own value
    +[snip]
    +XX / Encrypt/Decrypt a remote
    +   \ "crypt"
    +[snip]
    +Storage> crypt
    +** See help for crypt backend at: https://rclone.org/crypt/ **
    +
    +Remote to encrypt/decrypt.
    +Normally should contain a ':' and a path, eg "myremote:path/to/dir",
    +"myremote:bucket" or maybe "myremote:" (not recommended).
    +Enter a string value. Press Enter for the default ("").
    +remote> remote:path
    +How to encrypt the filenames.
    +Enter a string value. Press Enter for the default ("standard").
    +Choose a number from below, or type in your own value.
    +   / Encrypt the filenames.
    + 1 | See the docs for the details.
    +   \ "standard"
    + 2 / Very simple filename obfuscation.
    +   \ "obfuscate"
    +   / Don't encrypt the file names.
    + 3 | Adds a ".bin" extension only.
    +   \ "off"
    +filename_encryption>
    +Option to either encrypt directory names or leave them intact.
    +
    +NB If filename_encryption is "off" then this option will do nothing.
    +Enter a boolean value (true or false). Press Enter for the default ("true").
    +Choose a number from below, or type in your own value
    + 1 / Encrypt directory names.
    +   \ "true"
    + 2 / Don't encrypt directory names, leave them intact.
    +   \ "false"
    +directory_name_encryption>
    +Password or pass phrase for encryption.
    +y) Yes type in my own password
    +g) Generate random password
    +y/g> y
    +Enter the password:
    +password:
    +Confirm the password:
    +password:
    +Password or pass phrase for salt. Optional but recommended.
    +Should be different to the previous password.
    +y) Yes type in my own password
    +g) Generate random password
    +n) No leave this optional password blank (default)
    +y/g/n> g
    +Password strength in bits.
    +64 is just about memorable
    +128 is secure
    +1024 is the maximum
    +Bits> 128
    +Your password is: JAsJvRcgR-_veXNfy_sGmQ
    +Use this password? Please note that an obscured version of this
    +password (and not the password itself) will be stored under your
    +configuration file, so keep this generated password in a safe place.
    +y) Yes (default)
    +n) No
    +y/n>
    +Edit advanced config? (y/n)
    +y) Yes
    +n) No (default)
    +y/n>
    +Remote config
    +--------------------
    +[secret]
    +type = crypt
    +remote = remote:path
    +password = *** ENCRYPTED ***
    +password2 = *** ENCRYPTED ***
    +--------------------
    +y) Yes this is OK (default)
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d>
    +```
    +
    +**Important** The crypt password stored in `rclone.conf` is lightly
    +obscured. That only protects it from cursory inspection. It is not
    +secure unless [configuration encryption](https://rclone.org/docs/#configuration-encryption) of `rclone.conf` is specified.
    +
    +A long passphrase is recommended, or `rclone config` can generate a
    +random one.
    +
    +The obscured password is created using AES-CTR with a static key. The
    +salt is stored verbatim at the beginning of the obscured password. This
    +static key is shared between all versions of rclone.
    +
    +If you reconfigure rclone with the same passwords/passphrases
    +elsewhere it will be compatible, but the obscured version will be different
    +due to the different salt.
    +
    +Rclone does not encrypt
    +
    +  * file length - this can be calculated within 16 bytes
    +  * modification time - used for syncing
    +
    +### Specifying the remote
    +
    +When configuring the remote to encrypt/decrypt, you may specify any
    +string that rclone accepts as a source/destination of other commands.
    +
    +The primary use case is to specify the path into an already configured
    +remote (e.g. `remote:path/to/dir` or `remote:bucket`), such that
    +data in a remote untrusted location can be stored encrypted.
    +
    +You may also specify a local filesystem path, such as
    +`/path/to/dir` on Linux, `C:\path\to\dir` on Windows. By creating
    +a crypt remote pointing to such a local filesystem path, you can
    +use rclone as a utility for pure local file encryption, for example
    +to keep encrypted files on a removable USB drive.
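    +
    +A minimal sketch of such a configuration (hypothetical name and path;
    +the obscured password values are generated for you by `rclone config`):
    +
    +```
    +[usbcrypt]
    +type = crypt
    +remote = /mnt/usb/encrypted
    +password = <obscured password>
    +password2 = <obscured salt password>
    +```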
    +
    +**Note**: A string which does not contain a `:` will be treated by rclone
    +as a relative path in the local filesystem. For example, if you enter
    +the name `remote` without the trailing `:`, it will be treated as
    +a subdirectory of the current directory with name "remote".
    +
    +If a path `remote:path/to/dir` is specified, rclone stores encrypted
    +files in `path/to/dir` on the remote. With file name encryption, files
    +saved to `secret:subdir/subfile` are stored in the unencrypted path
    +`path/to/dir` but the `subdir/subfile` element is encrypted.
    +
    +The path you specify does not have to exist, rclone will create
    +it when needed.
    +
    +If you intend to use the wrapped remote both directly for keeping
    +unencrypted content, as well as through a crypt remote for encrypted
    +content, it is recommended to point the crypt remote to a separate
    +directory within the wrapped remote. If you use a bucket-based storage
    +system (e.g. Swift, S3, Google Cloud Storage, B2) it is generally
    +advisable to wrap the crypt remote around a specific bucket (`s3:bucket`).
    +If wrapping around the entire root of the storage (`s3:`), and use the
    +optional file name encryption, rclone will encrypt the bucket name.
    +
    +### Changing password
    +
    +Should the password, or the configuration file containing a lightly obscured
    +form of the password, be compromised, you need to re-encrypt your data with
    +a new password. Since rclone uses secret-key encryption, where the encryption
    +key is generated directly from the password kept on the client, it is not
    +possible to change the password/key of already encrypted content. Just changing
    +the password configured for an existing crypt remote means you will no longer
    +be able to decrypt any of the previously encrypted content. The only possibility
    +is to re-upload everything via a crypt remote configured with your new password.
    +
    +Depending on the size of your data, your bandwidth, storage quota etc, there are
    +different approaches you can take:
    +
    +- If you have everything in a different location, for example on your local system,
    +you could remove all of the prior encrypted files, change the password for your
    +configured crypt remote (or delete and re-create the crypt configuration),
    +and then re-upload everything from the alternative location.
    +- If you have enough space on the storage system you can create a new crypt
    +remote pointing to a separate directory on the same backend, and then use
    +rclone to copy everything from the original crypt remote to the new,
    +effectively decrypting everything on the fly using the old password and
    +re-encrypting using the new password. When done, delete the original crypt
    +remote directory and finally the rclone crypt configuration with the old password.
    +All data will be streamed from the storage system and back, so you will
    +get half the bandwidth and be charged twice if you have upload and download quota
    +on the storage system. A sketch of this second approach follows below.
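    +
    +The sketch assumes the old and new crypt remotes are called `oldsecret:`
    +and `newsecret:` and the old encrypted directory is `remote:oldpath`
    +(all names hypothetical):
    +
    +    rclone copy oldsecret: newsecret:   # decrypt with old key, re-encrypt with new
    +    # after verifying the copy, remove the old data and configuration
    +    rclone purge remote:oldpath
    +    rclone config delete oldsecret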
    +
    +**Note**: A security problem related to the random password generator
    +was fixed in rclone version 1.53.3 (released 2020-11-19). Passwords generated
    +by rclone config in version 1.49.0 (released 2019-08-26) to 1.53.2
    +(released 2020-10-26) are not considered secure and should be changed.
    +If you made up your own password, or used rclone version older than 1.49.0 or
    +newer than 1.53.2 to generate it, you are *not* affected by this issue.
    +See [issue #4783](https://github.com/rclone/rclone/issues/4783) for more
    +details, and a tool you can use to check if you are affected.
    +
    +### Example
    +
    +Create the following file structure using "standard" file name
    +encryption.
    +
    +```
    +plaintext/
    +├── file0.txt
    +├── file1.txt
    +└── subdir
    +    ├── file2.txt
    +    ├── file3.txt
    +    └── subsubdir
    +        └── file4.txt
    +```
    +
    +Copy these to the remote, and list them
    +
    +```
    +$ rclone -q copy plaintext secret:
    +$ rclone -q ls secret:
    +        7 file1.txt
    +        6 file0.txt
    +        8 subdir/file2.txt
    +       10 subdir/subsubdir/file4.txt
    +        9 subdir/file3.txt
    +```
    +
    +The crypt remote looks like
    +
    +```
    +$ rclone -q ls remote:path
    +       55 hagjclgavj2mbiqm6u6cnjjqcg
    +       54 v05749mltvv1tf4onltun46gls
    +       57 86vhrsv86mpbtd3a0akjuqslj8/dlj7fkq4kdq72emafg7a7s41uo
    +       58 86vhrsv86mpbtd3a0akjuqslj8/7uu829995du6o42n32otfhjqp4/b9pausrfansjth5ob3jkdqd4lc
    +       56 86vhrsv86mpbtd3a0akjuqslj8/8njh1sk437gttmep3p70g81aps
    +```
    +
    +The directory structure is preserved
    +
    +```
    +$ rclone -q ls secret:subdir
    +        8 file2.txt
    +        9 file3.txt
    +       10 subsubdir/file4.txt
    +```
    +
    +Without file name encryption `.bin` extensions are added to underlying
    +names. This prevents the cloud provider attempting to interpret file
    +content.
    +
    +```
    +$ rclone -q ls remote:path
    +       54 file0.txt.bin
    +       57 subdir/file3.txt.bin
    +       56 subdir/file2.txt.bin
    +       58 subdir/subsubdir/file4.txt.bin
    +       55 file1.txt.bin
    +```
    +
    +### File name encryption modes
    +
    +Off
    +
    +  * doesn't hide file names or directory structure
    +  * allows for longer file names (~246 characters)
    +  * can use sub paths and copy single files
    +
    +Standard
    +
    +  * file names encrypted
    +  * file names can't be as long (~143 characters)
    +  * can use sub paths and copy single files
    +  * directory structure visible
    +  * identical file names will have identical uploaded names
    +  * can use shortcuts to shorten the directory recursion
    +
    +Obfuscation
    +
    +This is a simple "rotate" of the filename, with each file having a rot
    +distance based on the filename. Rclone stores the distance at the
    +beginning of the filename. A file called "hello" may become "53.jgnnq".
    +
    +Obfuscation is not a strong encryption of filenames, but hinders
    +automated scanning tools picking up on filename patterns. It is an
    +intermediate between "off" and "standard" which allows for longer path
    +segment names.
    +
    +There is a possibility with some unicode based filenames that the
    +obfuscation is weak and may map lower case characters to upper case
    +equivalents.
    +
    +Obfuscation cannot be relied upon for strong protection.
    +
    +  * file names very lightly obfuscated
    +  * file names can be longer than standard encryption
    +  * can use sub paths and copy single files
    +  * directory structure visible
    +  * identical file names will have identical uploaded names
    +
    +Cloud storage systems have limits on file name length and
    +total path length which rclone is more likely to breach using
    +"Standard" file name encryption.  Where file names are less than 156
    +characters in length issues should not be encountered, irrespective of
    +cloud storage provider.
    +
    +An experimental advanced option `filename_encoding` is now provided to
    +address this problem to a certain degree.
    +For cloud storage systems with case sensitive file names (e.g. Google Drive),
    +`base64` can be used to reduce file name length. 
    +For cloud storage systems using UTF-16 to store file names internally
    +(e.g. OneDrive, Dropbox, Box), `base32768` can be used to drastically reduce
    +file name length. 
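    +
    +For example, a crypt remote wrapping a hypothetical OneDrive remote could
    +set the advanced option in its configuration like this sketch:
    +
    +```
    +[secret]
    +type = crypt
    +remote = onedrive:encrypted
    +filename_encoding = base32768
    +password = <obscured password>
    +```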
    +
    +An alternative, future rclone file name encryption mode may tolerate
    +backend provider path length limits.
    +
    +### Directory name encryption
    +
    +Crypt offers the option of encrypting dir names or leaving them intact.
    +There are two options:
    +
    +True
    +
    +Encrypts the whole file path including directory names
    +Example:
    +`1/12/123.txt` is encrypted to
    +`p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0`
    +
    +False
    +
    +Only encrypts file names, skips directory names
    +Example:
    +`1/12/123.txt` is encrypted to
    +`1/12/qgm4avr35m5loi1th53ato71v0`
    +
    +
    +### Modified time and hashes
    +
    +Crypt stores modification times using the underlying remote so support
    +depends on that.
    +
    +Hashes are not stored for crypt. However the data integrity is
    +protected by an extremely strong crypto authenticator.
    +
    +Use the `rclone cryptcheck` command to check the
    +integrity of an encrypted remote instead of `rclone check` which can't
    +check the checksums properly.
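    +
    +For example, to verify a local tree against the `secret:` remote used in
    +the examples above:
    +
    +    rclone cryptcheck /home/source secret: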
    +
    +
    +### Standard options
    +
    +Here are the Standard options specific to crypt (Encrypt/Decrypt a remote).
    +
    +#### --crypt-remote
     
     Remote to encrypt/decrypt.
    -Normally should contain a ':' and a path, eg "myremote:path/to/dir",
    +
    +Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
     "myremote:bucket" or maybe "myremote:" (not recommended).
    -Enter a string value. Press Enter for the default ("").
    -remote> remote:path
    +
    +Properties:
    +
    +- Config:      remote
    +- Env Var:     RCLONE_CRYPT_REMOTE
    +- Type:        string
    +- Required:    true
    +
    +#### --crypt-filename-encryption
    +
     How to encrypt the filenames.
    -Enter a string value. Press Enter for the default ("standard").
    -Choose a number from below, or type in your own value.
    -   / Encrypt the filenames.
    - 1 | See the docs for the details.
    -   \ "standard"
    - 2 / Very simple filename obfuscation.
    -   \ "obfuscate"
    -   / Don't encrypt the file names.
    - 3 | Adds a ".bin" extension only.
    -   \ "off"
    -filename_encryption>
    +
    +Properties:
    +
    +- Config:      filename_encryption
    +- Env Var:     RCLONE_CRYPT_FILENAME_ENCRYPTION
    +- Type:        string
    +- Default:     "standard"
    +- Examples:
    +    - "standard"
    +        - Encrypt the filenames.
    +        - See the docs for the details.
    +    - "obfuscate"
    +        - Very simple filename obfuscation.
    +    - "off"
    +        - Don't encrypt the file names.
    +        - Adds a ".bin", or "suffix" extension only.
    +
    +#### --crypt-directory-name-encryption
    +
     Option to either encrypt directory names or leave them intact.
     
     NB If filename_encryption is "off" then this option will do nothing.
    -Enter a boolean value (true or false). Press Enter for the default ("true").
    -Choose a number from below, or type in your own value
    - 1 / Encrypt directory names.
    -   \ "true"
    - 2 / Don't encrypt directory names, leave them intact.
    -   \ "false"
    -directory_name_encryption>
    +
    +Properties:
    +
    +- Config:      directory_name_encryption
    +- Env Var:     RCLONE_CRYPT_DIRECTORY_NAME_ENCRYPTION
    +- Type:        bool
    +- Default:     true
    +- Examples:
    +    - "true"
    +        - Encrypt directory names.
    +    - "false"
    +        - Don't encrypt directory names, leave them intact.
    +
    +#### --crypt-password
    +
     Password or pass phrase for encryption.
    -y) Yes type in my own password
    -g) Generate random password
    -y/g> y
    -Enter the password:
    -password:
    -Confirm the password:
    -password:
    -Password or pass phrase for salt. Optional but recommended.
    +
    +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
    +
    +Properties:
    +
    +- Config:      password
    +- Env Var:     RCLONE_CRYPT_PASSWORD
    +- Type:        string
    +- Required:    true
    +
    +#### --crypt-password2
    +
    +Password or pass phrase for salt.
    +
    +Optional but recommended.
     Should be different to the previous password.
    -y) Yes type in my own password
    -g) Generate random password
    -n) No leave this optional password blank (default)
    -y/g/n> g
    -Password strength in bits.
    -64 is just about memorable
    -128 is secure
    -1024 is the maximum
    -Bits> 128
    -Your password is: JAsJvRcgR-_veXNfy_sGmQ
    -Use this password? Please note that an obscured version of this
    -password (and not the password itself) will be stored under your
    -configuration file, so keep this generated password in a safe place.
    -y) Yes (default)
    -n) No
    -y/n>
    -Edit advanced config? (y/n)
    -y) Yes
    -n) No (default)
    -y/n>
    -Remote config
    ---------------------
    -[secret]
    -type = crypt
    -remote = remote:path
    -password = *** ENCRYPTED ***
    -password2 = *** ENCRYPTED ***
    ---------------------
    -y) Yes this is OK (default)
    -e) Edit this remote
    -d) Delete this remote
    -y/e/d>
    -

    Current remotes:
     
    -Name                 Type
    -====                 ====
    -remote_to_press      sometype
    +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
     
    -e) Edit existing remote
    -$ rclone config
    -n) New remote
    -d) Delete remote
    -r) Rename remote
    -c) Copy remote
    -s) Set configuration password
    -q) Quit config
    -e/n/d/r/c/s/q> n
    -name> compress
    -...
    - 8 / Compress a remote
    -   \ "compress"
    -...
    -Storage> compress
    -** See help for compress backend at: https://rclone.org/compress/ **
    +Properties:
    +
    +- Config:      password2
    +- Env Var:     RCLONE_CRYPT_PASSWORD2
    +- Type:        string
    +- Required:    false
    +
    +### Advanced options
    +
    +Here are the Advanced options specific to crypt (Encrypt/Decrypt a remote).
    +
    +#### --crypt-server-side-across-configs
    +
    +Deprecated: use --server-side-across-configs instead.
    +
    +Allow server-side operations (e.g. copy) to work across different crypt configs.
    +
    +Normally this option is not what you want, but if you have two crypts
    +pointing to the same backend you can use it.
    +
    +This can be used, for example, to change file name encryption type
    +without re-uploading all the data. Just make two crypt backends
    +pointing to two different directories with the single changed
    +parameter and use rclone move to move the files between the crypt
    +remotes.
    +
    +Properties:
    +
    +- Config:      server_side_across_configs
    +- Env Var:     RCLONE_CRYPT_SERVER_SIDE_ACROSS_CONFIGS
    +- Type:        bool
    +- Default:     false
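+
+As a sketch of the approach described above (remote names, paths and
+the changed parameter are illustrative; both crypts must be set up
+with the same passwords so the data encryption is identical):
+
+    [cryptold]
+    type = crypt
+    remote = remote:cryptold
+    filename_encryption = standard
+
+    [cryptnew]
+    type = crypt
+    remote = remote:cryptnew
+    filename_encryption = obfuscate
+
+    rclone move --server-side-across-configs cryptold: cryptnew: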
    +
    +#### --crypt-show-mapping
    +
    +For all files listed show how the names encrypt.
    +
    +If this flag is set then for each file that the remote is asked to
    +list, it will log (at level INFO) a line stating the decrypted file
    +name and the encrypted file name.
    +
    +This is so you can work out which encrypted names are which decrypted
    +names just in case you need to do something with the encrypted file
    +names, or for debugging purposes.
    +
    +Properties:
    +
    +- Config:      show_mapping
    +- Env Var:     RCLONE_CRYPT_SHOW_MAPPING
    +- Type:        bool
    +- Default:     false
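+
+For example (the remote name is illustrative):
+
+    rclone ls -v secret: --crypt-show-mapping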
    +
    +#### --crypt-no-data-encryption
    +
    +Option to either encrypt file data or leave it unencrypted.
    +
    +Properties:
    +
    +- Config:      no_data_encryption
    +- Env Var:     RCLONE_CRYPT_NO_DATA_ENCRYPTION
    +- Type:        bool
    +- Default:     false
    +- Examples:
    +    - "true"
    +        - Don't encrypt file data, leave it unencrypted.
    +    - "false"
    +        - Encrypt file data.
    +
    +#### --crypt-pass-bad-blocks
    +
    +If set this will pass bad blocks through as all 0.
    +
    +This should not be set in normal operation, it should only be set if
    +trying to recover an encrypted file with errors and it is desired to
    +recover as much of the file as possible.
    +
    +Properties:
    +
    +- Config:      pass_bad_blocks
    +- Env Var:     RCLONE_CRYPT_PASS_BAD_BLOCKS
    +- Type:        bool
    +- Default:     false
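+
+For example, to recover what remains of a damaged file (paths are
+illustrative):
+
+    rclone copy secret:damaged-file /tmp/recovered --crypt-pass-bad-blocks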
    +
    +#### --crypt-filename-encoding
    +
+How to encode the encrypted filename as a text string.
+
+This option can help shorten the encrypted filename. The most
+suitable choice depends on how your remote counts filename length
+and whether it is case sensitive.
    +
    +Properties:
    +
    +- Config:      filename_encoding
    +- Env Var:     RCLONE_CRYPT_FILENAME_ENCODING
    +- Type:        string
    +- Default:     "base32"
    +- Examples:
    +    - "base32"
    +        - Encode using base32. Suitable for all remote.
    +    - "base64"
    +        - Encode using base64. Suitable for case sensitive remote.
    +    - "base32768"
    +        - Encode using base32768. Suitable if your remote counts UTF-16 or
    +        - Unicode codepoint instead of UTF-8 byte length. (Eg. Onedrive, Dropbox)
    +
    +#### --crypt-suffix
    +
    +If this is set it will override the default suffix of ".bin".
    +
    +Setting suffix to "none" will result in an empty suffix. This may be useful 
    +when the path length is critical.
    +
    +Properties:
    +
    +- Config:      suffix
    +- Env Var:     RCLONE_CRYPT_SUFFIX
    +- Type:        string
    +- Default:     ".bin"
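+
+For example, with filename_encryption set to "off" you could drop the
+extension entirely (flag value shown for illustration):
+
+    rclone copy /path/to/files secret: --crypt-suffix none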
    +
    +### Metadata
    +
    +Any metadata supported by the underlying remote is read and written.
    +
    +See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
    +
    +## Backend commands
    +
    +Here are the commands specific to the crypt backend.
    +
    +Run them with
    +
    +    rclone backend COMMAND remote:
    +
    +The help below will explain what arguments each command takes.
    +
    +See the [backend](https://rclone.org/commands/rclone_backend/) command for more
    +info on how to pass options and arguments.
    +
    +These can be run on a running backend using the rc command
    +[backend/command](https://rclone.org/rc/#backend-command).
    +
    +### encode
    +
    +Encode the given filename(s)
    +
    +    rclone backend encode remote: [options] [<arguments>+]
    +
    +This encodes the filenames given as arguments returning a list of
    +strings of the encoded results.
    +
    +Usage Example:
    +
    +    rclone backend encode crypt: file1 [file2...]
    +    rclone rc backend/command command=encode fs=crypt: file1 [file2...]
    +
    +
    +### decode
    +
    +Decode the given filename(s)
    +
    +    rclone backend decode remote: [options] [<arguments>+]
    +
    +This decodes the filenames given as arguments returning a list of
    +strings of the decoded results. It will return an error if any of the
    +inputs are invalid.
    +
    +Usage Example:
    +
    +    rclone backend decode crypt: encryptedfile1 [encryptedfile2...]
    +    rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile2...]
    +
    +
    +
    +
    +## Backing up an encrypted remote
    +
    +If you wish to backup an encrypted remote, it is recommended that you use
    +`rclone sync` on the encrypted files, and make sure the passwords are
    +the same in the new encrypted remote.
    +
    +This will have the following advantages
    +
    +  * `rclone sync` will check the checksums while copying
    +  * you can use `rclone check` between the encrypted remotes
    +  * you don't decrypt and encrypt unnecessarily
    +
    +For example, let's say you have your original remote at `remote:` with
    +the encrypted version at `eremote:` with path `remote:crypt`.  You
    +would then set up the new remote `remote2:` and then the encrypted
    +version `eremote2:` with path `remote2:crypt` using the same passwords
    +as `eremote:`.
    +
    +To sync the two remotes you would do
    +
    +    rclone sync --interactive remote:crypt remote2:crypt
    +
    +And to check the integrity you would do
    +
    +    rclone check remote:crypt remote2:crypt
    +
    +## File formats
    +
    +### File encryption
    +
    +Files are encrypted 1:1 source file to destination object.  The file
    +has a header and is divided into chunks.
    +
    +#### Header
    +
    +  * 8 bytes magic string `RCLONE\x00\x00`
    +  * 24 bytes Nonce (IV)
    +
+The initial nonce is generated from the operating system's
+cryptographically strong random number generator.  The nonce is incremented for each
    +chunk read making sure each nonce is unique for each block written.
    +The chance of a nonce being re-used is minuscule.  If you wrote an
    +exabyte of data (10¹⁸ bytes) you would have a probability of
    +approximately 2×10⁻³² of re-using a nonce.
    +
    +#### Chunk
    +
    +Each chunk will contain 64 KiB of data, except for the last one which
    +may have less data. The data chunk is in standard NaCl SecretBox
    +format. SecretBox uses XSalsa20 and Poly1305 to encrypt and
    +authenticate messages.
    +
    +Each chunk contains:
    +
    +  * 16 Bytes of Poly1305 authenticator
    +  * 1 - 65536 bytes XSalsa20 encrypted data
    +
    +64k chunk size was chosen as the best performing chunk size (the
    +authenticator takes too much time below this and the performance drops
    +off due to cache effects above this).  Note that these chunks are
    +buffered in memory so they can't be too big.
    +
    +This uses a 32 byte (256 bit key) key derived from the user password.
    +
    +#### Examples
    +
    +1 byte file will encrypt to
    +
    +  * 32 bytes header
    +  * 17 bytes data chunk
    +
    +49 bytes total
    +
    +1 MiB (1048576 bytes) file will encrypt to
    +
    +  * 32 bytes header
    +  * 16 chunks of 65568 bytes
    +
    +1049120 bytes total (a 0.05% overhead). This is the overhead for big
    +files.
    +
    +### Name encryption
    +
    +File names are encrypted segment by segment - the path is broken up
    +into `/` separated strings and these are encrypted individually.
    +
    +File segments are padded using PKCS#7 to a multiple of 16 bytes
    +before encryption.
    +
    +They are then encrypted with EME using AES with 256 bit key. EME
    +(ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003
    +paper "A Parallelizable Enciphering Mode" by Halevi and Rogaway.
    +
    +This makes for deterministic encryption which is what we want - the
    +same filename must encrypt to the same thing otherwise we can't find
    +it on the cloud storage system.
    +
    +This means that
    +
    +  * filenames with the same name will encrypt the same
    +  * filenames which start the same won't have a common prefix
    +
    +This uses a 32 byte key (256 bits) and a 16 byte (128 bits) IV both of
    +which are derived from the user password.
    +
    +After encryption they are written out using a modified version of
    +standard `base32` encoding as described in RFC4648.  The standard
    +encoding is modified in two ways:
    +
    +  * it becomes lower case (no-one likes upper case filenames!)
    +  * we strip the padding character `=`
    +
    +`base32` is used rather than the more efficient `base64` so rclone can be
    +used on case insensitive remotes (e.g. Windows, Amazon Drive).
    +
    +### Key derivation
    +
    +Rclone uses `scrypt` with parameters `N=16384, r=8, p=1` with an
    +optional user supplied salt (password2) to derive the 32+32+16 = 80
    +bytes of key material required.  If the user doesn't supply a salt
    +then rclone uses an internal one.
    +
    +`scrypt` makes it impractical to mount a dictionary attack on rclone
    +encrypted data.  For full protection against this you should always use
    +a salt.
    +
    +## SEE ALSO
    +
    +* [rclone cryptdecode](https://rclone.org/commands/rclone_cryptdecode/)    - Show forward/reverse mapping of encrypted filenames
    +
    +#  Compress
    +
    +## Warning
    +
    +This remote is currently **experimental**. Things may break and data may be lost. Anything you do with this remote is
    +at your own risk. Please understand the risks associated with using experimental code and don't use this remote in
    +critical applications.
    +
    +The `Compress` remote adds compression to another remote. It is best used with remotes containing
    +many large compressible files.
    +
    +## Configuration
    +
    +To use this remote, all you need to do is specify another remote and a compression mode to use:
    +
    +

+    Current remotes:
+
+    Name                 Type
+    ====                 ====
+    remote_to_press      sometype
+
+    e) Edit existing remote
+    $ rclone config
+    n) New remote
+    d) Delete remote
+    r) Rename remote
+    c) Copy remote
+    s) Set configuration password
+    q) Quit config
+    e/n/d/r/c/s/q> n
+    name> compress
+    ...
+     8 / Compress a remote
+       \ "compress"
+    ...
+    Storage> compress
+    ** See help for compress backend at: https://rclone.org/compress/ **
+    Remote to compress.
+    Enter a string value. Press Enter for the default ("").
+    remote> remote_to_press:subdir
+    Compression mode.
+    Enter a string value. Press Enter for the default ("gzip").
+    Choose a number from below, or type in your own value
+     1 / Gzip compression balanced for speed and compression strength.
+       \ "gzip"
+    compression_mode> gzip
+    Edit advanced config? (y/n)
+    y) Yes
+    n) No (default)
+    y/n> n
+    Remote config
+    --------------------
+    [compress]
+    type = compress
+    remote = remote_to_press:subdir
+    compression_mode = gzip
+    --------------------
+    y) Yes this is OK (default)
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+
    +### Compression Modes
    +
    +Currently only gzip compression is supported. It provides a decent balance between speed and size and is well
    +supported by other applications. Compression strength can further be configured via an advanced setting where 0 is no
    +compression and 9 is strongest compression.
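+
+For example, assuming a compress remote called `compressed:`, you
+could ask for the strongest compression with (value illustrative):
+
+    rclone copy /path/to/files compressed: --compress-level 9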
    +
    +### File types
    +
    +If you open a remote wrapped by compress, you will see that there are many files with an extension corresponding to
    +the compression algorithm you chose. These files are standard files that can be opened by various archive programs, 
    +but they have some hidden metadata that allows them to be used by rclone.
+While you may download and decompress these files at will, do **not** manually delete or rename files. Files without
+correct metadata will not be recognized by rclone.
    +
    +### File names
    +
+The compressed files will be named `*.###########.gz` where `*` is the base file and the `#` part is the base64 encoded
+size of the uncompressed file. The file names should not be changed by anything other than the rclone compression backend.
    +
    +
    +### Standard options
    +
    +Here are the Standard options specific to compress (Compress a remote).
    +
    +#### --compress-remote
     
     Remote to compress.
    -Enter a string value. Press Enter for the default ("").
    -remote> remote_to_press:subdir 
    +
    +Properties:
    +
    +- Config:      remote
    +- Env Var:     RCLONE_COMPRESS_REMOTE
    +- Type:        string
    +- Required:    true
    +
    +#### --compress-mode
    +
     Compression mode.
    -Enter a string value. Press Enter for the default ("gzip").
    -Choose a number from below, or type in your own value
    - 1 / Gzip compression balanced for speed and compression strength.
    -   \ "gzip"
    -compression_mode> gzip
    -Edit advanced config? (y/n)
    -y) Yes
    -n) No (default)
    -y/n> n
    -Remote config
    ---------------------
    -[compress]
    -type = compress
    -remote = remote_to_press:subdir
    -compression_mode = gzip
    ---------------------
    -y) Yes this is OK (default)
    -e) Edit this remote
    -d) Delete this remote
    -y/e/d> y
    -

    +
    +Properties:
    +
    +- Config:      mode
    +- Env Var:     RCLONE_COMPRESS_MODE
    +- Type:        string
    +- Default:     "gzip"
    +- Examples:
    +    - "gzip"
    +        - Standard gzip compression with fastest parameters.
    +
    +### Advanced options
    +
    +Here are the Advanced options specific to compress (Compress a remote).
    +
    +#### --compress-level
    +
    +GZIP compression level (-2 to 9).
    +
    +Generally -1 (default, equivalent to 5) is recommended.
    +Levels 1 to 9 increase compression at the cost of speed. Going past 6 
    +generally offers very little return.
    +
    +Level -2 uses Huffman encoding only. Only use if you know what you
    +are doing.
    +Level 0 turns off compression.
    +
    +Properties:
    +
    +- Config:      level
    +- Env Var:     RCLONE_COMPRESS_LEVEL
    +- Type:        int
    +- Default:     -1
    +
    +#### --compress-ram-cache-limit
    +
+Some remotes don't allow the upload of files with unknown size.
+In this case the compressed file will need to be cached to determine
+its size.
    +
    +Files smaller than this limit will be cached in RAM, files larger than 
    +this limit will be cached on disk.
    +
    +Properties:
    +
    +- Config:      ram_cache_limit
    +- Env Var:     RCLONE_COMPRESS_RAM_CACHE_LIMIT
    +- Type:        SizeSuffix
    +- Default:     20Mi
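+
+For example, assuming a compress remote called `compressed:`, you
+could raise the limit with (value illustrative):
+
+    rclone copy /path/to/files compressed: --compress-ram-cache-limit 50Mi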
    +
    +### Metadata
    +
    +Any metadata supported by the underlying remote is read and written.
    +
    +See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
    +
    +
    +
    +#  Combine
    +
    +The `combine` backend joins remotes together into a single directory
    +tree.
    +
    +For example you might have a remote for images on one provider:
+
+    $ rclone tree s3:imagesbucket
+    /
+    ├── image1.jpg
+    └── image2.jpg
+
    +And a remote for files on another:
+
+    $ rclone tree drive:important/files
+    /
+    ├── file1.txt
+    └── file2.txt
+
    +The `combine` backend can join these together into a synthetic
    +directory structure like this:
+
+    $ rclone tree combined:
+    /
+    ├── files
+    │   ├── file1.txt
+    │   └── file2.txt
+    └── images
+        ├── image1.jpg
+        └── image2.jpg
+
    +You'd do this by specifying an `upstreams` parameter in the config
    +like this
    +
    +    upstreams = images=s3:imagesbucket files=drive:important/files
    +
+During the initial setup with `rclone config` you will specify the
+upstreams remotes as a space separated list. The upstream remotes can
+be either local paths or other remotes.
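+
+For example, an upstreams value mixing a local path and a remote
+(paths are illustrative):
+
+    upstreams = local=/path/to/dir backup=s3:bucket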
    +
    +## Configuration
    +
    +Here is an example of how to make a combine called `remote` for the
    +example above. First run:
    +
    +     rclone config
    +
    +This will guide you through an interactive setup process:
    +
+
+    No remotes found, make a new one?
+    n) New remote
+    s) Set configuration password
+    q) Quit config
+    n/s/q> n
+    name> remote
+    Option Storage.
+    Type of storage to configure.
+    Choose a number from below, or type in your own value.
+    ...
+    XX / Combine several remotes into one
+       \ (combine)
+    ...
+    Storage> combine
+    Option upstreams.
+    Upstreams for combining
+    These should be in the form
+        dir=remote:path dir2=remote2:path
+    Where before the = is specified the root directory and after is the remote to
+    put there.
+    Embedded spaces can be added using quotes
+        "dir=remote:path with space" "dir2=remote2:path with space"
+    Enter a fs.SpaceSepList value.
+    upstreams> images=s3:imagesbucket files=drive:important/files
+    --------------------
+    [remote]
+    type = combine
+    upstreams = images=s3:imagesbucket files=drive:important/files
+    --------------------
+    y) Yes this is OK (default)
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+
    +### Configuring for Google Drive Shared Drives
    +
    +Rclone has a convenience feature for making a combine backend for all
    +the shared drives you have access to.
    +
    +Assuming your main (non shared drive) Google drive remote is called
    +`drive:` you would run
    +
    +    rclone backend -o config drives drive:
    +
    +This would produce something like this:
    +
    +    [My Drive]
    +    type = alias
    +    remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
    +
    +    [Test Drive]
    +    type = alias
    +    remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
    +
    +    [AllDrives]
    +    type = combine
    +    upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
    +
    +If you then add that config to your config file (find it with `rclone
    +config file`) then you can access all the shared drives in one place
    +with the `AllDrives:` remote.
    +
    +See [the Google Drive docs](https://rclone.org/drive/#drives) for full info.
    +
    +
    +### Standard options
    +
    +Here are the Standard options specific to combine (Combine several remotes into one).
    +
    +#### --combine-upstreams
    +
     Upstreams for combining
    +
     These should be in the form
    +
         dir=remote:path dir2=remote2:path
    +
     Where before the = is specified the root directory and after is the remote to
     put there.
    +
     Embedded spaces can be added using quotes
    +
         "dir=remote:path with space" "dir2=remote2:path with space"
    -Enter a fs.SpaceSepList value.
    -upstreams> images=s3:imagesbucket files=drive:important/files
    ---------------------
    -[remote]
    -type = combine
    -upstreams = images=s3:imagesbucket files=drive:important/files
    ---------------------
    -y) Yes this is OK (default)
    -e) Edit this remote
    -d) Delete this remote
    -y/e/d> y
    -

    Dropbox

Paths are specified as remote:path

Dropbox paths may be as deep as required, e.g. remote:directory/subdirectory.

Configuration

The initial setup for dropbox involves getting a token from Dropbox which you need to do in your browser. rclone config walks you through it.

Here is an example of how to make a remote called remote. First run:

    rclone config

This will guide you through an interactive setup process:

    n) New remote
    d) Delete remote
    q) Quit config
    e/n/d/q> n
    name> remote
    Type of storage to configure.
    Choose a number from below, or type in your own value
    [snip]
    XX / Dropbox
       \ "dropbox"
    [snip]
    Storage> dropbox
    Dropbox App Key - leave blank normally.
    app_key>
    Dropbox App Secret - leave blank normally.
    app_secret>
    Remote config
    Please visit:
    https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code
    Enter the code: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX
    --------------------
    [remote]
    app_key =
    app_secret =
    token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y

See the remote setup docs for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from Dropbox. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.

You can then use it like this,

List directories in top level of your dropbox

    rclone lsd remote:

List all the files in your dropbox

    rclone ls remote:

To copy a local directory to a dropbox directory called backup

    rclone copy /home/source remote:backup

    Dropbox for business

Rclone supports Dropbox for business and Team Folders.

When using Dropbox for business remote: and remote:path/to/file will refer to your personal folder.

If you wish to see Team Folders you must use a leading / in the path, so rclone lsd remote:/ will refer to the root and show you all Team Folders and your User Folder.

You can then use team folders like this remote:/TeamFolder and remote:/TeamFolder/path/to/file.

A leading / for a Dropbox personal account will do nothing, but it will take an extra HTTP transaction so it should be avoided.

    Modified time and Hashes

Dropbox supports modified times, but the only way to set a modification time is to re-upload the file.

This means that if you uploaded your data with an older version of rclone which didn't support the v2 API and modified times, rclone will decide to upload all your old data to fix the modification times. If you don't want this to happen use --size-only or --checksum flag to stop it.

Dropbox supports its own hash type which is checked for all transfers.

    Restricted filename characters

  Character   Value   Replacement
  ---------   -----   -----------
  NUL         0x00    ␀
  /           0x2F    ／
  DEL         0x7F    ␡
  \           0x5C    ＼

File names can also not end with the following characters. These only get replaced if they are the last character in the name:

  Character   Value   Replacement
  ---------   -----   -----------
  SP          0x20    ␠

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    Batch mode uploads

Using batch mode uploads is very important for performance when using the Dropbox API. See the dropbox performance guide for more info.

There are 3 modes rclone can use for uploads.

--dropbox-batch-mode off

In this mode rclone will not use upload batching. This was the default before rclone v1.55. It has the disadvantage that it is very likely to encounter too_many_requests errors like this

    NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 15 seconds.

When rclone receives these it has to wait for 15s or sometimes 300s before continuing which really slows down transfers.

This will happen especially if --transfers is large, so this mode isn't recommended except for compatibility or investigating problems.

--dropbox-batch-mode sync

In this mode rclone will batch up uploads to the size specified by --dropbox-batch-size and commit them together.

Using this mode means you can use a much higher --transfers parameter (32 or 64 works fine) without receiving too_many_requests errors.

This mode ensures full data integrity.

Note that there may be a pause when quitting rclone while rclone finishes up the last batch using this mode.

--dropbox-batch-mode async

In this mode rclone will batch up uploads to the size specified by --dropbox-batch-size and commit them together.

However it will not wait for the status of the batch to be returned to the caller. This means rclone can use a much bigger batch size (much bigger than --transfers), at the cost of not being able to check the status of the upload.

This provides the maximum possible upload speed especially with lots of small files, however rclone can't check the file got uploaded properly using this mode.

If you are using this mode then using "rclone check" after the transfer completes is recommended. Or you could do an initial transfer with --dropbox-batch-mode async then do a final transfer with --dropbox-batch-mode sync (the default).

Note that there may be a pause when quitting rclone while rclone finishes up the last batch using this mode.
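
For example, a transfer of many small files might use the batching
approach described above (paths and values are illustrative):

    rclone copy /path/to/files remote:backup --transfers 32 --dropbox-batch-mode async
    rclone check /path/to/files remote:backup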

    Standard options

Here are the Standard options specific to dropbox (Dropbox).

--dropbox-client-id

OAuth Client Id.

Leave blank normally.

Properties:

- Config:      client_id
- Env Var:     RCLONE_DROPBOX_CLIENT_ID
- Type:        string
- Required:    false

--dropbox-client-secret

OAuth Client Secret.

Leave blank normally.

Properties:

- Config:      client_secret
- Env Var:     RCLONE_DROPBOX_CLIENT_SECRET
- Type:        string
- Required:    false

    Advanced options

Here are the Advanced options specific to dropbox (Dropbox).

--dropbox-token

OAuth Access Token as a JSON blob.

Properties:

- Config:      token
- Env Var:     RCLONE_DROPBOX_TOKEN
- Type:        string
- Required:    false

--dropbox-auth-url

Auth server URL.

Leave blank to use the provider defaults.

Properties:

- Config:      auth_url
- Env Var:     RCLONE_DROPBOX_AUTH_URL
- Type:        string
- Required:    false

--dropbox-token-url

Token server url.

Leave blank to use the provider defaults.

Properties:

- Config:      token_url
- Env Var:     RCLONE_DROPBOX_TOKEN_URL
- Type:        string
- Required:    false

    --dropbox-chunk-size

Upload chunk size (< 150Mi).

Any files larger than this will be uploaded in chunks of this size.

Note that chunks are buffered in memory (one at a time) so rclone can deal with retries. Setting this larger will increase the speed slightly (at most 10% for 128 MiB in tests) at the cost of using more memory. It can be set smaller if you are tight on memory.

Properties:

- Config:      chunk_size
- Env Var:     RCLONE_DROPBOX_CHUNK_SIZE
- Type:        SizeSuffix
- Default:     48Mi
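
For example, to upload large files in bigger chunks on a machine with
plenty of memory (the value is illustrative and must be < 150Mi):

    rclone copy /path/to/bigfiles remote:backup --dropbox-chunk-size 128M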

    --dropbox-impersonate

Impersonate this user when using a business account.

Note that if you want to use impersonate, you should make sure this flag is set when running "rclone config" as this will cause rclone to request the "members.read" scope which it won't normally. This is needed to lookup a members email address into the internal ID that dropbox uses in the API.

Using the "members.read" scope will require a Dropbox Team Admin to approve during the OAuth flow.

You will have to use your own App (setting your own client_id and client_secret) to use this option as currently rclone's default set of permissions doesn't include "members.read". This can be added once v1.55 or later is in use everywhere.

Properties:

- Config:      impersonate
- Env Var:     RCLONE_DROPBOX_IMPERSONATE
- Type:        string
- Required:    false
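
For example (the email address is illustrative):

    rclone lsd remote: --dropbox-impersonate user@example.com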

    --dropbox-shared-files

Instructs rclone to work on individual shared files.

In this mode rclone's features are extremely limited - only list (ls, lsl, etc.) operations and read operations (e.g. downloading) are supported in this mode. All other operations will be disabled.

Properties:

- Config:      shared_files
- Env Var:     RCLONE_DROPBOX_SHARED_FILES
- Type:        bool
- Default:     false

--dropbox-shared-folders

Instructs rclone to work on shared folders.

When this flag is used with no path only the List operation is supported and all available shared folders will be listed. If you specify a path the first part will be interpreted as the name of shared folder. Rclone will then try to mount this shared folder to the root namespace. On success rclone proceeds normally. The shared folder is now pretty much a normal folder and all normal operations are supported.

Note that we don't unmount the shared folder afterwards so the --dropbox-shared-folders can be omitted after the first use of a particular shared folder.

Properties:

- Config:      shared_folders
- Env Var:     RCLONE_DROPBOX_SHARED_FOLDERS
- Type:        bool
- Default:     false
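
For example, to list the available shared folders and then work within
one of them (the folder name is illustrative):

    rclone lsd remote: --dropbox-shared-folders
    rclone ls remote:SharedFolder --dropbox-shared-folders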

    --dropbox-batch-mode

Upload file batching sync|async|off.

This sets the batch mode used by rclone.

For full info see the main docs

This has 3 possible values

- off - no batching
- sync - batch uploads and check completion (default)
- async - batch upload and don't check completion

Rclone will close any outstanding batches when it exits which may make a delay on quit.

Properties:

- Config:      batch_mode
- Env Var:     RCLONE_DROPBOX_BATCH_MODE
- Type:        string
- Default:     "sync"

--dropbox-batch-size

Max number of files in upload batch.

This sets the batch size of files to upload. It has to be less than 1000.

By default this is 0 which means rclone will calculate the batch size depending on the setting of batch_mode.

- batch_mode: async - default batch_size is 100
- batch_mode: sync - default batch_size is the same as --transfers
- batch_mode: off - not in use

Rclone will close any outstanding batches when it exits which may make a delay on quit.

Setting this is a great idea if you are uploading lots of small files as it will make them a lot quicker. You can use --transfers 32 to maximise throughput.

Properties:

- Config:      batch_size
- Env Var:     RCLONE_DROPBOX_BATCH_SIZE
- Type:        int
- Default:     0

--dropbox-batch-timeout

Max time to allow an idle upload batch before uploading.

If an upload batch is idle for more than this long then it will be uploaded.

The default for this is 0 which means rclone will choose a sensible default based on the batch_mode in use.

- batch_mode: async - default batch_timeout is 10s
- batch_mode: sync - default batch_timeout is 500ms
- batch_mode: off - not in use

Properties:

- Config:      batch_timeout
- Env Var:     RCLONE_DROPBOX_BATCH_TIMEOUT
- Type:        Duration
- Default:     0s

--dropbox-batch-commit-timeout

Max time to wait for a batch to finish committing

Properties:

- Config:      batch_commit_timeout
- Env Var:     RCLONE_DROPBOX_BATCH_COMMIT_TIMEOUT
- Type:        Duration
- Default:     10m0s

    --dropbox-pacer-min-sleep

    -

    Minimum time to sleep between API calls.

    -

    Properties:

    - -

    --dropbox-encoding

    -

    The encoding for the backend.

    -

    See the encoding section in the overview for more info.

    -

    Properties:

    - -

    Limitations

    -

    Note that Dropbox is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

    -

    There are some file names such as thumbs.db which Dropbox can't store. There is a full list of them in the "Ignored Files" section of this document. Rclone will issue an error message File name disallowed - not uploading if it attempts to upload one of those file names, but the sync won't fail.

    -

    Some errors may occur if you try to sync copyright-protected files because Dropbox has its own copyright detector that prevents this sort of file being downloaded. This will return the error ERROR : /path/to/your/file: Failed to copy: failed to open source object: path/restricted_content/.

    -

    If you have more than 10,000 files in a directory then rclone purge dropbox:dir will return the error Failed to purge: There are too many files involved in this operation. As a work-around do an rclone delete dropbox:dir followed by an rclone rmdir dropbox:dir.

    -

    When using rclone link you'll need to set --expire if using a non-personal account otherwise the visibility may not be correct. (Note that --expire isn't supported on personal accounts). See the forum discussion and the dropbox SDK issue.

    -

    Get your own Dropbox App ID

    -

    When you use rclone with Dropbox in its default configuration you are using rclone's App ID. This is shared between all the rclone users.

    -

    Here is how to create your own Dropbox App ID for rclone:

    -
      -
    1. Log into the Dropbox App console with your Dropbox Account (It need not to be the same account as the Dropbox you want to access)

    2. -
    3. Choose an API => Usually this should be Dropbox API

    4. -
    5. Choose the type of access you want to use => Full Dropbox or App Folder

    6. -
    7. Name your App. The app name is global, so you can't use rclone for example

    8. -
    9. Click the button Create App

    10. -
    11. Switch to the Permissions tab. Enable at least the following permissions: account_info.read, files.metadata.write, files.content.write, files.content.read, sharing.write. The files.metadata.read and sharing.read checkboxes will be marked too. Click Submit

    12. -
    13. Switch to the Settings tab. Fill OAuth2 - Redirect URIs as http://localhost:53682/

    14. -
    15. Find the App key and App secret values on the Settings tab. Use these values in rclone config to add a new remote or edit an existing remote. The App key setting corresponds to client_id in rclone config, the App secret corresponds to client_secret

+
+Properties:
+
+- Config:      upstreams
+- Env Var:     RCLONE_COMBINE_UPSTREAMS
+- Type:        SpaceSepList
+- Default:
+
+### Metadata
+
+Any metadata supported by the underlying remote is read and written.
+
+See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
+
+
+
+#  Dropbox
+
+Paths are specified as `remote:path`
+
+Dropbox paths may be as deep as required, e.g.
+`remote:directory/subdirectory`.
+
+## Configuration
+
+The initial setup for dropbox involves getting a token from Dropbox
+which you need to do in your browser.  `rclone config` walks you
+through it.
+
+Here is an example of how to make a remote called `remote`.  First run:
+
+    rclone config
+
+This will guide you through an interactive setup process:
+
+    n) New remote
+    d) Delete remote
+    q) Quit config
+    e/n/d/q> n
+    name> remote
+    Type of storage to configure.
+    Choose a number from below, or type in your own value
+    [snip]
+    XX / Dropbox
+       \ "dropbox"
+    [snip]
+    Storage> dropbox
+    Dropbox App Key - leave blank normally.
+    app_key>
+    Dropbox App Secret - leave blank normally.
+    app_secret>
+    Remote config
+    Please visit:
+    https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code
+    Enter the code: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX
+    --------------------
+    [remote]
+    app_key =
+    app_secret =
+    token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+    --------------------
+    y) Yes this is OK
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
-
-Enterprise File Fabric
-
-This backend supports Storage Made Easy's Enterprise File Fabric™ which provides a software solution to integrate and unify File and Object Storage accessible through a global file system.
-
-Configuration
-
-The initial setup for the Enterprise File Fabric backend involves getting a token from the Enterprise File Fabric which you need to do in your browser. rclone config walks you through it.
-
-Here is an example of how to make a remote called remote. First run:
-
-    rclone config
-
-This will guide you through an interactive setup process:
-
-No remotes found, make a new one?
      -n) New remote
      -s) Set configuration password
      -q) Quit config
      -n/s/q> n
      -name> remote
      -Type of storage to configure.
      -Enter a string value. Press Enter for the default ("").
      -Choose a number from below, or type in your own value
      -[snip]
      -XX / Enterprise File Fabric
      -   \ "filefabric"
      -[snip]
      -Storage> filefabric
      -** See help for filefabric backend at: https://rclone.org/filefabric/ **
      +
      
      +See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
      +machine with no Internet browser available.
      +
      +Note that rclone runs a webserver on your local machine to collect the
      +token as returned from Dropbox. This only
      +runs from the moment it opens your browser to the moment you get back
      +the verification code.  This is on `http://127.0.0.1:53682/` and it
      +may require you to unblock it temporarily if you are running a host
      +firewall, or use manual mode.
      +
      +You can then use it like this,
      +
      +List directories in top level of your dropbox
      +
      +    rclone lsd remote:
      +
      +List all the files in your dropbox
      +
      +    rclone ls remote:
      +
      +To copy a local directory to a dropbox directory called backup
      +
      +    rclone copy /home/source remote:backup
      +
      +### Dropbox for business
      +
      +Rclone supports Dropbox for business and Team Folders.
      +
      +When using Dropbox for business `remote:` and `remote:path/to/file`
      +will refer to your personal folder.
      +
      +If you wish to see Team Folders you must use a leading `/` in the
      +path, so `rclone lsd remote:/` will refer to the root and show you all
      +Team Folders and your User Folder.
      +
      +You can then use team folders like this `remote:/TeamFolder` and
      +`remote:/TeamFolder/path/to/file`.
      +
      +A leading `/` for a Dropbox personal account will do nothing, but it
      +will take an extra HTTP transaction so it should be avoided.
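+
+For example, to list your Team Folders and then copy into one (an
+illustrative session; the Team Folder name is hypothetical):
+
+    rclone lsd remote:/
+    rclone copy /home/source "remote:/Marketing Team Folder/backup"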
      +
      +### Modified time and Hashes
      +
      +Dropbox supports modified times, but the only way to set a
      +modification time is to re-upload the file.
      +
      +This means that if you uploaded your data with an older version of
      +rclone which didn't support the v2 API and modified times, rclone will
      +decide to upload all your old data to fix the modification times.  If
      +you don't want this to happen use `--size-only` or `--checksum` flag
      +to stop it.
      +
      +Dropbox supports [its own hash
      +type](https://www.dropbox.com/developers/reference/content-hash) which
      +is checked for all transfers.
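+
+As a minimal sketch of avoiding the re-upload described above when
+coming from a very old rclone (paths are illustrative):
+
+    rclone sync --checksum /home/source remote:backup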
      +
      +### Restricted filename characters
      +
      +| Character | Value | Replacement |
      +| --------- |:-----:|:-----------:|
      +| NUL       | 0x00  | ␀           |
      +| /         | 0x2F  | /           |
      +| DEL       | 0x7F  | ␡           |
      +| \         | 0x5C  | \           |
      +
      +File names can also not end with the following characters.
      +These only get replaced if they are the last character in the name:
      +
      +| Character | Value | Replacement |
      +| --------- |:-----:|:-----------:|
      +| SP        | 0x20  | ␠           |
      +
      +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
      +as they can't be used in JSON strings.
      +
      +### Batch mode uploads {#batch-mode}
      +
      +Using batch mode uploads is very important for performance when using
      +the Dropbox API. See [the dropbox performance guide](https://developers.dropbox.com/dbx-performance-guide)
      +for more info.
      +
      +There are 3 modes rclone can use for uploads.
      +
      +#### --dropbox-batch-mode off
      +
      +In this mode rclone will not use upload batching. This was the default
      +before rclone v1.55. It has the disadvantage that it is very likely to
      +encounter `too_many_requests` errors like this
      +
      +    NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 15 seconds.
      +
      +When rclone receives these it has to wait for 15s or sometimes 300s
      +before continuing which really slows down transfers.
      +
      +This will happen especially if `--transfers` is large, so this mode
      +isn't recommended except for compatibility or investigating problems.
      +
      +#### --dropbox-batch-mode sync
      +
      +In this mode rclone will batch up uploads to the size specified by
      +`--dropbox-batch-size` and commit them together.
      +
      +Using this mode means you can use a much higher `--transfers`
      +parameter (32 or 64 works fine) without receiving `too_many_requests`
      +errors.
      +
      +This mode ensures full data integrity.
      +
      +Note that there may be a pause when quitting rclone while rclone
      +finishes up the last batch using this mode.
      +
      +#### --dropbox-batch-mode async
      +
      +In this mode rclone will batch up uploads to the size specified by
      +`--dropbox-batch-size` and commit them together.
      +
      +However it will not wait for the status of the batch to be returned to
      +the caller. This means rclone can use a much bigger batch size (much
      +bigger than `--transfers`), at the cost of not being able to check the
      +status of the upload.
      +
      +This provides the maximum possible upload speed especially with lots
      +of small files, however rclone can't check the file got uploaded
      +properly using this mode.
      +
      +If you are using this mode then using "rclone check" after the
      +transfer completes is recommended. Or you could do an initial transfer
      +with `--dropbox-batch-mode async` then do a final transfer with
      +`--dropbox-batch-mode sync` (the default).
      +
      +Note that there may be a pause when quitting rclone while rclone
      +finishes up the last batch using this mode.
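+
+A sketch of the two-pass approach described above, followed by the
+recommended verification (paths and --transfers value are illustrative):
+
+    rclone copy --dropbox-batch-mode async --transfers 64 /home/source remote:backup
+    rclone copy --dropbox-batch-mode sync /home/source remote:backup
+    rclone check /home/source remote:backup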
      +
      +
      +
      +### Standard options
      +
      +Here are the Standard options specific to dropbox (Dropbox).
      +
      +#### --dropbox-client-id
      +
      +OAuth Client Id.
      +
      +Leave blank normally.
      +
      +Properties:
      +
      +- Config:      client_id
      +- Env Var:     RCLONE_DROPBOX_CLIENT_ID
      +- Type:        string
      +- Required:    false
      +
      +#### --dropbox-client-secret
      +
      +OAuth Client Secret.
      +
      +Leave blank normally.
      +
      +Properties:
      +
      +- Config:      client_secret
      +- Env Var:     RCLONE_DROPBOX_CLIENT_SECRET
      +- Type:        string
      +- Required:    false
      +
      +### Advanced options
      +
      +Here are the Advanced options specific to dropbox (Dropbox).
      +
      +#### --dropbox-token
      +
      +OAuth Access Token as a JSON blob.
      +
      +Properties:
      +
      +- Config:      token
      +- Env Var:     RCLONE_DROPBOX_TOKEN
      +- Type:        string
      +- Required:    false
      +
      +#### --dropbox-auth-url
      +
      +Auth server URL.
      +
      +Leave blank to use the provider defaults.
      +
      +Properties:
      +
      +- Config:      auth_url
      +- Env Var:     RCLONE_DROPBOX_AUTH_URL
      +- Type:        string
      +- Required:    false
      +
      +#### --dropbox-token-url
      +
      +Token server url.
      +
      +Leave blank to use the provider defaults.
      +
      +Properties:
      +
      +- Config:      token_url
      +- Env Var:     RCLONE_DROPBOX_TOKEN_URL
      +- Type:        string
      +- Required:    false
      +
      +#### --dropbox-chunk-size
      +
      +Upload chunk size (< 150Mi).
      +
      +Any files larger than this will be uploaded in chunks of this size.
      +
      +Note that chunks are buffered in memory (one at a time) so rclone can
      +deal with retries.  Setting this larger will increase the speed
      +slightly (at most 10% for 128 MiB in tests) at the cost of using more
      +memory.  It can be set smaller if you are tight on memory.
      +
      +Properties:
      +
      +- Config:      chunk_size
      +- Env Var:     RCLONE_DROPBOX_CHUNK_SIZE
      +- Type:        SizeSuffix
      +- Default:     48Mi
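+
+For example, on a machine with memory to spare you might trade RAM for
+a small speed gain (an illustrative invocation; must stay under 150Mi):
+
+    rclone copy --dropbox-chunk-size 128Mi /home/source remote:backup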
      +
      +#### --dropbox-impersonate
      +
      +Impersonate this user when using a business account.
      +
      +Note that if you want to use impersonate, you should make sure this
      +flag is set when running "rclone config" as this will cause rclone to
      +request the "members.read" scope which it won't normally. This is
      +needed to lookup a members email address into the internal ID that
      +dropbox uses in the API.
      +
      +Using the "members.read" scope will require a Dropbox Team Admin
      +to approve during the OAuth flow.
      +
      +You will have to use your own App (setting your own client_id and
      +client_secret) to use this option as currently rclone's default set of
      +permissions doesn't include "members.read". This can be added once
      +v1.55 or later is in use everywhere.
      +
      +
      +Properties:
      +
      +- Config:      impersonate
      +- Env Var:     RCLONE_DROPBOX_IMPERSONATE
      +- Type:        string
      +- Required:    false
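+
+A sketch of using it once configured as described above (the remote
+name and email address are hypothetical, and the remote must use your
+own client_id/client_secret):
+
+    rclone lsd business: --dropbox-impersonate user@example.com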
      +
      +#### --dropbox-shared-files
      +
      +Instructs rclone to work on individual shared files.
      +
+In this mode rclone's features are extremely limited - only list (ls, lsl, etc.)
+operations and read operations (e.g. downloading) are supported.
+All other operations will be disabled.
      +
      +Properties:
      +
      +- Config:      shared_files
      +- Env Var:     RCLONE_DROPBOX_SHARED_FILES
      +- Type:        bool
      +- Default:     false
      +
      +#### --dropbox-shared-folders
      +
      +Instructs rclone to work on shared folders.
+
+When this flag is used with no path only the List operation is supported and
+all available shared folders will be listed. If you specify a path the first part
+will be interpreted as the name of a shared folder. Rclone will then try to mount
+this shared folder to the root namespace. On success rclone proceeds normally.
+The shared folder is now pretty much a normal folder and all normal operations
+are supported.
      +
      +Note that we don't unmount the shared folder afterwards so the 
      +--dropbox-shared-folders can be omitted after the first use of a particular 
      +shared folder.
      +
      +Properties:
      +
      +- Config:      shared_folders
      +- Env Var:     RCLONE_DROPBOX_SHARED_FOLDERS
      +- Type:        bool
      +- Default:     false
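+
+For example, to list the available shared folders and then work inside
+one (an illustrative session; the folder name is hypothetical):
+
+    rclone lsd remote: --dropbox-shared-folders
+    rclone ls "remote:Our Shared Folder" --dropbox-shared-folders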
      +
      +#### --dropbox-batch-mode
      +
      +Upload file batching sync|async|off.
      +
      +This sets the batch mode used by rclone.
      +
      +For full info see [the main docs](https://rclone.org/dropbox/#batch-mode)
      +
      +This has 3 possible values
      +
      +- off - no batching
      +- sync - batch uploads and check completion (default)
      +- async - batch upload and don't check completion
      +
      +Rclone will close any outstanding batches when it exits which may make
      +a delay on quit.
      +
      +
      +Properties:
      +
      +- Config:      batch_mode
      +- Env Var:     RCLONE_DROPBOX_BATCH_MODE
      +- Type:        string
      +- Default:     "sync"
      +
      +#### --dropbox-batch-size
      +
      +Max number of files in upload batch.
      +
      +This sets the batch size of files to upload. It has to be less than 1000.
      +
+By default this is 0 which means rclone will calculate the batch size
+depending on the setting of batch_mode.
      +
      +- batch_mode: async - default batch_size is 100
      +- batch_mode: sync - default batch_size is the same as --transfers
      +- batch_mode: off - not in use
      +
      +Rclone will close any outstanding batches when it exits which may make
      +a delay on quit.
      +
      +Setting this is a great idea if you are uploading lots of small files
      +as it will make them a lot quicker. You can use --transfers 32 to
      +maximise throughput.
      +
      +
      +Properties:
      +
      +- Config:      batch_size
      +- Env Var:     RCLONE_DROPBOX_BATCH_SIZE
      +- Type:        int
      +- Default:     0
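+
+Like any backend option, this can be set via its environment variable
+instead of the flag, e.g. (illustrative values):
+
+    export RCLONE_DROPBOX_BATCH_SIZE=100
+    rclone copy --transfers 32 /home/source remote:backup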
      +
      +#### --dropbox-batch-timeout
      +
      +Max time to allow an idle upload batch before uploading.
      +
      +If an upload batch is idle for more than this long then it will be
      +uploaded.
      +
      +The default for this is 0 which means rclone will choose a sensible
      +default based on the batch_mode in use.
      +
      +- batch_mode: async - default batch_timeout is 10s
      +- batch_mode: sync - default batch_timeout is 500ms
      +- batch_mode: off - not in use
      +
      +
      +Properties:
      +
      +- Config:      batch_timeout
      +- Env Var:     RCLONE_DROPBOX_BATCH_TIMEOUT
      +- Type:        Duration
      +- Default:     0s
      +
      +#### --dropbox-batch-commit-timeout
      +
      +Max time to wait for a batch to finish committing
      +
      +Properties:
      +
      +- Config:      batch_commit_timeout
      +- Env Var:     RCLONE_DROPBOX_BATCH_COMMIT_TIMEOUT
      +- Type:        Duration
      +- Default:     10m0s
      +
      +#### --dropbox-pacer-min-sleep
      +
      +Minimum time to sleep between API calls.
      +
      +Properties:
      +
      +- Config:      pacer_min_sleep
      +- Env Var:     RCLONE_DROPBOX_PACER_MIN_SLEEP
      +- Type:        Duration
      +- Default:     10ms
      +
      +#### --dropbox-encoding
      +
      +The encoding for the backend.
      +
      +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
      +
      +Properties:
      +
      +- Config:      encoding
      +- Env Var:     RCLONE_DROPBOX_ENCODING
      +- Type:        MultiEncoder
      +- Default:     Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot
      +
      +
      +
      +## Limitations
      +
      +Note that Dropbox is case insensitive so you can't have a file called
      +"Hello.doc" and one called "hello.doc".
      +
      +There are some file names such as `thumbs.db` which Dropbox can't
      +store.  There is a full list of them in the ["Ignored Files" section
      +of this document](https://www.dropbox.com/en/help/145).  Rclone will
      +issue an error message `File name disallowed - not uploading` if it
      +attempts to upload one of those file names, but the sync won't fail.
      +
      +Some errors may occur if you try to sync copyright-protected files
      +because Dropbox has its own [copyright detector](https://techcrunch.com/2014/03/30/how-dropbox-knows-when-youre-sharing-copyrighted-stuff-without-actually-looking-at-your-stuff/) that
      +prevents this sort of file being downloaded. This will return the error `ERROR :
      +/path/to/your/file: Failed to copy: failed to open source object:
      +path/restricted_content/.`
      +
      +If you have more than 10,000 files in a directory then `rclone purge
      +dropbox:dir` will return the error `Failed to purge: There are too
      +many files involved in this operation`.  As a work-around do an
      +`rclone delete dropbox:dir` followed by an `rclone rmdir dropbox:dir`.
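+
+That work-around as commands (the directory name is illustrative):
+
+    rclone delete dropbox:dir
+    rclone rmdir dropbox:dir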
      +
      +When using `rclone link` you'll need to set `--expire` if using a
      +non-personal account otherwise the visibility may not be correct.
      +(Note that `--expire` isn't supported on personal accounts). See the
      +[forum discussion](https://forum.rclone.org/t/rclone-link-dropbox-permissions/23211) and the 
      +[dropbox SDK issue](https://github.com/dropbox/dropbox-sdk-go-unofficial/issues/75).
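+
+For example, on a non-personal account (a sketch; choose the duration
+to taste):
+
+    rclone link --expire 1d remote:path/to/file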
      +
      +## Get your own Dropbox App ID
      +
      +When you use rclone with Dropbox in its default configuration you are using rclone's App ID. This is shared between all the rclone users.
      +
      +Here is how to create your own Dropbox App ID for rclone:
      +
+1. Log into the [Dropbox App console](https://www.dropbox.com/developers/apps/create) with your Dropbox Account (it need not
+be the same account as the Dropbox you want to access)
      +
      +2. Choose an API => Usually this should be `Dropbox API`
      +
      +3. Choose the type of access you want to use => `Full Dropbox` or `App Folder`. If you want to use Team Folders, `Full Dropbox` is required ([see here](https://www.dropboxforum.com/t5/Dropbox-API-Support-Feedback/How-to-create-team-folder-inside-my-app-s-folder/m-p/601005/highlight/true#M27911)).
      +
      +4. Name your App. The app name is global, so you can't use `rclone` for example
      +
      +5. Click the button `Create App`
      +
      +6. Switch to the `Permissions` tab. Enable at least the following permissions: `account_info.read`, `files.metadata.write`, `files.content.write`, `files.content.read`, `sharing.write`. The `files.metadata.read` and `sharing.read` checkboxes will be marked too. Click `Submit`
      +
      +7. Switch to the `Settings` tab. Fill `OAuth2 - Redirect URIs` as `http://localhost:53682/` and click on `Add`
      +
      +8. Find the `App key` and `App secret` values on the `Settings` tab. Use these values in rclone config to add a new remote or edit an existing remote. The `App key` setting corresponds to `client_id` in rclone config, the `App secret` corresponds to `client_secret`
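+
+Having created the app, one way to point an existing remote at it is
+`rclone config update` followed by a re-authentication (a sketch; the
+key and secret shown are placeholders):
+
+    rclone config update remote client_id XXXXXXXX client_secret YYYYYYYY
+    rclone config reconnect remote: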
      +
      +#  Enterprise File Fabric
      +
      +This backend supports [Storage Made Easy's Enterprise File
      +Fabric™](https://storagemadeeasy.com/about/) which provides a software
      +solution to integrate and unify File and Object Storage accessible
      +through a global file system.
      +
      +## Configuration
      +
      +The initial setup for the Enterprise File Fabric backend involves
      +getting a token from the Enterprise File Fabric which you need to
      +do in your browser.  `rclone config` walks you through it.
      +
      +Here is an example of how to make a remote called `remote`.  First run:
      +
      +     rclone config
      +
      +This will guide you through an interactive setup process:
      +
+    No remotes found, make a new one?
+    n) New remote
+    s) Set configuration password
+    q) Quit config
+    n/s/q> n
+    name> remote
+    Type of storage to configure.
+    Enter a string value. Press Enter for the default ("").
+    Choose a number from below, or type in your own value
+    [snip]
+    XX / Enterprise File Fabric
+       \ "filefabric"
+    [snip]
+    Storage> filefabric
+    ** See help for filefabric backend at: https://rclone.org/filefabric/ **
+
+    URL of the Enterprise File Fabric to connect to
+    Enter a string value. Press Enter for the default ("").
+    Choose a number from below, or type in your own value
+     1 / Storage Made Easy US
+       \ "https://storagemadeeasy.com"
+     2 / Storage Made Easy EU
+       \ "https://eu.storagemadeeasy.com"
+     3 / Connect to your Enterprise File Fabric
+       \ "https://yourfabric.smestorage.com"
+    url> https://yourfabric.smestorage.com/
+    ID of the root folder
+    Leave blank normally.
+
+    Fill in to make rclone start with directory of a given ID.
+
+    Enter a string value. Press Enter for the default ("").
+    root_folder_id>
+    Permanent Authentication Token
+
+    A Permanent Authentication Token can be created in the Enterprise File
+    Fabric, on the users Dashboard under Security, there is an entry
+    you'll see called "My Authentication Tokens". Click the Manage button
+    to create one.
+
+    These tokens are normally valid for several years.
+
+    For more info see: https://docs.storagemadeeasy.com/organisationcloud/api-tokens
+
+    Enter a string value. Press Enter for the default ("").
+    permanent_token> xxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxx
+    Edit advanced config? (y/n)
+    y) Yes
+    n) No (default)
+    y/n> n
+    Remote config
+    --------------------
+    [remote]
+    type = filefabric
+    url = https://yourfabric.smestorage.com/
+    permanent_token = xxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxx
+    --------------------
+    y) Yes this is OK (default)
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+
      +Once configured you can then use `rclone` like this,
      +
      +List directories in top level of your Enterprise File Fabric
      +
      +    rclone lsd remote:
      +
      +List all the files in your Enterprise File Fabric
      +
      +    rclone ls remote:
      +
      +To copy a local directory to an Enterprise File Fabric directory called backup
      +
      +    rclone copy /home/source remote:backup
      +
      +### Modified time and hashes
      +
      +The Enterprise File Fabric allows modification times to be set on
      +files accurate to 1 second.  These will be used to detect whether
      +objects need syncing or not.
      +
      +The Enterprise File Fabric does not support any data hashes at this time.
      +
      +### Restricted filename characters
      +
      +The [default restricted characters set](https://rclone.org/overview/#restricted-characters)
      +will be replaced.
      +
      +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
      +as they can't be used in JSON strings.
      +
      +### Empty files
      +
      +Empty files aren't supported by the Enterprise File Fabric. Rclone will therefore
      +upload an empty file as a single space with a mime type of
      +`application/vnd.rclone.empty.file` and files with that mime type are
      +treated as empty.
      +
      +### Root folder ID ###
      +
      +You can set the `root_folder_id` for rclone.  This is the directory
      +(identified by its `Folder ID`) that rclone considers to be the root
      +of your Enterprise File Fabric.
      +
      +Normally you will leave this blank and rclone will determine the
      +correct root to use itself.
      +
      +However you can set this to restrict rclone to a specific folder
      +hierarchy.
      +
      +In order to do this you will have to find the `Folder ID` of the
      +directory you wish rclone to display.  These aren't displayed in the
      +web interface, but you can use `rclone lsf` to find them, for example
      +
      +

+    $ rclone lsf --dirs-only -Fip --csv filefabric:
+    120673758,Burnt PDFs/
+    120673759,My Quick Uploads/
+    120673755,My Syncs/
+    120673756,My backups/
+    120673757,My contacts/
+    120673761,S3 Storage/
      +
      
      +The ID for "S3 Storage" would be `120673761`.
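+
+You could then restrict a remote to that folder via a config snippet
+like this (a sketch; the URL and ID are the illustrative values from
+the listing above):
+
+    [filefabric]
+    type = filefabric
+    url = https://yourfabric.smestorage.com/
+    root_folder_id = 120673761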
      +
      +
      +### Standard options
      +
      +Here are the Standard options specific to filefabric (Enterprise File Fabric).
      +
      +#### --filefabric-url
      +
      +URL of the Enterprise File Fabric to connect to.
      +
      +Properties:
      +
      +- Config:      url
      +- Env Var:     RCLONE_FILEFABRIC_URL
      +- Type:        string
      +- Required:    true
      +- Examples:
      +    - "https://storagemadeeasy.com"
      +        - Storage Made Easy US
      +    - "https://eu.storagemadeeasy.com"
      +        - Storage Made Easy EU
      +    - "https://yourfabric.smestorage.com"
      +        - Connect to your Enterprise File Fabric
      +
      +#### --filefabric-root-folder-id
      +
      +ID of the root folder.
       
      -URL of the Enterprise File Fabric to connect to
      -Enter a string value. Press Enter for the default ("").
      -Choose a number from below, or type in your own value
      - 1 / Storage Made Easy US
      -   \ "https://storagemadeeasy.com"
      - 2 / Storage Made Easy EU
      -   \ "https://eu.storagemadeeasy.com"
      - 3 / Connect to your Enterprise File Fabric
      -   \ "https://yourfabric.smestorage.com"
      -url> https://yourfabric.smestorage.com/
      -ID of the root folder
       Leave blank normally.
       
       Fill in to make rclone start with directory of a given ID.
       
      -Enter a string value. Press Enter for the default ("").
      -root_folder_id> 
      -Permanent Authentication Token
      +
      +Properties:
      +
      +- Config:      root_folder_id
      +- Env Var:     RCLONE_FILEFABRIC_ROOT_FOLDER_ID
      +- Type:        string
      +- Required:    false
      +
      +#### --filefabric-permanent-token
      +
      +Permanent Authentication Token.
       
       A Permanent Authentication Token can be created in the Enterprise File
       Fabric, on the users Dashboard under Security, there is an entry
      @@ -19206,8779 +22246,11176 @@ These tokens are normally valid for several years.
       
       For more info see: https://docs.storagemadeeasy.com/organisationcloud/api-tokens
       
      -Enter a string value. Press Enter for the default ("").
      -permanent_token> xxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxx
      -Edit advanced config? (y/n)
      -y) Yes
      -n) No (default)
      -y/n> n
      -Remote config
      ---------------------
      -[remote]
      -type = filefabric
      -url = https://yourfabric.smestorage.com/
      -permanent_token = xxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxx
      ---------------------
      -y) Yes this is OK (default)
      -e) Edit this remote
      -d) Delete this remote
      -y/e/d> y
      -

-Once configured you can then use rclone like this,
-
-List directories in top level of your Enterprise File Fabric
-
-    rclone lsd remote:
-
-List all the files in your Enterprise File Fabric
-
-    rclone ls remote:
-
-To copy a local directory to an Enterprise File Fabric directory called backup
-
-    rclone copy /home/source remote:backup
-
-Modified time and hashes
-
-The Enterprise File Fabric allows modification times to be set on files accurate to 1 second. These will be used to detect whether objects need syncing or not.
-
-The Enterprise File Fabric does not support any data hashes at this time.
-
-Restricted filename characters
-
-The default restricted characters set will be replaced.
-
-Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-
-Empty files
-
-Empty files aren't supported by the Enterprise File Fabric. Rclone will therefore upload an empty file as a single space with a mime type of application/vnd.rclone.empty.file and files with that mime type are treated as empty.
-
-Root folder ID
-
-You can set the root_folder_id for rclone. This is the directory (identified by its Folder ID) that rclone considers to be the root of your Enterprise File Fabric.
-
-Normally you will leave this blank and rclone will determine the correct root to use itself.
-
-However you can set this to restrict rclone to a specific folder hierarchy.
-
-In order to do this you will have to find the Folder ID of the directory you wish rclone to display. These aren't displayed in the web interface, but you can use rclone lsf to find them, for example
-
-    $ rclone lsf --dirs-only -Fip --csv filefabric:
-    120673758,Burnt PDFs/
-    120673759,My Quick Uploads/
-    120673755,My Syncs/
-    120673756,My backups/
-    120673757,My contacts/
-    120673761,S3 Storage/
-
-The ID for "S3 Storage" would be 120673761.

-Standard options
-
-Here are the Standard options specific to filefabric (Enterprise File Fabric).
-
---filefabric-url
-
-URL of the Enterprise File Fabric to connect to.
-
-Properties:
-
-• Config: url
-• Env Var: RCLONE_FILEFABRIC_URL
-• Type: string
-• Required: true
-• Examples:
-  • "https://storagemadeeasy.com"
-    • Storage Made Easy US
-  • "https://eu.storagemadeeasy.com"
-    • Storage Made Easy EU
-  • "https://yourfabric.smestorage.com"
-    • Connect to your Enterprise File Fabric
-
---filefabric-root-folder-id
-
-ID of the root folder.
-
-Leave blank normally.
-
-Fill in to make rclone start with directory of a given ID.
-
-Properties:
-
-• Config: root_folder_id
-• Env Var: RCLONE_FILEFABRIC_ROOT_FOLDER_ID
-• Type: string
-• Required: false
-
---filefabric-permanent-token
-
-Permanent Authentication Token.
-
-A Permanent Authentication Token can be created in the Enterprise File Fabric, on the users Dashboard under Security, there is an entry you'll see called "My Authentication Tokens". Click the Manage button to create one.
-
-These tokens are normally valid for several years.
-
-For more info see: https://docs.storagemadeeasy.com/organisationcloud/api-tokens
-
-Properties:
-
-• Config: permanent_token
-• Env Var: RCLONE_FILEFABRIC_PERMANENT_TOKEN
-• Type: string
-• Required: false
-
-Advanced options
-
-Here are the Advanced options specific to filefabric (Enterprise File Fabric).
-
---filefabric-token
-
-Session Token.
-
-This is a session token which rclone caches in the config file. It is usually valid for 1 hour.
-
-Don't set this value - rclone will set it automatically.
-
-Properties:
-
-• Config: token
-• Env Var: RCLONE_FILEFABRIC_TOKEN
-• Type: string
-• Required: false
-
---filefabric-token-expiry
-
-Token expiry time.
-
-Don't set this value - rclone will set it automatically.
-
-Properties:
-
-• Config: token_expiry
-• Env Var: RCLONE_FILEFABRIC_TOKEN_EXPIRY
-• Type: string
-• Required: false
-
---filefabric-version
-
-Version read from the file fabric.
-
-Don't set this value - rclone will set it automatically.
-
-Properties:
-
-• Config: version
-• Env Var: RCLONE_FILEFABRIC_VERSION
-• Type: string
-• Required: false
-
---filefabric-encoding
-
-The encoding for the backend.
-
-See the encoding section in the overview for more info.
-
-Properties:
-
-• Config: encoding
-• Env Var: RCLONE_FILEFABRIC_ENCODING
-• Type: MultiEncoder
-• Default: Slash,Del,Ctl,InvalidUtf8,Dot

-FTP
-
-FTP is the File Transfer Protocol. Rclone FTP support is provided using the github.com/jlaffaye/ftp package.
-
-Limitations of Rclone's FTP backend
-
-Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory.
-
-Configuration
-
-To create an FTP configuration named remote, run
-
-    rclone config
-
-Rclone config guides you through an interactive setup process. A minimal rclone FTP remote definition only requires host, username and password. For an anonymous FTP server, see below.
-
-No remotes found, make a new one?
      -n) New remote
      -r) Rename remote
      -c) Copy remote
      -s) Set configuration password
      -q) Quit config
      -n/r/c/s/q> n
      -name> remote
      -Type of storage to configure.
      -Enter a string value. Press Enter for the default ("").
      -Choose a number from below, or type in your own value
      -[snip]
      -XX / FTP
      -   \ "ftp"
      -[snip]
      -Storage> ftp
      -** See help for ftp backend at: https://rclone.org/ftp/ **
       
      -FTP host to connect to
      -Enter a string value. Press Enter for the default ("").
      -Choose a number from below, or type in your own value
      - 1 / Connect to ftp.example.com
      -   \ "ftp.example.com"
      -host> ftp.example.com
      -FTP username
      -Enter a string value. Press Enter for the default ("$USER").
      -user> 
      -FTP port number
      -Enter a signed integer. Press Enter for the default (21).
      -port> 
      -FTP password
      -y) Yes type in my own password
      -g) Generate random password
      -y/g> y
      -Enter the password:
      -password:
      -Confirm the password:
      -password:
      -Use FTP over TLS (Implicit)
      -Enter a boolean value (true or false). Press Enter for the default ("false").
      -tls> 
      -Use FTP over TLS (Explicit)
      -Enter a boolean value (true or false). Press Enter for the default ("false").
      -explicit_tls> 
      -Remote config
      ---------------------
      -[remote]
      -type = ftp
      -host = ftp.example.com
      -pass = *** ENCRYPTED ***
      ---------------------
      -y) Yes this is OK
      -e) Edit this remote
      -d) Delete this remote
      -y/e/d> y
-
-To see all directories in the home directory of remote
-
-    rclone lsd remote:
-
-Make a new directory
-
-    rclone mkdir remote:path/to/directory
-
-List the contents of a directory
-
-    rclone ls remote:path/to/directory
-
-Sync /home/local/directory to the remote directory, deleting any excess files in the directory.
-
-    rclone sync --interactive /home/local/directory remote:directory
-
-Anonymous FTP
-
-When connecting to a FTP server that allows anonymous login, you can use the special "anonymous" username. Traditionally, this user account accepts any string as a password, although it is common to use either the password "anonymous" or "guest". Some servers require the use of a valid e-mail address as password.
-
-Using on-the-fly or connection string remotes makes it easy to access such servers, without requiring any configuration in advance. The following are examples of that:
-
-    rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=$(rclone obscure dummy)
-    rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=$(rclone obscure dummy):
-
-The above examples work in Linux shells and in PowerShell, but not Windows Command Prompt. They execute the rclone obscure command to create a password string in the format required by the pass option. The following examples are exactly the same, except use an already obscured string representation of the same password "dummy", and therefore works even in Windows Command Prompt:
-
-    rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM
-    rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM:
-

-Implicit TLS
-
-Rclone FTP supports implicit FTP over TLS servers (FTPS). This has to be enabled in the FTP backend config for the remote, or with --ftp-tls. The default FTPS port is 990, not 21 and can be set with --ftp-port.
-
-Restricted filename characters
-
-In addition to the default restricted characters set the following characters are also replaced:
-
-File names cannot end with the following characters. Replacement is limited to the last character in a file name:
-
-  Character   Value   Replacement
-  ---------   -----   -----------
-  SP          0x20    ␠
-
-Not all FTP servers can have all characters in file names, for example:
-
-  FTP Server   Forbidden characters
-  ----------   --------------------
-  proftpd      *
-  pureftpd     \ [ ]
-
-This backend's interactive configuration wizard provides a selection of sensible encoding settings for major FTP servers: ProFTPd, PureFTPd, VsFTPd. Just hit a selection number when prompted.

-
-Standard options
-
-Here are the Standard options specific to ftp (FTP).
-
---ftp-host
-
-FTP host to connect to.
-
-E.g. "ftp.example.com".
-
-Properties:
-
-• Config: host
-• Env Var: RCLONE_FTP_HOST
-• Type: string
-• Required: true
-
---ftp-user
-
-FTP username.
-
-Properties:
-
-• Config: user
-• Env Var: RCLONE_FTP_USER
-• Type: string
-• Default: "$USER"
-
---ftp-port
-
-FTP port number.
-
-Properties:
-
-• Config: port
-• Env Var: RCLONE_FTP_PORT
-• Type: int
-• Default: 21
-
---ftp-pass
-
-FTP password.
-
-NB Input to this must be obscured - see rclone obscure.
-
-Properties:
-
-• Config: pass
-• Env Var: RCLONE_FTP_PASS
-• Type: string
-• Required: false
-
---ftp-tls
-
-Use Implicit FTPS (FTP over TLS).
-
-When using implicit FTP over TLS the client connects using TLS right from the start which breaks compatibility with non-TLS-aware servers. This is usually served over port 990 rather than port 21. Cannot be used in combination with explicit FTPS.
-
-Properties:
-
-• Config: tls
-• Env Var: RCLONE_FTP_TLS
-• Type: bool
-• Default: false
-
---ftp-explicit-tls
-
-Use Explicit FTPS (FTP over TLS).
-
-When using explicit FTP over TLS the client explicitly requests security from the server in order to upgrade a plain text connection to an encrypted one. Cannot be used in combination with implicit FTPS.
-
-Properties:
-
-• Config: explicit_tls
-• Env Var: RCLONE_FTP_EXPLICIT_TLS
-• Type: bool
-• Default: false
-
-Advanced options
-
-Here are the Advanced options specific to ftp (FTP).
-
---ftp-concurrency
-
-Maximum number of FTP simultaneous connections, 0 for unlimited.
-
-Note that setting this is very likely to cause deadlocks so it should be used with care.
-
-If you are doing a sync or copy then make sure concurrency is one more than the sum of --transfers and --checkers.
-
-If you use --check-first then it just needs to be one more than the maximum of --checkers and --transfers.
-
-So for concurrency 3 you'd use --checkers 2 --transfers 2 --check-first or --checkers 1 --transfers 1.
-
-Properties:
-
-• Config: concurrency
-• Env Var: RCLONE_FTP_CONCURRENCY
-• Type: int
-• Default: 0
-
---ftp-no-check-certificate
-
-Do not verify the TLS certificate of the server.
-
-Properties:
-
-• Config: no_check_certificate
-• Env Var: RCLONE_FTP_NO_CHECK_CERTIFICATE
-• Type: bool
-• Default: false
-
---ftp-disable-epsv
-
-Disable using EPSV even if server advertises support.
-
-Properties:
-
-• Config: disable_epsv
-• Env Var: RCLONE_FTP_DISABLE_EPSV
-• Type: bool
-• Default: false
-
---ftp-disable-mlsd
-
-Disable using MLSD even if server advertises support.
-
-Properties:
-
-• Config: disable_mlsd
-• Env Var: RCLONE_FTP_DISABLE_MLSD
-• Type: bool
-• Default: false
-
---ftp-disable-utf8
-
-Disable using UTF-8 even if server advertises support.
-
-Properties:
-
-• Config: disable_utf8
-• Env Var: RCLONE_FTP_DISABLE_UTF8
-• Type: bool
-• Default: false
-
---ftp-writing-mdtm
-
-Use MDTM to set modification time (VsFtpd quirk)
-
-Properties:
-
-• Config: writing_mdtm
-• Env Var: RCLONE_FTP_WRITING_MDTM
-• Type: bool
-• Default: false
-
---ftp-force-list-hidden
-
-Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD.
-
-Properties:
-
-• Config: force_list_hidden
-• Env Var: RCLONE_FTP_FORCE_LIST_HIDDEN
-• Type: bool
-• Default: false
-
---ftp-idle-timeout
-
-Max time before closing idle connections.
-
-If no connections have been returned to the connection pool in the time given, rclone will empty the connection pool.
-
-Set to 0 to keep connections indefinitely.
-
-Properties:
-
-• Config: idle_timeout
-• Env Var: RCLONE_FTP_IDLE_TIMEOUT
-• Type: Duration
-• Default: 1m0s
-
---ftp-close-timeout
-
-Maximum time to wait for a response to close.
-
-Properties:
-
-• Config: close_timeout
-• Env Var: RCLONE_FTP_CLOSE_TIMEOUT
-• Type: Duration
-• Default: 1m0s
-
---ftp-tls-cache-size
-
-Size of TLS session cache for all control and data connections.
-
-TLS cache allows to resume TLS sessions and reuse PSK between connections. Increase if default size is not enough resulting in TLS resumption errors. Enabled by default. Use 0 to disable.
-
-Properties:
-
-• Config: tls_cache_size
-• Env Var: RCLONE_FTP_TLS_CACHE_SIZE
-• Type: int
-• Default: 32
-
---ftp-disable-tls13
-
-Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
-
-Properties:
-
-• Config: disable_tls13
-• Env Var: RCLONE_FTP_DISABLE_TLS13
-• Type: bool
-• Default: false
-
---ftp-shut-timeout
-
-Maximum time to wait for data connection closing status.
-
-Properties:
-
-• Config: shut_timeout
-• Env Var: RCLONE_FTP_SHUT_TIMEOUT
-• Type: Duration
-• Default: 1m0s
-
---ftp-ask-password
-
-Allow asking for FTP password when needed.
-
-If this is set and no password is supplied then rclone will ask for a password
-
-Properties:
-
-• Config: ask_password
-• Env Var: RCLONE_FTP_ASK_PASSWORD
-• Type: bool
-• Default: false
-
---ftp-encoding
-
-The encoding for the backend.
-
-See the encoding section in the overview for more info.
-
-Properties:
-
-• Config: encoding
-• Env Var: RCLONE_FTP_ENCODING
-• Type: MultiEncoder
-• Default: Slash,Del,Ctl,RightSpace,Dot
-• Examples:
-  • "Asterisk,Ctl,Dot,Slash"
-    • ProFTPd can't handle '*' in file names
-  • "BackSlash,Ctl,Del,Dot,RightSpace,Slash,SquareBracket"
-    • PureFTPd can't handle '[]' or '*' in file names
-  • "Ctl,LeftPeriod,Slash"
-    • VsFTPd can't handle file names starting with dot
-

-Limitations
-
-FTP servers acting as rclone remotes must support passive mode. The mode cannot be configured as passive is the only supported one. Rclone's FTP implementation is not compatible with active mode as the library it uses doesn't support it. This will likely never be supported due to security concerns.
-
-Rclone's FTP backend does not support any checksums but can compare file sizes.
-
-rclone about is not supported by the FTP backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.
-
-See List of backends that do not support rclone about and rclone about
-
-The implementation of : --dump headers, --dump bodies, --dump auth for debugging isn't the same as for rclone HTTP based backends - it has less fine grained control.
-
---timeout isn't supported (but --contimeout is).
-
---bind isn't supported.
-
-Rclone's FTP backend could support server-side move but does not at present.
-
-The ftp_proxy environment variable is not currently supported.
-
-Modified time
-
-File modification time (timestamps) is supported to 1 second resolution for major FTP servers: ProFTPd, PureFTPd, VsFTPd, and FileZilla FTP server. The VsFTPd server has non-standard implementation of time related protocol commands and needs a special configuration setting: writing_mdtm = true.
-
-Support for precise file time with other FTP servers varies depending on what protocol extensions they advertise. If all the MLSD, MDTM and MFTM extensions are present, rclone will use them together to provide precise time. Otherwise the times you see on the FTP server through rclone are those of the last file upload.
-
-You can use the following command to check whether rclone can use precise time with your FTP server: rclone backend features your_ftp_remote: (the trailing colon is important). Look for the number in the line tagged by Precision designating the remote time precision expressed as nanoseconds. A value of 1000000000 means that file time precision of 1 second is available. A value of 3153600000000000000 (or another large number) means "unsupported".
-

-Google Cloud Storage
-
-Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.
-
-Configuration
-
-The initial setup for google cloud storage involves getting a token from Google Cloud Storage which you need to do in your browser. rclone config walks you through it.
-
-Here is an example of how to make a remote called remote. First run:
-
-    rclone config
-
-This will guide you through an interactive setup process:
-
-n) New remote
      -d) Delete remote
      -q) Quit config
      -e/n/d/q> n
      -name> remote
      -Type of storage to configure.
      -Choose a number from below, or type in your own value
      -[snip]
      -XX / Google Cloud Storage (this is not Google Drive)
      -   \ "google cloud storage"
      -[snip]
      -Storage> google cloud storage
      -Google Application Client Id - leave blank normally.
      -client_id>
      -Google Application Client Secret - leave blank normally.
      -client_secret>
      -Project number optional - needed only for list/create/delete buckets - see your developer console.
      -project_number> 12345678
      -Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
      -service_account_file>
      -Access Control List for new objects.
      -Choose a number from below, or type in your own value
      - 1 / Object owner gets OWNER access, and all Authenticated Users get READER access.
      -   \ "authenticatedRead"
      - 2 / Object owner gets OWNER access, and project team owners get OWNER access.
      -   \ "bucketOwnerFullControl"
      - 3 / Object owner gets OWNER access, and project team owners get READER access.
      -   \ "bucketOwnerRead"
      - 4 / Object owner gets OWNER access [default if left blank].
      -   \ "private"
      - 5 / Object owner gets OWNER access, and project team members get access according to their roles.
      -   \ "projectPrivate"
      - 6 / Object owner gets OWNER access, and all Users get READER access.
      -   \ "publicRead"
      -object_acl> 4
      -Access Control List for new buckets.
      -Choose a number from below, or type in your own value
      - 1 / Project team owners get OWNER access, and all Authenticated Users get READER access.
      -   \ "authenticatedRead"
      - 2 / Project team owners get OWNER access [default if left blank].
      -   \ "private"
      - 3 / Project team members get access according to their roles.
      -   \ "projectPrivate"
      - 4 / Project team owners get OWNER access, and all Users get READER access.
      -   \ "publicRead"
      - 5 / Project team owners get OWNER access, and all Users get WRITER access.
      -   \ "publicReadWrite"
      -bucket_acl> 2
      -Location for the newly created buckets.
      -Choose a number from below, or type in your own value
      - 1 / Empty for default location (US).
      -   \ ""
      - 2 / Multi-regional location for Asia.
      -   \ "asia"
      - 3 / Multi-regional location for Europe.
      -   \ "eu"
      - 4 / Multi-regional location for United States.
      -   \ "us"
      - 5 / Taiwan.
      -   \ "asia-east1"
      - 6 / Tokyo.
      -   \ "asia-northeast1"
      - 7 / Singapore.
      -   \ "asia-southeast1"
      - 8 / Sydney.
      -   \ "australia-southeast1"
      - 9 / Belgium.
      -   \ "europe-west1"
      -10 / London.
      -   \ "europe-west2"
      -11 / Iowa.
      -   \ "us-central1"
      -12 / South Carolina.
      -   \ "us-east1"
      -13 / Northern Virginia.
      -   \ "us-east4"
      -14 / Oregon.
      -   \ "us-west1"
      -location> 12
      -The storage class to use when storing objects in Google Cloud Storage.
      -Choose a number from below, or type in your own value
      - 1 / Default
      -   \ ""
      - 2 / Multi-regional storage class
      -   \ "MULTI_REGIONAL"
      - 3 / Regional storage class
      -   \ "REGIONAL"
      - 4 / Nearline storage class
      -   \ "NEARLINE"
      - 5 / Coldline storage class
      -   \ "COLDLINE"
      - 6 / Durable reduced availability storage class
      -   \ "DURABLE_REDUCED_AVAILABILITY"
      -storage_class> 5
      -Remote config
      -Use web browser to automatically authenticate rclone with remote?
      - * Say Y if the machine running rclone has a web browser you can use
      - * Say N if running rclone on a (remote) machine without web browser access
      -If not sure try Y. If Y failed, try N.
      -y) Yes
      -n) No
      -y/n> y
      -If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
      -Log in and authorize rclone for access
      -Waiting for code...
      -Got code
      ---------------------
      -[remote]
      -type = google cloud storage
      -client_id =
      -client_secret =
      -token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null}
      -project_number = 12345678
      -object_acl = private
      -bucket_acl = private
      ---------------------
      -y) Yes this is OK
      -e) Edit this remote
      -d) Delete this remote
      -y/e/d> y
      -

      See the remote setup docs for how to set it up on a machine with no Internet browser available.

      -

Note that rclone runs a webserver on your local machine to collect the token as returned from Google if using web browser to automatically authenticate. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.

      -

      This remote is called remote and can now be used like this

      -

      See all the buckets in your project

      -
      rclone lsd remote:
      -

      Make a new bucket

      -
      rclone mkdir remote:bucket
      -

      List the contents of a bucket

      -
      rclone ls remote:bucket
      -

      Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

      -
      rclone sync --interactive /home/local/directory remote:bucket
      -

      Service Account support

      -

      You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.

      -

      To get credentials for Google Cloud Platform IAM Service Accounts, please head to the Service Account section of the Google Developer Console. Service Accounts behave just like normal User permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication.

      -

      To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt and rclone won't use the browser based authentication flow. If you'd rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials with the actual contents of the file instead, or set the equivalent environment variable.
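
For illustration, an unattended remote set up this way might look like the following in your rclone config file - a sketch only, where the remote name and key path are examples and the project number is the placeholder used above:

    [gcs-sa]
    type = google cloud storage
    project_number = 12345678
    service_account_file = /path/to/service-account-credentials.json

Alternatively, for an existing remote the same path can be supplied through the equivalent environment variable:

    # the remote name gcs-sa is illustrative
    RCLONE_GCS_SERVICE_ACCOUNT_FILE=/path/to/service-account-credentials.json rclone lsd gcs-sa: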

      -

      Anonymous Access

      -

      For downloads of objects that permit public access you can configure rclone to use anonymous access by setting anonymous to true. With unauthorized access you can't write or create files but only read or list those buckets and objects that have public read access.
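
As a sketch (the remote name and bucket below are examples only), such a read-only remote could be defined as:

    [gcs-public]
    type = google cloud storage
    anonymous = true

and then used to list a bucket that allows public read access:

    # bucket name is illustrative
    rclone ls gcs-public:bucket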

      -

      Application Default Credentials

      -

      If no other source of credentials is provided, rclone will fall back to Application Default Credentials this is useful both when you already have configured authentication for your developer account, or in production when running on a google compute host. Note that if running in docker, you may need to run additional commands on your google compute machine - see this page.

      -

      Note that in the case application default credentials are used, there is no need to explicitly configure a project number.

      -

      --fast-list

      -

      This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

      -

      Custom upload headers

      -

You can set custom upload headers with the --header-upload flag. Google Cloud Storage supports the headers as described in the working with metadata documentation.

      -
        -
      • Cache-Control
      • -
      • Content-Disposition
      • -
      • Content-Encoding
      • -
      • Content-Language
      • -
      • Content-Type
      • -
      • X-Goog-Storage-Class
      • -
      • X-Goog-Meta-
      • -
      -

      Eg --header-upload "Content-Type text/potato"

      -

      Note that the last of these is for setting custom metadata in the form --header-upload "x-goog-meta-key: value"
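
For example, to set a standard header and a custom metadata entry in one upload (the file name, bucket and metadata values are illustrative):

    rclone copy file.txt remote:bucket --header-upload "Cache-Control: no-cache" --header-upload "x-goog-meta-source: laptop"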

      -

      Modification time

      -

      Google Cloud Storage stores md5sum natively. Google's gsutil tool stores modification time with one-second precision as goog-reserved-file-mtime in file metadata.

      -

      To ensure compatibility with gsutil, rclone stores modification time in 2 separate metadata entries. mtime uses RFC3339 format with one-nanosecond precision. goog-reserved-file-mtime uses Unix timestamp format with one-second precision. To get modification time from object metadata, rclone reads the metadata in the following order: mtime, goog-reserved-file-mtime, object updated time.

      -

      Note that rclone's default modify window is 1ns. Files uploaded by gsutil only contain timestamps with one-second precision. If you use rclone to sync files previously uploaded by gsutil, rclone will attempt to update modification time for all these files. To avoid these possibly unnecessary updates, use --modify-window 1s.
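
For example, to sync on top of files previously uploaded by gsutil without rewriting all their timestamps:

    rclone sync --interactive --modify-window 1s /home/local/directory remote:bucket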

      -

      Restricted filename characters

  Character   Value   Replacement
  ---------   -----   -----------
  NUL         0x00    ␀
  LF          0x0A    ␊
  CR          0x0D    ␍
  /           0x2F    ／
      -

      Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

      -

      Standard options

      -

      Here are the Standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).

      -

      --gcs-client-id

      -

      OAuth Client Id.

      -

      Leave blank normally.

      -

      Properties:

      -
        -
      • Config: client_id
      • -
      • Env Var: RCLONE_GCS_CLIENT_ID
      • -
      • Type: string
      • -
      • Required: false
      • -
      -

      --gcs-client-secret

      -

      OAuth Client Secret.

      -

      Leave blank normally.

      -

      Properties:

      -
        -
      • Config: client_secret
      • -
      • Env Var: RCLONE_GCS_CLIENT_SECRET
      • -
      • Type: string
      • -
      • Required: false
      • -
      -

      --gcs-project-number

      -

      Project number.

      -

      Optional - needed only for list/create/delete buckets - see your developer console.

      -

      Properties:

      -
        -
      • Config: project_number
      • -
      • Env Var: RCLONE_GCS_PROJECT_NUMBER
      • -
      • Type: string
      • -
      • Required: false
      • -
      -

      --gcs-user-project

      -

      User project.

      -

      Optional - needed only for requester pays.

      -

      Properties:

      -
        -
      • Config: user_project
      • -
      • Env Var: RCLONE_GCS_USER_PROJECT
      • -
      • Type: string
      • -
      • Required: false
      • -
      -

      --gcs-service-account-file

      -

      Service Account Credentials JSON file path.

      -

Leave blank normally. Needed only if you want to use SA instead of interactive login.

      -

      Leading ~ will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}.

      -

      Properties:

      -
        -
      • Config: service_account_file
      • -
      • Env Var: RCLONE_GCS_SERVICE_ACCOUNT_FILE
      • -
      • Type: string
      • -
      • Required: false
      • -
      -

      --gcs-service-account-credentials

      -

      Service Account Credentials JSON blob.

      -

Leave blank normally. Needed only if you want to use SA instead of interactive login.

      -

      Properties:

      -
        -
      • Config: service_account_credentials
      • -
      • Env Var: RCLONE_GCS_SERVICE_ACCOUNT_CREDENTIALS
      • -
      • Type: string
      • -
      • Required: false
      • -
      -

      --gcs-anonymous

      -

      Access public buckets and objects without credentials.

      -

      Set to 'true' if you just want to download files and don't configure credentials.

      -

      Properties:

      -
        -
      • Config: anonymous
      • -
      • Env Var: RCLONE_GCS_ANONYMOUS
      • -
      • Type: bool
      • -
      • Default: false
      • -
      -

      --gcs-object-acl

      -

      Access Control List for new objects.

      -

      Properties:

      -
        -
      • Config: object_acl
      • -
      • Env Var: RCLONE_GCS_OBJECT_ACL
      • -
      • Type: string
      • -
      • Required: false
      • -
      • Examples: -
          -
        • "authenticatedRead" -
            -
          • Object owner gets OWNER access.
          • -
          • All Authenticated Users get READER access.
          • -
        • -
        • "bucketOwnerFullControl" -
            -
          • Object owner gets OWNER access.
          • -
          • Project team owners get OWNER access.
          • -
        • -
        • "bucketOwnerRead" -
            -
          • Object owner gets OWNER access.
          • -
          • Project team owners get READER access.
          • -
        • -
        • "private" -
            -
          • Object owner gets OWNER access.
          • -
          • Default if left blank.
          • -
        • -
        • "projectPrivate" -
            -
          • Object owner gets OWNER access.
          • -
          • Project team members get access according to their roles.
          • -
        • -
        • "publicRead" -
            -
          • Object owner gets OWNER access.
          • -
          • All Users get READER access.
          • -
        • -
      • -
      -

      --gcs-bucket-acl

      -

      Access Control List for new buckets.

      -

      Properties:

      -
        -
      • Config: bucket_acl
      • -
      • Env Var: RCLONE_GCS_BUCKET_ACL
      • -
      • Type: string
      • -
      • Required: false
      • -
      • Examples: -
          -
        • "authenticatedRead" -
            -
          • Project team owners get OWNER access.
          • -
          • All Authenticated Users get READER access.
          • -
        • -
        • "private" -
            -
          • Project team owners get OWNER access.
          • -
          • Default if left blank.
          • -
        • -
        • "projectPrivate" -
            -
          • Project team members get access according to their roles.
          • -
        • -
        • "publicRead" -
            -
          • Project team owners get OWNER access.
          • -
          • All Users get READER access.
          • -
        • -
        • "publicReadWrite" -
            -
          • Project team owners get OWNER access.
          • -
          • All Users get WRITER access.
          • -
        • -
      • -
      -

      --gcs-bucket-policy-only

      -

      Access checks should use bucket-level IAM policies.

      -

      If you want to upload objects to a bucket with Bucket Policy Only set then you will need to set this.

      -

      When it is set, rclone:

      -
        -
      • ignores ACLs set on buckets
      • -
      • ignores ACLs set on objects
      • -
      • creates buckets with Bucket Policy Only set
      • -
      -

      Docs: https://cloud.google.com/storage/docs/bucket-policy-only

      -

      Properties:

      -
        -
      • Config: bucket_policy_only
      • -
      • Env Var: RCLONE_GCS_BUCKET_POLICY_ONLY
      • -
      • Type: bool
      • -
      • Default: false
      • -
      -

      --gcs-location

      -

      Location for the newly created buckets.

      -

      Properties:

      -
        -
      • Config: location
      • -
      • Env Var: RCLONE_GCS_LOCATION
      • -
      • Type: string
      • -
      • Required: false
      • -
      • Examples: -
          -
        • "" -
            -
          • Empty for default location (US)
          • -
        • -
        • "asia" -
            -
          • Multi-regional location for Asia
          • -
        • -
        • "eu" -
            -
          • Multi-regional location for Europe
          • -
        • -
        • "us" -
            -
          • Multi-regional location for United States
          • -
        • -
        • "asia-east1" -
            -
          • Taiwan
          • -
        • -
        • "asia-east2" -
            -
          • Hong Kong
          • -
        • -
        • "asia-northeast1" -
            -
          • Tokyo
          • -
        • -
        • "asia-northeast2" -
            -
          • Osaka
          • -
        • -
        • "asia-northeast3" -
            -
          • Seoul
          • -
        • -
        • "asia-south1" -
            -
          • Mumbai
          • -
        • -
        • "asia-south2" -
            -
          • Delhi
          • -
        • -
        • "asia-southeast1" -
            -
          • Singapore
          • -
        • -
        • "asia-southeast2" -
            -
          • Jakarta
          • -
        • -
        • "australia-southeast1" -
            -
          • Sydney
          • -
        • -
        • "australia-southeast2" -
            -
          • Melbourne
          • -
        • -
        • "europe-north1" -
            -
          • Finland
          • -
        • -
        • "europe-west1" -
            -
          • Belgium
          • -
        • -
        • "europe-west2" -
            -
          • London
          • -
        • -
        • "europe-west3" -
            -
          • Frankfurt
          • -
        • -
        • "europe-west4" -
            -
          • Netherlands
          • -
        • -
        • "europe-west6" -
            -
          • Zürich
          • -
        • -
        • "europe-central2" -
            -
          • Warsaw
          • -
        • -
        • "us-central1" -
            -
          • Iowa
          • -
        • -
        • "us-east1" -
            -
          • South Carolina
          • -
        • -
        • "us-east4" -
            -
          • Northern Virginia
          • -
        • -
        • "us-west1" -
            -
          • Oregon
          • -
        • -
        • "us-west2" -
            -
          • California
          • -
        • -
        • "us-west3" -
            -
          • Salt Lake City
          • -
        • -
        • "us-west4" -
            -
          • Las Vegas
          • -
        • -
        • "northamerica-northeast1" -
            -
          • Montréal
          • -
        • -
        • "northamerica-northeast2" -
            -
          • Toronto
          • -
        • -
        • "southamerica-east1" -
            -
          • São Paulo
          • -
        • -
        • "southamerica-west1" -
            -
          • Santiago
          • -
        • -
        • "asia1" -
            -
          • Dual region: asia-northeast1 and asia-northeast2.
          • -
        • -
        • "eur4" -
            -
          • Dual region: europe-north1 and europe-west4.
          • -
        • -
        • "nam4" -
            -
          • Dual region: us-central1 and us-east1.
          • -
        • -
      • -
      -

      --gcs-storage-class

      -

      The storage class to use when storing objects in Google Cloud Storage.

      -

      Properties:

      -
        -
      • Config: storage_class
      • -
      • Env Var: RCLONE_GCS_STORAGE_CLASS
      • -
      • Type: string
      • -
      • Required: false
      • -
      • Examples: -
          -
        • "" -
            -
          • Default
          • -
        • -
        • "MULTI_REGIONAL" -
            -
          • Multi-regional storage class
          • -
        • -
        • "REGIONAL" -
            -
          • Regional storage class
          • -
        • -
        • "NEARLINE" -
            -
          • Nearline storage class
          • -
        • -
        • "COLDLINE" -
            -
          • Coldline storage class
          • -
        • -
        • "ARCHIVE" -
            -
          • Archive storage class
          • -
        • -
        • "DURABLE_REDUCED_AVAILABILITY" -
            -
          • Durable reduced availability storage class
          • -
        • -
      • -
      -

      --gcs-env-auth

      -

Get GCP IAM credentials from runtime (environment variables or instance metadata if no env vars).

      -

Only applies if service_account_file and service_account_credentials are blank.

      -

      Properties:

      -
        -
      • Config: env_auth
      • -
      • Env Var: RCLONE_GCS_ENV_AUTH
      • -
      • Type: bool
      • -
      • Default: false
      • -
      • Examples: -
          -
        • "false" -
            -
          • Enter credentials in the next step.
          • -
        • -
        • "true" -
            -
          • Get GCP IAM credentials from the environment (env vars or IAM).
          • -
        • -
      • -
      -

      Advanced options

      -

      Here are the Advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).

      -

      --gcs-token

      -

      OAuth Access Token as a JSON blob.

      -

      Properties:

      -
        -
      • Config: token
      • -
      • Env Var: RCLONE_GCS_TOKEN
      • -
      • Type: string
      • -
      • Required: false
      • -
      -

      --gcs-auth-url

      -

      Auth server URL.

      -

      Leave blank to use the provider defaults.

      -

      Properties:

      -
        -
      • Config: auth_url
      • -
      • Env Var: RCLONE_GCS_AUTH_URL
      • -
      • Type: string
      • -
      • Required: false
      • -
      -

      --gcs-token-url

      -

Token server URL.

      -

      Leave blank to use the provider defaults.

      -

      Properties:

      -
        -
      • Config: token_url
      • -
      • Env Var: RCLONE_GCS_TOKEN_URL
      • -
      • Type: string
      • -
      • Required: false
      • -
      -

      --gcs-directory-markers

      -

Upload an empty object with a trailing slash when a new directory is created.

      -

Empty folders are unsupported for bucket-based remotes; this option creates an empty object ending with "/" to persist the folder.

      -

      Properties:

      -
        -
      • Config: directory_markers
      • -
      • Env Var: RCLONE_GCS_DIRECTORY_MARKERS
      • -
      • Type: bool
      • -
      • Default: false
      • -
      -

      --gcs-no-check-bucket

      -

      If set, don't attempt to check the bucket exists or create it.

      -

      This can be useful when trying to minimise the number of transactions rclone does if you know the bucket exists already.

      -

      Properties:

      -
        -
      • Config: no_check_bucket
      • -
      • Env Var: RCLONE_GCS_NO_CHECK_BUCKET
      • -
      • Type: bool
      • -
      • Default: false
      • -
      -

      --gcs-decompress

      -

      If set this will decompress gzip encoded objects.

      -

      It is possible to upload objects to GCS with "Content-Encoding: gzip" set. Normally rclone will download these files as compressed objects.

      -

      If this flag is set then rclone will decompress these files with "Content-Encoding: gzip" as they are received. This means that rclone can't check the size and hash but the file contents will be decompressed.

      -

      Properties:

      -
        -
      • Config: decompress
      • -
      • Env Var: RCLONE_GCS_DECOMPRESS
      • -
      • Type: bool
      • -
      • Default: false
      • -
      -

      --gcs-endpoint

      -

      Endpoint for the service.

      -

      Leave blank normally.

      -

      Properties:

      -
        -
      • Config: endpoint
      • -
      • Env Var: RCLONE_GCS_ENDPOINT
      • -
      • Type: string
      • -
      • Required: false
      • -
      -

      --gcs-encoding

      -

      The encoding for the backend.

      -

      See the encoding section in the overview for more info.

      -

      Properties:

      -
        -
      • Config: encoding
      • -
      • Env Var: RCLONE_GCS_ENCODING
      • -
      • Type: MultiEncoder
      • -
      • Default: Slash,CrLf,InvalidUtf8,Dot
      • -
      -

      Limitations

      -

      rclone about is not supported by the Google Cloud Storage backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

      -

      See List of backends that do not support rclone about and rclone about

      -

      Google Drive

      -

      Paths are specified as drive:path

      -

      Drive paths may be as deep as required, e.g. drive:directory/subdirectory.

      -

      Configuration

      -

      The initial setup for drive involves getting a token from Google drive which you need to do in your browser. rclone config walks you through it.

      -

      Here is an example of how to make a remote called remote. First run:

      -
       rclone config
      -

      This will guide you through an interactive setup process:

      -
No remotes found, make a new one?
      -n) New remote
      -r) Rename remote
      -c) Copy remote
      -s) Set configuration password
      -q) Quit config
      -n/r/c/s/q> n
      -name> remote
      -Type of storage to configure.
      -Choose a number from below, or type in your own value
      -[snip]
      -XX / Google Drive
      -   \ "drive"
      -[snip]
      -Storage> drive
      -Google Application Client Id - leave blank normally.
      -client_id>
      -Google Application Client Secret - leave blank normally.
      -client_secret>
      -Scope that rclone should use when requesting access from drive.
      -Choose a number from below, or type in your own value
      - 1 / Full access all files, excluding Application Data Folder.
      -   \ "drive"
      - 2 / Read-only access to file metadata and file contents.
      -   \ "drive.readonly"
      -   / Access to files created by rclone only.
      - 3 | These are visible in the drive website.
      -   | File authorization is revoked when the user deauthorizes the app.
      -   \ "drive.file"
      -   / Allows read and write access to the Application Data folder.
      - 4 | This is not visible in the drive website.
      -   \ "drive.appfolder"
      -   / Allows read-only access to file metadata but
      - 5 | does not allow any access to read or download file content.
      -   \ "drive.metadata.readonly"
      -scope> 1
      -Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
      -service_account_file>
      -Remote config
      -Use web browser to automatically authenticate rclone with remote?
      - * Say Y if the machine running rclone has a web browser you can use
      - * Say N if running rclone on a (remote) machine without web browser access
      -If not sure try Y. If Y failed, try N.
      -y) Yes
      -n) No
      -y/n> y
      -If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
      -Log in and authorize rclone for access
      -Waiting for code...
      -Got code
      -Configure this as a Shared Drive (Team Drive)?
      -y) Yes
      -n) No
      -y/n> n
      ---------------------
      -[remote]
      -client_id = 
      -client_secret = 
      -scope = drive
      -root_folder_id = 
      -service_account_file =
      -token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2014-03-16T13:57:58.955387075Z"}
      ---------------------
      -y) Yes this is OK
      -e) Edit this remote
      -d) Delete this remote
      -y/e/d> y
      -

      See the remote setup docs for how to set it up on a machine with no Internet browser available.

      -

      Note that rclone runs a webserver on your local machine to collect the token as returned from Google if using web browser to automatically authenticate. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.

      -

      You can then use it like this,

      -

      List directories in top level of your drive

      -
      rclone lsd remote:
      -

      List all the files in your drive

      -
      rclone ls remote:
      -

      To copy a local directory to a drive directory called backup

      -
      rclone copy /home/source remote:backup
      -

      Scopes

      -

      Rclone allows you to select which scope you would like for rclone to use. This changes what type of token is granted to rclone. The scopes are defined here.

      -

The scopes are

      -

      drive

      -

      This is the default scope and allows full access to all files, except for the Application Data Folder (see below).

      -

      Choose this one if you aren't sure.

      -

      drive.readonly

      -

      This allows read only access to all files. Files may be listed and downloaded but not uploaded, renamed or deleted.

      -

      drive.file

      -

      With this scope rclone can read/view/modify only those files and folders it creates.

      -

      So if you uploaded files to drive via the web interface (or any other means) they will not be visible to rclone.

      -

      This can be useful if you are using rclone to backup data and you want to be sure confidential data on your drive is not visible to rclone.

      -

      Files created with this scope are visible in the web interface.

      -

      drive.appfolder

      -

      This gives rclone its own private area to store files. Rclone will not be able to see any other files on your drive and you won't be able to see rclone's files from the web interface either.

      -

      drive.metadata.readonly

      -

      This allows read only access to file names only. It does not allow rclone to download or upload data, or rename or delete files or directories.

      -

      Root folder ID

      -

      This option has been moved to the advanced section. You can set the root_folder_id for rclone. This is the directory (identified by its Folder ID) that rclone considers to be the root of your drive.

      -

      Normally you will leave this blank and rclone will determine the correct root to use itself.

      -

      However you can set this to restrict rclone to a specific folder hierarchy or to access data within the "Computers" tab on the drive web interface (where files from Google's Backup and Sync desktop program go).

      -

      In order to do this you will have to find the Folder ID of the directory you wish rclone to display. This will be the last segment of the URL when you open the relevant folder in the drive web interface.

      -

      So if the folder you want rclone to use has a URL which looks like https://drive.google.com/drive/folders/1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh in the browser, then you use 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh as the root_folder_id in the config.
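
The resulting remote definition would then contain something like this sketch (using the placeholder ID from the URL above):

    [remote]
    type = drive
    scope = drive
    root_folder_id = 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh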

      -

      NB folders under the "Computers" tab seem to be read only (drive gives a 500 error) when using rclone.

      -

      There doesn't appear to be an API to discover the folder IDs of the "Computers" tab - please contact us if you know otherwise!

      -

      Note also that rclone can't access any data under the "Backups" tab on the google drive web interface yet.

      -

      Service Account support

      -

      You can set up rclone with Google Drive in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.

      -

      To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt during rclone config and rclone won't use the browser based authentication flow. If you'd rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials with the actual contents of the file instead, or set the equivalent environment variable.

      -

      Use case - Google Apps/G-suite account and individual Drive

      -

Let's say that you are the administrator of a Google Apps (old) or G-suite account. The goal is to store data on the Drive account of an individual who IS a member of the domain. We'll call the domain example.com, and the user foo@example.com.

      -

There are a few steps we need to go through to accomplish this:

      -
      1. Create a service account for example.com
      -
        -
      • To create a service account and obtain its credentials, go to the Google Developer Console.
      • -
      • You must have a project - create one if you don't.
      • -
      • Then go to "IAM & admin" -> "Service Accounts".
      • -
      • Use the "Create Service Account" button. Fill in "Service account name" and "Service account ID" with something that identifies your client.
      • -
      • Select "Create And Continue". Step 2 and 3 are optional.
      • -
      • These credentials are what rclone will use for authentication. If you ever need to remove access, press the "Delete service account key" button.
      • -
      -
      2. Allowing API access to example.com Google Drive
      -
        -
      • Go to example.com's admin console
      • -
      • Go into "Security" (or use the search bar)
      • -
      • Select "Show more" and then "Advanced settings"
      • -
      • Select "Manage API client access" in the "Authentication" section
      • -
      • In the "Client Name" field enter the service account's "Client ID" - this can be found in the Developer Console under "IAM & Admin" -> "Service Accounts", then "View Client ID" for the newly created service account. It is a ~21 character numerical string.
      • -
      • In the next field, "One or More API Scopes", enter https://www.googleapis.com/auth/drive to grant access to Google Drive specifically.
      • -
      -
      3. Configure rclone, assuming a new install
      -
      rclone config
      +Properties:
       
      -n/s/q> n         # New
      -name>gdrive      # Gdrive is an example name
      -Storage>         # Select the number shown for Google Drive
      -client_id>       # Can be left blank
      -client_secret>   # Can be left blank
      -scope>           # Select your scope, 1 for example
      -root_folder_id>  # Can be left blank
      -service_account_file> /home/foo/myJSONfile.json # This is where the JSON file goes!
      -y/n>             # Auto config, n
      +- Config:      permanent_token
      +- Env Var:     RCLONE_FILEFABRIC_PERMANENT_TOKEN
      +- Type:        string
      +- Required:    false
      +
      +### Advanced options
      +
      +Here are the Advanced options specific to filefabric (Enterprise File Fabric).
      +
      +#### --filefabric-token
      +
      +Session Token.
      +
      +This is a session token which rclone caches in the config file. It is
      +usually valid for 1 hour.
      +
      +Don't set this value - rclone will set it automatically.
      +
      +
      +Properties:
      +
      +- Config:      token
      +- Env Var:     RCLONE_FILEFABRIC_TOKEN
      +- Type:        string
      +- Required:    false
      +
      +#### --filefabric-token-expiry
      +
      +Token expiry time.
      +
      +Don't set this value - rclone will set it automatically.
      +
      +
      +Properties:
      +
      +- Config:      token_expiry
      +- Env Var:     RCLONE_FILEFABRIC_TOKEN_EXPIRY
      +- Type:        string
      +- Required:    false
      +
      +#### --filefabric-version
      +
      +Version read from the file fabric.
      +
      +Don't set this value - rclone will set it automatically.
      +
      +
      +Properties:
      +
      +- Config:      version
      +- Env Var:     RCLONE_FILEFABRIC_VERSION
      +- Type:        string
      +- Required:    false
      +
      +#### --filefabric-encoding
      +
      +The encoding for the backend.
      +
      +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
      +
      +Properties:
      +
      +- Config:      encoding
      +- Env Var:     RCLONE_FILEFABRIC_ENCODING
      +- Type:        MultiEncoder
      +- Default:     Slash,Del,Ctl,InvalidUtf8,Dot
      +
      +
      +
      +#  FTP
      +
      +FTP is the File Transfer Protocol. Rclone FTP support is provided using the
      +[github.com/jlaffaye/ftp](https://godoc.org/github.com/jlaffaye/ftp)
      +package.
      +
      +[Limitations of Rclone's FTP backend](#limitations)
      +
      +Paths are specified as `remote:path`. If the path does not begin with
      +a `/` it is relative to the home directory of the user.  An empty path
      +`remote:` refers to the user's home directory.
      +
      +## Configuration
      +
      +To create an FTP configuration named `remote`, run
      +
      +    rclone config
      +
      +Rclone config guides you through an interactive setup process. A minimal
      +rclone FTP remote definition only requires host, username and password.
      +For an anonymous FTP server, see [below](#anonymous-ftp).
       
      -
      4. Verify that it's working
      +

+No remotes found, make a new one?
+n) New remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+n/r/c/s/q> n
+name> remote
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / FTP
+   \ "ftp"
+[snip]
+Storage> ftp
+** See help for ftp backend at: https://rclone.org/ftp/ **

      +

+FTP host to connect to
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+ 1 / Connect to ftp.example.com
+   \ "ftp.example.com"
+host> ftp.example.com
+FTP username
+Enter a string value. Press Enter for the default ("$USER").
+user>
+FTP port number
+Enter a signed integer. Press Enter for the default (21).
+port>
+FTP password
+y) Yes type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Use FTP over TLS (Implicit)
+Enter a boolean value (true or false). Press Enter for the default ("false").
+tls>
+Use FTP over TLS (Explicit)
+Enter a boolean value (true or false). Press Enter for the default ("false").
+explicit_tls>
+Remote config
+--------------------
+[remote]
+type = ftp
+host = ftp.example.com
+pass = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y

      +
      
      +To see all directories in the home directory of `remote`
      +
      +    rclone lsd remote:
      +
      +Make a new directory
      +
      +    rclone mkdir remote:path/to/directory
      +
      +List the contents of a directory
      +
      +    rclone ls remote:path/to/directory
      +
      +Sync `/home/local/directory` to the remote directory, deleting any
      +excess files in the directory.
      +
      +    rclone sync --interactive /home/local/directory remote:directory
      +
      +### Anonymous FTP
      +
      +When connecting to a FTP server that allows anonymous login, you can use the
      +special "anonymous" username. Traditionally, this user account accepts any
      +string as a password, although it is common to use either the password
      +"anonymous" or "guest". Some servers require the use of a valid e-mail
      +address as password.
      +
      +Using [on-the-fly](#backend-path-to-dir) or
      +[connection string](https://rclone.org/docs/#connection-strings) remotes makes it easy to access
      +such servers, without requiring any configuration in advance. The following
      +are examples of that:
      +
      +    rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=$(rclone obscure dummy)
      +    rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=$(rclone obscure dummy):
      +
      +The above examples work in Linux shells and in PowerShell, but not Windows
      +Command Prompt. They execute the [rclone obscure](https://rclone.org/commands/rclone_obscure/)
      +command to create a password string in the format required by the
      +[pass](#ftp-pass) option. The following examples are exactly the same, except use
      +an already obscured string representation of the same password "dummy", and
      +therefore works even in Windows Command Prompt:
      +
      +    rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM
      +    rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM:
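+
+If you connect to the same anonymous server often, the on-the-fly remote can
+equally be stored as a named remote - a sketch using the same test server and
+the obscured password from above (the remote name `ftp-anon` is an example):
+
+    [ftp-anon]
+    type = ftp
+    host = speedtest.tele2.net
+    user = anonymous
+    pass = IXs2wc8OJOz7SYLBk47Ji1rHTmxM
+
+    rclone lsf ftp-anon: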
      +
      +### Implicit TLS
      +
+Rclone FTP supports implicit FTP over TLS servers (FTPS). This has to
      +be enabled in the FTP backend config for the remote, or with
      +[`--ftp-tls`](#ftp-tls). The default FTPS port is `990`, not `21` and
      +can be set with [`--ftp-port`](#ftp-port).
      +
      +### Restricted filename characters
      +
      +In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
      +the following characters are also replaced:
      +
      +File names cannot end with the following characters. Replacement is
      +limited to the last character in a file name:
      +
      +| Character | Value | Replacement |
      +| --------- |:-----:|:-----------:|
      +| SP        | 0x20  | ␠           |
      +
      +Not all FTP servers can have all characters in file names, for example:
      +
      +| FTP Server| Forbidden characters |
      +| --------- |:--------------------:|
      +| proftpd   | `*`                  |
      +| pureftpd  | `\ [ ]`              |
      +
      +This backend's interactive configuration wizard provides a selection of
      +sensible encoding settings for major FTP servers: ProFTPd, PureFTPd, VsFTPd.
      +Just hit a selection number when prompted.
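+
+The same setting can also be given on the command line; for example, for a
+ProFTPd server, using the encoding value listed under
+[`--ftp-encoding`](#ftp-encoding) below:
+
+    rclone lsf remote: --ftp-encoding "Asterisk,Ctl,Dot,Slash"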
      +
      +
      +### Standard options
      +
      +Here are the Standard options specific to ftp (FTP).
      +
      +#### --ftp-host
      +
      +FTP host to connect to.
      +
      +E.g. "ftp.example.com".
      +
      +Properties:
      +
      +- Config:      host
      +- Env Var:     RCLONE_FTP_HOST
      +- Type:        string
      +- Required:    true
      +
      +#### --ftp-user
      +
      +FTP username.
      +
      +Properties:
      +
      +- Config:      user
      +- Env Var:     RCLONE_FTP_USER
      +- Type:        string
      +- Default:     "$USER"
      +
      +#### --ftp-port
      +
      +FTP port number.
      +
      +Properties:
      +
      +- Config:      port
      +- Env Var:     RCLONE_FTP_PORT
      +- Type:        int
      +- Default:     21
      +
      +#### --ftp-pass
      +
      +FTP password.
      +
      +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
      +
      +Properties:
      +
      +- Config:      pass
      +- Env Var:     RCLONE_FTP_PASS
      +- Type:        string
      +- Required:    false
      +
      +#### --ftp-tls
      +
      +Use Implicit FTPS (FTP over TLS).
      +
      +When using implicit FTP over TLS the client connects using TLS
      +right from the start which breaks compatibility with
      +non-TLS-aware servers. This is usually served over port 990 rather
      +than port 21. Cannot be used in combination with explicit FTPS.
      +
      +Properties:
      +
      +- Config:      tls
      +- Env Var:     RCLONE_FTP_TLS
      +- Type:        bool
      +- Default:     false
      +
      +#### --ftp-explicit-tls
      +
      +Use Explicit FTPS (FTP over TLS).
      +
      +When using explicit FTP over TLS the client explicitly requests
      +security from the server in order to upgrade a plain text connection
      +to an encrypted one. Cannot be used in combination with implicit FTPS.
      +
      +Properties:
      +
      +- Config:      explicit_tls
      +- Env Var:     RCLONE_FTP_EXPLICIT_TLS
      +- Type:        bool
      +- Default:     false
      +
      +### Advanced options
      +
      +Here are the Advanced options specific to ftp (FTP).
      +
      +#### --ftp-concurrency
      +
      +Maximum number of FTP simultaneous connections, 0 for unlimited.
      +
      +Note that setting this is very likely to cause deadlocks so it should
      +be used with care.
      +
      +If you are doing a sync or copy then make sure concurrency is one more
      +than the sum of `--transfers` and `--checkers`.
      +
      +If you use `--check-first` then it just needs to be one more than the
      +maximum of `--checkers` and `--transfers`.
      +
      +So for `concurrency 3` you'd use `--checkers 2 --transfers 2
      +--check-first` or `--checkers 1 --transfers 1`.
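+
+For illustration, a sync that keeps to this rule with a connection limit of 3
+might look like this (paths are examples):
+
+    rclone sync --ftp-concurrency 3 --checkers 1 --transfers 1 /home/local/directory remote:directory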
      +
      +
      +
      +Properties:
      +
      +- Config:      concurrency
      +- Env Var:     RCLONE_FTP_CONCURRENCY
      +- Type:        int
      +- Default:     0
      +
      +#### --ftp-no-check-certificate
      +
      +Do not verify the TLS certificate of the server.
      +
      +Properties:
      +
      +- Config:      no_check_certificate
      +- Env Var:     RCLONE_FTP_NO_CHECK_CERTIFICATE
      +- Type:        bool
      +- Default:     false
      +
      +#### --ftp-disable-epsv
      +
      +Disable using EPSV even if server advertises support.
      +
      +Properties:
      +
      +- Config:      disable_epsv
      +- Env Var:     RCLONE_FTP_DISABLE_EPSV
      +- Type:        bool
      +- Default:     false
      +
      +#### --ftp-disable-mlsd
      +
      +Disable using MLSD even if server advertises support.
      +
      +Properties:
      +
      +- Config:      disable_mlsd
      +- Env Var:     RCLONE_FTP_DISABLE_MLSD
      +- Type:        bool
      +- Default:     false
      +
      +#### --ftp-disable-utf8
      +
      +Disable using UTF-8 even if server advertises support.
      +
      +Properties:
      +
      +- Config:      disable_utf8
      +- Env Var:     RCLONE_FTP_DISABLE_UTF8
      +- Type:        bool
      +- Default:     false
      +
      +#### --ftp-writing-mdtm
      +
      +Use MDTM to set modification time (VsFtpd quirk)
      +
      +Properties:
      +
      +- Config:      writing_mdtm
      +- Env Var:     RCLONE_FTP_WRITING_MDTM
      +- Type:        bool
      +- Default:     false
      +
      +#### --ftp-force-list-hidden
      +
      +Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD.
      +
      +Properties:
      +
      +- Config:      force_list_hidden
      +- Env Var:     RCLONE_FTP_FORCE_LIST_HIDDEN
      +- Type:        bool
      +- Default:     false
      +
      +#### --ftp-idle-timeout
      +
      +Max time before closing idle connections.
      +
      +If no connections have been returned to the connection pool in the time
      +given, rclone will empty the connection pool.
      +
      +Set to 0 to keep connections indefinitely.
      +
      +
      +Properties:
      +
      +- Config:      idle_timeout
      +- Env Var:     RCLONE_FTP_IDLE_TIMEOUT
      +- Type:        Duration
      +- Default:     1m0s
      +
      +#### --ftp-close-timeout
      +
      +Maximum time to wait for a response to close.
      +
      +Properties:
      +
      +- Config:      close_timeout
      +- Env Var:     RCLONE_FTP_CLOSE_TIMEOUT
      +- Type:        Duration
      +- Default:     1m0s
      +
      +#### --ftp-tls-cache-size
      +
      +Size of TLS session cache for all control and data connections.
      +
+The TLS cache allows rclone to resume TLS sessions and reuse PSK between
+connections. Increase this if the default size is not enough, resulting in
+TLS resumption errors. Enabled by default. Use 0 to disable.
      +
      +Properties:
      +
      +- Config:      tls_cache_size
      +- Env Var:     RCLONE_FTP_TLS_CACHE_SIZE
      +- Type:        int
      +- Default:     32
      +
      +#### --ftp-disable-tls13
      +
      +Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
      +
      +Properties:
      +
      +- Config:      disable_tls13
      +- Env Var:     RCLONE_FTP_DISABLE_TLS13
      +- Type:        bool
      +- Default:     false
      +
      +#### --ftp-shut-timeout
      +
      +Maximum time to wait for data connection closing status.
      +
      +Properties:
      +
      +- Config:      shut_timeout
      +- Env Var:     RCLONE_FTP_SHUT_TIMEOUT
      +- Type:        Duration
      +- Default:     1m0s
      +
      +#### --ftp-ask-password
      +
      +Allow asking for FTP password when needed.
      +
+If this is set and no password is supplied then rclone will ask for a password.
      +
      +
      +Properties:
      +
      +- Config:      ask_password
      +- Env Var:     RCLONE_FTP_ASK_PASSWORD
      +- Type:        bool
      +- Default:     false
      +
      +#### --ftp-socks-proxy
      +
+Socks 5 proxy host.
+
+Supports the format user:pass@host:port, user@host:port, host:port.
+
+Example:
+
+    myUser:myPass@localhost:9005
      +
      +Properties:
      +
      +- Config:      socks_proxy
      +- Env Var:     RCLONE_FTP_SOCKS_PROXY
      +- Type:        string
      +- Required:    false
      +
      +#### --ftp-encoding
      +
      +The encoding for the backend.
      +
      +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
      +
      +Properties:
      +
      +- Config:      encoding
      +- Env Var:     RCLONE_FTP_ENCODING
      +- Type:        MultiEncoder
      +- Default:     Slash,Del,Ctl,RightSpace,Dot
      +- Examples:
      +    - "Asterisk,Ctl,Dot,Slash"
      +        - ProFTPd can't handle '*' in file names
      +    - "BackSlash,Ctl,Del,Dot,RightSpace,Slash,SquareBracket"
      +        - PureFTPd can't handle '[]' or '*' in file names
      +    - "Ctl,LeftPeriod,Slash"
      +        - VsFTPd can't handle file names starting with dot
      +
      +
      +
      +## Limitations
      +
      +FTP servers acting as rclone remotes must support `passive` mode.
+The mode cannot be configured, as `passive` is the only supported one.
      +Rclone's FTP implementation is not compatible with `active` mode
      +as [the library it uses doesn't support it](https://github.com/jlaffaye/ftp/issues/29).
      +This will likely never be supported due to security concerns.
      +
      +Rclone's FTP backend does not support any checksums but can compare
      +file sizes.
      +
      +`rclone about` is not supported by the FTP backend. Backends without
      +this capability cannot determine free space for an rclone mount or
      +use policy `mfs` (most free space) as a member of an rclone union
      +remote.
      +
      +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
      +
+The implementation of `--dump headers`, `--dump bodies` and
+`--dump auth` for debugging isn't the same as for rclone HTTP based
+backends - it has less fine-grained control.
      +
      +`--timeout` isn't supported (but `--contimeout` is).
      +
      +`--bind` isn't supported.
      +
      +Rclone's FTP backend could support server-side move but does not
      +at present.
      +
      +The `ftp_proxy` environment variable is not currently supported.
      +
      +#### Modified time
      +
      +File modification time (timestamps) is supported to 1 second resolution
      +for major FTP servers: ProFTPd, PureFTPd, VsFTPd, and FileZilla FTP server.
+The `VsFTPd` server has a non-standard implementation of time-related protocol
      +commands and needs a special configuration setting: `writing_mdtm = true`.
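+
+A remote definition for such a server might therefore look like this sketch
+(the remote and host names are examples):
+
+    [vsftpd-remote]
+    type = ftp
+    host = ftp.example.com
+    writing_mdtm = true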
      +
      +Support for precise file time with other FTP servers varies depending on what
+protocol extensions they advertise. If all the `MLSD`, `MDTM` and `MFMT`
      +extensions are present, rclone will use them together to provide precise time.
      +Otherwise the times you see on the FTP server through rclone are those of the
      +last file upload.
      +
      +You can use the following command to check whether rclone can use precise time
      +with your FTP server: `rclone backend features your_ftp_remote:` (the trailing
      +colon is important). Look for the number in the line tagged by `Precision`
      +designating the remote time precision expressed as nanoseconds. A value of
      +`1000000000` means that file time precision of 1 second is available.
      +A value of `3153600000000000000` (or another large number) means "unsupported".
      +
      +#  Google Cloud Storage
      +
      +Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
      +command.)  You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
      +
      +## Configuration
      +
      +The initial setup for google cloud storage involves getting a token from Google Cloud Storage
      +which you need to do in your browser.  `rclone config` walks you
      +through it.
      +
      +Here is an example of how to make a remote called `remote`.  First run:
      +
      +     rclone config
      +
      +This will guide you through an interactive setup process:
      +
      +
        +
+n) New remote
+d) Delete remote
+q) Quit config
+e/n/d/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Google Cloud Storage (this is not Google Drive)
+   \ "google cloud storage"
+[snip]
+Storage> google cloud storage
+Google Application Client Id - leave blank normally.
+client_id>
+Google Application Client Secret - leave blank normally.
+client_secret>
+Project number optional - needed only for list/create/delete buckets - see your developer console.
+project_number> 12345678
+Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
+service_account_file>
+Access Control List for new objects.
+Choose a number from below, or type in your own value
+ 1 / Object owner gets OWNER access, and all Authenticated Users get READER access.
+   \ "authenticatedRead"
+ 2 / Object owner gets OWNER access, and project team owners get OWNER access.
+   \ "bucketOwnerFullControl"
+ 3 / Object owner gets OWNER access, and project team owners get READER access.
+   \ "bucketOwnerRead"
+ 4 / Object owner gets OWNER access [default if left blank].
+   \ "private"
+ 5 / Object owner gets OWNER access, and project team members get access according to their roles.
+   \ "projectPrivate"
+ 6 / Object owner gets OWNER access, and all Users get READER access.
+   \ "publicRead"
+object_acl> 4
+Access Control List for new buckets.
+Choose a number from below, or type in your own value
+ 1 / Project team owners get OWNER access, and all Authenticated Users get READER access.
+   \ "authenticatedRead"
+ 2 / Project team owners get OWNER access [default if left blank].
+   \ "private"
+ 3 / Project team members get access according to their roles.
+   \ "projectPrivate"
+ 4 / Project team owners get OWNER access, and all Users get READER access.
+   \ "publicRead"
+ 5 / Project team owners get OWNER access, and all Users get WRITER access.
+   \ "publicReadWrite"
+bucket_acl> 2
+Location for the newly created buckets.
+Choose a number from below, or type in your own value
+ 1 / Empty for default location (US).
+   \ ""
+ 2 / Multi-regional location for Asia.
+   \ "asia"
+ 3 / Multi-regional location for Europe.
+   \ "eu"
+ 4 / Multi-regional location for United States.
+   \ "us"
+ 5 / Taiwan.
+   \ "asia-east1"
+ 6 / Tokyo.
+   \ "asia-northeast1"
+ 7 / Singapore.
+   \ "asia-southeast1"
+ 8 / Sydney.
+   \ "australia-southeast1"
+ 9 / Belgium.
+   \ "europe-west1"
+10 / London.
+   \ "europe-west2"
+11 / Iowa.
+   \ "us-central1"
+12 / South Carolina.
+   \ "us-east1"
+13 / Northern Virginia.
+   \ "us-east4"
+14 / Oregon.
+   \ "us-west1"
+location> 12
+The storage class to use when storing objects in Google Cloud Storage.
+Choose a number from below, or type in your own value
+ 1 / Default
+   \ ""
+ 2 / Multi-regional storage class
+   \ "MULTI_REGIONAL"
+ 3 / Regional storage class
+   \ "REGIONAL"
+ 4 / Nearline storage class
+   \ "NEARLINE"
+ 5 / Coldline storage class
+   \ "COLDLINE"
+ 6 / Durable reduced availability storage class
+   \ "DURABLE_REDUCED_AVAILABILITY"
+storage_class> 5
+Remote config
+Use web browser to automatically authenticate rclone with remote?
+ * Say Y if the machine running rclone has a web browser you can use
+ * Say N if running rclone on a (remote) machine without web browser access
+If not sure try Y. If Y failed, try N.
-
• rclone -v --drive-impersonate foo@example.com lsf gdrive:backup
• The arguments do:
  • -v - verbose logging
  • --drive-impersonate foo@example.com - this is what does the magic, pretending to be user foo.
  • lsf - list files in a parsing friendly way
  • gdrive:backup - use the remote called gdrive, work in the folder named backup.
      -

Note: in case you configured a specific root folder on gdrive and rclone is unable to access the contents of that folder when using --drive-impersonate, do this instead:
  • in the gdrive web interface, share your root folder with the user/email of the new Service Account you created/selected at step #1
  • use rclone without specifying the --drive-impersonate option, like this: rclone -v lsf gdrive:backup

      -

      Shared drives (team drives)

      -

      If you want to configure the remote to point to a Google Shared Drive (previously known as Team Drives) then answer y to the question Configure this as a Shared Drive (Team Drive)?.

      -

      This will fetch the list of Shared Drives from google and allow you to configure which one you want to use. You can also type in a Shared Drive ID if you prefer.

      -

      For example:

      -
      Configure this as a Shared Drive (Team Drive)?
      -y) Yes
      -n) No
      -y/n> y
      -Fetching Shared Drive list...
      -Choose a number from below, or type in your own value
      - 1 / Rclone Test
      -   \ "xxxxxxxxxxxxxxxxxxxx"
      - 2 / Rclone Test 2
      -   \ "yyyyyyyyyyyyyyyyyyyy"
      - 3 / Rclone Test 3
      -   \ "zzzzzzzzzzzzzzzzzzzz"
      -Enter a Shared Drive ID> 1
      ---------------------
      -[remote]
      -client_id =
      -client_secret =
      -token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
      -team_drive = xxxxxxxxxxxxxxxxxxxx
      ---------------------
      -y) Yes this is OK
      -e) Edit this remote
      -d) Delete this remote
      -y/e/d> y
      -

      --fast-list

      -

      This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

      -

      It does this by combining multiple list calls into a single API request.

      -

      This works by combining many '%s' in parents filters into one expression. To list the contents of directories a, b and c, the following requests will be send by the regular List function:

      -
      trashed=false and 'a' in parents
      -trashed=false and 'b' in parents
      -trashed=false and 'c' in parents
      -

      These can now be combined into a single request:

      -
      trashed=false and ('a' in parents or 'b' in parents or 'c' in parents)
      -

      The implementation of ListR will put up to 50 parents filters into one request. It will use the --checkers value to specify the number of requests to run in parallel.

      -

      In tests, these batch requests were up to 20x faster than the regular method. Running the following command against different sized folders gives:

      -
      rclone lsjson -vv -R --checkers=6 gdrive:folder
      -

      small folder (220 directories, 700 files):

      -
        -
      • without --fast-list: 38s
      • -
      • with --fast-list: 10s
      • -
      -

      large folder (10600 directories, 39000 files):

      -
        -
      • without --fast-list: 22:05 min
      • -
      • with --fast-list: 58s
      • -
      -

      Modified time

      -

      Google drive stores modification times accurate to 1 ms.

      -

      Restricted filename characters

      -

      Only Invalid UTF-8 bytes will be replaced, as they can't be used in JSON strings.

      -

      In contrast to other backends, / can also be used in names and . or .. are valid names.

      -

      Revisions

      -

      Google drive stores revisions of files. When you upload a change to an existing file to google drive using rclone it will create a new revision of that file.

      -

      Revisions follow the standard google policy which at time of writing was

      -
        -
      • They are deleted after 30 days or 100 revisions (whatever comes first).
      • -
      • They do not count towards a user storage quota.
      • -
      -

      Deleting files

      -

      By default rclone will send all files to the trash when deleting files. If deleting them permanently is required then use the --drive-use-trash=false flag, or set the equivalent environment variable.

      -

      Shortcuts

      -

      In March 2020 Google introduced a new feature in Google Drive called drive shortcuts (API). These will (by September 2020) replace the ability for files or folders to be in multiple folders at once.

      -

      Shortcuts are files that link to other files on Google Drive somewhat like a symlink in unix, except they point to the underlying file data (e.g. the inode in unix terms) so they don't break if the source is renamed or moved about.

      -

      By default rclone treats these as follows.

      -

      For shortcuts pointing to files:

      -
        -
      • When listing a file shortcut appears as the destination file.
      • -
      • When downloading the contents of the destination file is downloaded.
      • -
      • When updating shortcut file with a non shortcut file, the shortcut is removed then a new file is uploaded in place of the shortcut.
      • -
      • When server-side moving (renaming) the shortcut is renamed, not the destination file.
      • -
      • When server-side copying the shortcut is copied, not the contents of the shortcut. (unless --drive-copy-shortcut-content is in use in which case the contents of the shortcut gets copied).
      • -
      • When deleting the shortcut is deleted not the linked file.
      • -
      • When setting the modification time, the modification time of the linked file will be set.
      • -
      -

      For shortcuts pointing to folders:

      -
        -
      • When listing the shortcut appears as a folder and that folder will contain the contents of the linked folder appear (including any sub folders)
      • -
      • When downloading the contents of the linked folder and sub contents are downloaded
      • -
      • When uploading to a shortcut folder the file will be placed in the linked folder
      • -
      • When server-side moving (renaming) the shortcut is renamed, not the destination folder
      • -
      • When server-side copying the contents of the linked folder is copied, not the shortcut.
      • -
      • When deleting with rclone rmdir or rclone purge the shortcut is deleted not the linked folder.
      • -
      • NB When deleting with rclone remove or rclone mount the contents of the linked folder will be deleted.
      • -
      -

      The rclone backend command can be used to create shortcuts.

      -

      Shortcuts can be completely ignored with the --drive-skip-shortcuts flag or the corresponding skip_shortcuts configuration setting.

      -

      Emptying trash

      -

      If you wish to empty your trash you can use the rclone cleanup remote: command which will permanently delete all your trashed files. This command does not take any path arguments.

      -

      Note that Google Drive takes some time (minutes to days) to empty the trash even though the command returns within a few seconds. No output is echoed, so there will be no confirmation even using -v or -vv.

      -

      Quota information

      -

      To view your current quota you can use the rclone about remote: command which will display your usage limit (quota), the usage in Google Drive, the size of all files in the Trash and the space used by other Google services such as Gmail. This command does not take any path arguments.

      -

      Import/Export of google documents

      -

      Google documents can be exported from and uploaded to Google Drive.

      -

      When rclone downloads a Google doc it chooses a format to download depending upon the --drive-export-formats setting. By default the export formats are docx,xlsx,pptx,svg which are a sensible default for an editable document.

      -

      When choosing a format, rclone runs down the list provided in order and chooses the first file format the doc can be exported as from the list. If the file can't be exported to a format on the formats list, then rclone will choose a format from the default list.

      -

      If you prefer an archive copy then you might use --drive-export-formats pdf, or if you prefer openoffice/libreoffice formats you might use --drive-export-formats ods,odt,odp.

      -

      Note that rclone adds the extension to the google doc, so if it is called My Spreadsheet on google docs, it will be exported as My Spreadsheet.xlsx or My Spreadsheet.pdf etc.

      -

      When importing files into Google Drive, rclone will convert all files with an extension in --drive-import-formats to their associated document type. rclone will not convert any files by default, since the conversion is lossy process.

      -

      The conversion must result in a file with the same extension when the --drive-export-formats rules are applied to the uploaded document.

      -

      Here are some examples for allowed and prohibited conversions.

      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      export-formatsimport-formatsUpload ExtDocument ExtAllowed
      odtodtodtodtYes
      odtdocx,odtodtodtYes
      docxdocxdocxYes
      odtodtdocxNo
      odt,docxdocx,odtdocxodtNo
      docx,odtdocx,odtdocxdocxYes
      docx,odtdocx,odtodtdocxNo
      -

      This limitation can be disabled by specifying --drive-allow-import-name-change. When using this flag, rclone can convert multiple files types resulting in the same document type at once, e.g. with --drive-import-formats docx,odt,txt, all files having these extension would result in a document represented as a docx file. This brings the additional risk of overwriting a document, if multiple files have the same stem. Many rclone operations will not handle this name change in any way. They assume an equal name when copying files and might copy the file again or delete them when the name changes.

      -

      Here are the possible export extensions with their corresponding mime types. Most of these can also be used for importing, but there more that are not listed here. Some of these additional ones might only be available when the operating system provides the correct MIME type entries.

      -

      This list can be changed by Google Drive at any time and might not represent the currently available conversions.

    y) Yes
    n) No
    y/n> y
    If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    Log in and authorize rclone for access
    Waiting for code...
    Got code
    --------------------
    [remote]
    type = google cloud storage
    client_id =
    client_secret =
    token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null}
    project_number = 12345678
    object_acl = private
    bucket_acl = private
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y
      
      +See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
      +machine with no Internet browser available.
      +
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from Google if using the web browser to automatically
+authenticate. This only
+runs from the moment it opens your browser to the moment you get back
+the verification code.  This is on `http://127.0.0.1:53682/` and it
+may require you to unblock it temporarily if you are running a host
+firewall, or use manual mode.
      +
      +This remote is called `remote` and can now be used like this
      +
      +See all the buckets in your project
      +
      +    rclone lsd remote:
      +
      +Make a new bucket
      +
      +    rclone mkdir remote:bucket
      +
      +List the contents of a bucket
      +
      +    rclone ls remote:bucket
      +
      +Sync `/home/local/directory` to the remote bucket, deleting any excess
      +files in the bucket.
      +
      +    rclone sync --interactive /home/local/directory remote:bucket
      +
      +### Service Account support
      +
      +You can set up rclone with Google Cloud Storage in an unattended mode,
      +i.e. not tied to a specific end-user Google account. This is useful
      +when you want to synchronise files onto machines that don't have
      +actively logged-in users, for example build machines.
      +
      +To get credentials for Google Cloud Platform
      +[IAM Service Accounts](https://cloud.google.com/iam/docs/service-accounts),
      +please head to the
      +[Service Account](https://console.cloud.google.com/permissions/serviceaccounts)
      +section of the Google Developer Console. Service Accounts behave just
      +like normal `User` permissions in
      +[Google Cloud Storage ACLs](https://cloud.google.com/storage/docs/access-control),
      +so you can limit their access (e.g. make them read only). After
      +creating an account, a JSON file containing the Service Account's
      +credentials will be downloaded onto your machines. These credentials
      +are what rclone will use for authentication.
      +
      +To use a Service Account instead of OAuth2 token flow, enter the path
      +to your Service Account credentials at the `service_account_file`
      +prompt and rclone won't use the browser based authentication
      +flow. If you'd rather stuff the contents of the credentials file into
      +the rclone config file, you can set `service_account_credentials` with
      +the actual contents of the file instead, or set the equivalent
      +environment variable.
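+
+As a minimal sketch, a service-account based remote in `rclone.conf`
+might look like this (the remote name and file path are illustrative):
+
+    [gcs]
+    type = google cloud storage
+    project_number = 12345678
+    service_account_file = /path/to/service-account-credentials.json
+    object_acl = private
+    bucket_acl = private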
      +
      +### Anonymous Access
      +
+For downloads of objects that permit public access you can configure rclone
+to use anonymous access by setting `anonymous` to `true`.
+With anonymous access you can't write or create files; you can only read or list
+those buckets and objects that have public read access.
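+
+For example, a read-only anonymous remote might be configured like this
+(the remote and bucket names are illustrative):
+
+    [gcs-anon]
+    type = google cloud storage
+    anonymous = true
+
+which can then list a publicly readable bucket:
+
+    rclone ls gcs-anon:some-public-bucket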
      +
      +### Application Default Credentials
      +
+If no other source of credentials is provided, rclone will fall back
+to
+[Application Default Credentials](https://cloud.google.com/video-intelligence/docs/common/auth#authenticating_with_application_default_credentials).
+This is useful both when you have already configured authentication
+for your developer account, and in production when running on a google
+compute host. Note that if running in docker, you may need to run
+additional commands on your google compute machine -
+[see this page](https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud_as_a_docker_credential_helper).
      +
      +Note that in the case application default credentials are used, there
      +is no need to explicitly configure a project number.
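+
+For example, on a Google Compute Engine instance (or after running
+`gcloud auth application-default login`), a sketch of a remote with no
+explicit credentials at all:
+
+    [gcs-adc]
+    type = google cloud storage
+
+Listing buckets then picks up credentials from the environment:
+
+    rclone lsd gcs-adc: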
      +
      +### --fast-list
      +
      +This remote supports `--fast-list` which allows you to use fewer
      +transactions in exchange for more memory. See the [rclone
      +docs](https://rclone.org/docs/#fast-list) for more details.
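+
+For example, to size a large bucket with fewer transactions (the bucket
+name is illustrative):
+
+    rclone size --fast-list remote:bucket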
      +
      +### Custom upload headers
      +
      +You can set custom upload headers with the `--header-upload`
      +flag. Google Cloud Storage supports the headers as described in the
      +[working with metadata documentation](https://cloud.google.com/storage/docs/gsutil/addlhelp/WorkingWithObjectMetadata)
      +
      +- Cache-Control
      +- Content-Disposition
      +- Content-Encoding
      +- Content-Language
      +- Content-Type
      +- X-Goog-Storage-Class
      +- X-Goog-Meta-
      +
+Eg `--header-upload "Content-Type: text/potato"`
      +
      +Note that the last of these is for setting custom metadata in the form
      +`--header-upload "x-goog-meta-key: value"`
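+
+For example, an illustrative copy setting a cache header and a custom
+metadata entry (the file path and values are made up):
+
+    rclone copy /path/to/file remote:bucket --header-upload "Cache-Control: no-cache" --header-upload "X-Goog-Meta-Source: rclone"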
      +
      +### Modification time
      +
      +Google Cloud Storage stores md5sum natively.
      +Google's [gsutil](https://cloud.google.com/storage/docs/gsutil) tool stores modification time
      +with one-second precision as `goog-reserved-file-mtime` in file metadata.
      +
      +To ensure compatibility with gsutil, rclone stores modification time in 2 separate metadata entries.
      +`mtime` uses RFC3339 format with one-nanosecond precision.
      +`goog-reserved-file-mtime` uses Unix timestamp format with one-second precision.
      +To get modification time from object metadata, rclone reads the metadata in the following order: `mtime`, `goog-reserved-file-mtime`, object updated time.
      +
      +Note that rclone's default modify window is 1ns.
      +Files uploaded by gsutil only contain timestamps with one-second precision.
      +If you use rclone to sync files previously uploaded by gsutil,
      +rclone will attempt to update modification time for all these files.
      +To avoid these possibly unnecessary updates, use `--modify-window 1s`.
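+
+For example, a sync over files originally uploaded by gsutil might use
+(paths are illustrative):
+
+    rclone sync --interactive --modify-window 1s /home/local/directory remote:bucket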
      +
      +### Restricted filename characters
      +
      +| Character | Value | Replacement |
      +| --------- |:-----:|:-----------:|
      +| NUL       | 0x00  | ␀           |
      +| LF        | 0x0A  | ␊           |
      +| CR        | 0x0D  | ␍           |
+| /         | 0x2F  | ／          |
      +
      +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
      +as they can't be used in JSON strings.
      +
      +
      +### Standard options
      +
      +Here are the Standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
      +
      +#### --gcs-client-id
      +
      +OAuth Client Id.
      +
      +Leave blank normally.
      +
      +Properties:
      +
      +- Config:      client_id
      +- Env Var:     RCLONE_GCS_CLIENT_ID
      +- Type:        string
      +- Required:    false
      +
      +#### --gcs-client-secret
      +
      +OAuth Client Secret.
      +
      +Leave blank normally.
      +
      +Properties:
      +
      +- Config:      client_secret
      +- Env Var:     RCLONE_GCS_CLIENT_SECRET
      +- Type:        string
      +- Required:    false
      +
      +#### --gcs-project-number
      +
      +Project number.
      +
      +Optional - needed only for list/create/delete buckets - see your developer console.
      +
      +Properties:
      +
      +- Config:      project_number
      +- Env Var:     RCLONE_GCS_PROJECT_NUMBER
      +- Type:        string
      +- Required:    false
      +
      +#### --gcs-user-project
      +
      +User project.
      +
      +Optional - needed only for requester pays.
      +
      +Properties:
      +
      +- Config:      user_project
      +- Env Var:     RCLONE_GCS_USER_PROJECT
      +- Type:        string
      +- Required:    false
      +
      +#### --gcs-service-account-file
      +
      +Service Account Credentials JSON file path.
      +
      +Leave blank normally.
+Needed only if you want to use SA instead of interactive login.
      +
      +Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
      +
      +Properties:
      +
      +- Config:      service_account_file
      +- Env Var:     RCLONE_GCS_SERVICE_ACCOUNT_FILE
      +- Type:        string
      +- Required:    false
      +
      +#### --gcs-service-account-credentials
      +
      +Service Account Credentials JSON blob.
      +
      +Leave blank normally.
+Needed only if you want to use SA instead of interactive login.
      +
      +Properties:
      +
      +- Config:      service_account_credentials
      +- Env Var:     RCLONE_GCS_SERVICE_ACCOUNT_CREDENTIALS
      +- Type:        string
      +- Required:    false
      +
      +#### --gcs-anonymous
      +
      +Access public buckets and objects without credentials.
      +
      +Set to 'true' if you just want to download files and don't configure credentials.
      +
      +Properties:
      +
      +- Config:      anonymous
      +- Env Var:     RCLONE_GCS_ANONYMOUS
      +- Type:        bool
      +- Default:     false
      +
      +#### --gcs-object-acl
      +
      +Access Control List for new objects.
      +
      +Properties:
      +
      +- Config:      object_acl
      +- Env Var:     RCLONE_GCS_OBJECT_ACL
      +- Type:        string
      +- Required:    false
      +- Examples:
      +    - "authenticatedRead"
      +        - Object owner gets OWNER access.
      +        - All Authenticated Users get READER access.
      +    - "bucketOwnerFullControl"
      +        - Object owner gets OWNER access.
      +        - Project team owners get OWNER access.
      +    - "bucketOwnerRead"
      +        - Object owner gets OWNER access.
      +        - Project team owners get READER access.
      +    - "private"
      +        - Object owner gets OWNER access.
      +        - Default if left blank.
      +    - "projectPrivate"
      +        - Object owner gets OWNER access.
      +        - Project team members get access according to their roles.
      +    - "publicRead"
      +        - Object owner gets OWNER access.
      +        - All Users get READER access.
      +
      +#### --gcs-bucket-acl
      +
      +Access Control List for new buckets.
      +
      +Properties:
      +
      +- Config:      bucket_acl
      +- Env Var:     RCLONE_GCS_BUCKET_ACL
      +- Type:        string
      +- Required:    false
      +- Examples:
      +    - "authenticatedRead"
      +        - Project team owners get OWNER access.
      +        - All Authenticated Users get READER access.
      +    - "private"
      +        - Project team owners get OWNER access.
      +        - Default if left blank.
      +    - "projectPrivate"
      +        - Project team members get access according to their roles.
      +    - "publicRead"
      +        - Project team owners get OWNER access.
      +        - All Users get READER access.
      +    - "publicReadWrite"
      +        - Project team owners get OWNER access.
      +        - All Users get WRITER access.
      +
      +#### --gcs-bucket-policy-only
      +
      +Access checks should use bucket-level IAM policies.
      +
      +If you want to upload objects to a bucket with Bucket Policy Only set
      +then you will need to set this.
      +
      +When it is set, rclone:
      +
      +- ignores ACLs set on buckets
      +- ignores ACLs set on objects
      +- creates buckets with Bucket Policy Only set
      +
      +Docs: https://cloud.google.com/storage/docs/bucket-policy-only
      +
      +
      +Properties:
      +
      +- Config:      bucket_policy_only
      +- Env Var:     RCLONE_GCS_BUCKET_POLICY_ONLY
      +- Type:        bool
      +- Default:     false
      +
      +#### --gcs-location
      +
      +Location for the newly created buckets.
      +
      +Properties:
      +
      +- Config:      location
      +- Env Var:     RCLONE_GCS_LOCATION
      +- Type:        string
      +- Required:    false
      +- Examples:
      +    - ""
      +        - Empty for default location (US)
      +    - "asia"
      +        - Multi-regional location for Asia
      +    - "eu"
      +        - Multi-regional location for Europe
      +    - "us"
      +        - Multi-regional location for United States
      +    - "asia-east1"
      +        - Taiwan
      +    - "asia-east2"
      +        - Hong Kong
      +    - "asia-northeast1"
      +        - Tokyo
      +    - "asia-northeast2"
      +        - Osaka
      +    - "asia-northeast3"
      +        - Seoul
      +    - "asia-south1"
      +        - Mumbai
      +    - "asia-south2"
      +        - Delhi
      +    - "asia-southeast1"
      +        - Singapore
      +    - "asia-southeast2"
      +        - Jakarta
      +    - "australia-southeast1"
      +        - Sydney
      +    - "australia-southeast2"
      +        - Melbourne
      +    - "europe-north1"
      +        - Finland
      +    - "europe-west1"
      +        - Belgium
      +    - "europe-west2"
      +        - London
      +    - "europe-west3"
      +        - Frankfurt
      +    - "europe-west4"
      +        - Netherlands
      +    - "europe-west6"
      +        - Zürich
      +    - "europe-central2"
      +        - Warsaw
      +    - "us-central1"
      +        - Iowa
      +    - "us-east1"
      +        - South Carolina
      +    - "us-east4"
      +        - Northern Virginia
      +    - "us-west1"
      +        - Oregon
      +    - "us-west2"
      +        - California
      +    - "us-west3"
      +        - Salt Lake City
      +    - "us-west4"
      +        - Las Vegas
      +    - "northamerica-northeast1"
      +        - Montréal
      +    - "northamerica-northeast2"
      +        - Toronto
      +    - "southamerica-east1"
      +        - São Paulo
      +    - "southamerica-west1"
      +        - Santiago
      +    - "asia1"
      +        - Dual region: asia-northeast1 and asia-northeast2.
      +    - "eur4"
      +        - Dual region: europe-north1 and europe-west4.
      +    - "nam4"
      +        - Dual region: us-central1 and us-east1.
      +
      +#### --gcs-storage-class
      +
      +The storage class to use when storing objects in Google Cloud Storage.
      +
      +Properties:
      +
      +- Config:      storage_class
      +- Env Var:     RCLONE_GCS_STORAGE_CLASS
      +- Type:        string
      +- Required:    false
      +- Examples:
      +    - ""
      +        - Default
      +    - "MULTI_REGIONAL"
      +        - Multi-regional storage class
      +    - "REGIONAL"
      +        - Regional storage class
      +    - "NEARLINE"
      +        - Nearline storage class
      +    - "COLDLINE"
      +        - Coldline storage class
      +    - "ARCHIVE"
      +        - Archive storage class
      +    - "DURABLE_REDUCED_AVAILABILITY"
      +        - Durable reduced availability storage class
      +
      +#### --gcs-env-auth
      +
+Get GCP IAM credentials from runtime (environment variables or instance metadata if no env vars).
+
+Only applies if service_account_file and service_account_credentials are blank.
      +
      +Properties:
      +
      +- Config:      env_auth
      +- Env Var:     RCLONE_GCS_ENV_AUTH
      +- Type:        bool
      +- Default:     false
      +- Examples:
      +    - "false"
      +        - Enter credentials in the next step.
      +    - "true"
      +        - Get GCP IAM credentials from the environment (env vars or IAM).
      +
      +### Advanced options
      +
      +Here are the Advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
      +
      +#### --gcs-token
      +
      +OAuth Access Token as a JSON blob.
      +
      +Properties:
      +
      +- Config:      token
      +- Env Var:     RCLONE_GCS_TOKEN
      +- Type:        string
      +- Required:    false
      +
      +#### --gcs-auth-url
      +
      +Auth server URL.
      +
      +Leave blank to use the provider defaults.
      +
      +Properties:
      +
      +- Config:      auth_url
      +- Env Var:     RCLONE_GCS_AUTH_URL
      +- Type:        string
      +- Required:    false
      +
      +#### --gcs-token-url
      +
+Token server URL.
      +
      +Leave blank to use the provider defaults.
      +
      +Properties:
      +
      +- Config:      token_url
      +- Env Var:     RCLONE_GCS_TOKEN_URL
      +- Type:        string
      +- Required:    false
      +
      +#### --gcs-directory-markers
      +
+Upload an empty object with a trailing slash when a new directory is created.
+
+Empty folders are unsupported for bucket-based remotes; this option creates an empty
+object ending with "/" to persist the folder.
      +
      +
      +Properties:
      +
      +- Config:      directory_markers
      +- Env Var:     RCLONE_GCS_DIRECTORY_MARKERS
      +- Type:        bool
      +- Default:     false
      +
      +#### --gcs-no-check-bucket
      +
      +If set, don't attempt to check the bucket exists or create it.
      +
      +This can be useful when trying to minimise the number of transactions
      +rclone does if you know the bucket exists already.
      +
      +
      +Properties:
      +
      +- Config:      no_check_bucket
      +- Env Var:     RCLONE_GCS_NO_CHECK_BUCKET
      +- Type:        bool
      +- Default:     false
      +
      +#### --gcs-decompress
      +
      +If set this will decompress gzip encoded objects.
      +
      +It is possible to upload objects to GCS with "Content-Encoding: gzip"
      +set. Normally rclone will download these files as compressed objects.
      +
      +If this flag is set then rclone will decompress these files with
      +"Content-Encoding: gzip" as they are received. This means that rclone
      +can't check the size and hash but the file contents will be decompressed.
      +
      +
      +Properties:
      +
      +- Config:      decompress
      +- Env Var:     RCLONE_GCS_DECOMPRESS
      +- Type:        bool
      +- Default:     false
      +
      +#### --gcs-endpoint
      +
      +Endpoint for the service.
      +
      +Leave blank normally.
      +
      +Properties:
      +
      +- Config:      endpoint
      +- Env Var:     RCLONE_GCS_ENDPOINT
      +- Type:        string
      +- Required:    false
      +
      +#### --gcs-encoding
      +
      +The encoding for the backend.
      +
      +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
      +
      +Properties:
      +
      +- Config:      encoding
      +- Env Var:     RCLONE_GCS_ENCODING
      +- Type:        MultiEncoder
      +- Default:     Slash,CrLf,InvalidUtf8,Dot
      +
      +
      +
      +## Limitations
      +
      +`rclone about` is not supported by the Google Cloud Storage backend. Backends without
      +this capability cannot determine free space for an rclone mount or
      +use policy `mfs` (most free space) as a member of an rclone union
      +remote.
      +
      +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
      +
      +#  Google Drive
      +
      +Paths are specified as `drive:path`
      +
      +Drive paths may be as deep as required, e.g. `drive:directory/subdirectory`.
      +
      +## Configuration
      +
      +The initial setup for drive involves getting a token from Google drive
      +which you need to do in your browser.  `rclone config` walks you
      +through it.
      +
      +Here is an example of how to make a remote called `remote`.  First run:
      +
      +     rclone config
      +
      +This will guide you through an interactive setup process:
      +
      +

    No remotes found, make a new one?
    n) New remote
    r) Rename remote
    c) Copy remote
    s) Set configuration password
    q) Quit config
    n/r/c/s/q> n
    name> remote
    Type of storage to configure.
    Choose a number from below, or type in your own value
    [snip]
    XX / Google Drive
       \ "drive"
    [snip]
    Storage> drive
    Google Application Client Id - leave blank normally.
    client_id>
    Google Application Client Secret - leave blank normally.
    client_secret>
    Scope that rclone should use when requesting access from drive.
    Choose a number from below, or type in your own value
     1 / Full access all files, excluding Application Data Folder.
       \ "drive"
     2 / Read-only access to file metadata and file contents.
       \ "drive.readonly"
       / Access to files created by rclone only.
     3 | These are visible in the drive website.
       | File authorization is revoked when the user deauthorizes the app.
       \ "drive.file"
       / Allows read and write access to the Application Data folder.
     4 | This is not visible in the drive website.
       \ "drive.appfolder"
       / Allows read-only access to file metadata but
     5 | does not allow any access to read or download file content.
       \ "drive.metadata.readonly"
    scope> 1
    Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
    service_account_file>
    Remote config
    Use web browser to automatically authenticate rclone with remote?
     * Say Y if the machine running rclone has a web browser you can use
     * Say N if running rclone on a (remote) machine without web browser access
    If not sure try Y. If Y failed, try N.
    y) Yes
    n) No
    y/n> y
    If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    Log in and authorize rclone for access
    Waiting for code...
    Got code
    Configure this as a Shared Drive (Team Drive)?
    y) Yes
    n) No
    y/n> n
    --------------------
    [remote]
    client_id =
    client_secret =
    scope = drive
    root_folder_id =
    service_account_file =
    token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2014-03-16T13:57:58.955387075Z"}
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y

      +
      
      +See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
      +machine with no Internet browser available.
      +
      +Note that rclone runs a webserver on your local machine to collect the
+token as returned from Google if using the web browser to automatically
      +authenticate. This only
      +runs from the moment it opens your browser to the moment you get back
      +the verification code.  This is on `http://127.0.0.1:53682/` and it
      +may require you to unblock it temporarily if you are running a host
      +firewall, or use manual mode.
      +
      +You can then use it like this,
      +
      +List directories in top level of your drive
      +
      +    rclone lsd remote:
      +
      +List all the files in your drive
      +
      +    rclone ls remote:
      +
      +To copy a local directory to a drive directory called backup
      +
      +    rclone copy /home/source remote:backup
      +
      +### Scopes
      +
      +Rclone allows you to select which scope you would like for rclone to
      +use.  This changes what type of token is granted to rclone.  [The
      +scopes are defined
      +here](https://developers.google.com/drive/v3/web/about-auth).
      +
+The scopes are
      +
      +#### drive
      +
      +This is the default scope and allows full access to all files, except
      +for the Application Data Folder (see below).
      +
      +Choose this one if you aren't sure.
      +
      +#### drive.readonly
      +
      +This allows read only access to all files.  Files may be listed and
      +downloaded but not uploaded, renamed or deleted.
      +
      +#### drive.file
      +
      +With this scope rclone can read/view/modify only those files and
      +folders it creates.
      +
      +So if you uploaded files to drive via the web interface (or any other
      +means) they will not be visible to rclone.
      +
      +This can be useful if you are using rclone to backup data and you want
      +to be sure confidential data on your drive is not visible to rclone.
      +
      +Files created with this scope are visible in the web interface.
      +
      +#### drive.appfolder
      +
      +This gives rclone its own private area to store files.  Rclone will
      +not be able to see any other files on your drive and you won't be able
      +to see rclone's files from the web interface either.
      +
      +#### drive.metadata.readonly
      +
      +This allows read only access to file names only.  It does not allow
      +rclone to download or upload data, or rename or delete files or
      +directories.
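+
+Whichever scope you pick ends up in the config file; as a sketch (the
+remote name is illustrative):
+
+    [gdrive]
+    type = drive
+    scope = drive.readonly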
      +
      +### Root folder ID
      +
      +This option has been moved to the advanced section. You can set the `root_folder_id` for rclone.  This is the directory
      +(identified by its `Folder ID`) that rclone considers to be the root
      +of your drive.
      +
      +Normally you will leave this blank and rclone will determine the
      +correct root to use itself.
      +
      +However you can set this to restrict rclone to a specific folder
      +hierarchy or to access data within the "Computers" tab on the drive
      +web interface (where files from Google's Backup and Sync desktop
      +program go).
      +
      +In order to do this you will have to find the `Folder ID` of the
      +directory you wish rclone to display.  This will be the last segment
      +of the URL when you open the relevant folder in the drive web
      +interface.
      +
      +So if the folder you want rclone to use has a URL which looks like
      +`https://drive.google.com/drive/folders/1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh`
      +in the browser, then you use `1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh` as
      +the `root_folder_id` in the config.
      +
      +**NB** folders under the "Computers" tab seem to be read only (drive
      +gives a 500 error) when using rclone.
      +
      +There doesn't appear to be an API to discover the folder IDs of the
      +"Computers" tab - please contact us if you know otherwise!
      +
      +Note also that rclone can't access any data under the "Backups" tab on
      +the google drive web interface yet.
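+
+Using the example URL above, a sketch of the resulting config (the
+remote name is illustrative):
+
+    [gdrive]
+    type = drive
+    scope = drive
+    root_folder_id = 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh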
      +
      +### Service Account support
      +
      +You can set up rclone with Google Drive in an unattended mode,
      +i.e. not tied to a specific end-user Google account. This is useful
      +when you want to synchronise files onto machines that don't have
      +actively logged-in users, for example build machines.
      +
      +To use a Service Account instead of OAuth2 token flow, enter the path
      +to your Service Account credentials at the `service_account_file`
      +prompt during `rclone config` and rclone won't use the browser based
      +authentication flow. If you'd rather stuff the contents of the
      +credentials file into the rclone config file, you can set
      +`service_account_credentials` with the actual contents of the file
      +instead, or set the equivalent environment variable.
      +
      +#### Use case - Google Apps/G-suite account and individual Drive
      +
      +Let's say that you are the administrator of a Google Apps (old) or
      +G-suite account.
+The goal is to store data on the Drive account of an individual user
+who IS a member of the domain.
      +We'll call the domain **example.com**, and the user
      +**foo@example.com**.
      +
+There are a few steps we need to go through to accomplish this:
      +
      +##### 1. Create a service account for example.com
      +  - To create a service account and obtain its credentials, go to the
      +[Google Developer Console](https://console.developers.google.com).
      +  - You must have a project - create one if you don't.
      +  - Then go to "IAM & admin" -> "Service Accounts".
      +  - Use the "Create Service Account" button. Fill in "Service account name"
      +and "Service account ID" with something that identifies your client.
      +  - Select "Create And Continue". Step 2 and 3 are optional.
      +  - These credentials are what rclone will use for authentication.
      +If you ever need to remove access, press the "Delete service
      +account key" button.
      +
      +##### 2. Allowing API access to example.com Google Drive
      +  - Go to example.com's admin console
      +  - Go into "Security" (or use the search bar)
      +  - Select "Show more" and then "Advanced settings"
      +  - Select "Manage API client access" in the "Authentication" section
      +  - In the "Client Name" field enter the service account's
      +"Client ID" - this can be found in the Developer Console under
      +"IAM & Admin" -> "Service Accounts", then "View Client ID" for
      +the newly created service account.
      +It is a ~21 character numerical string.
      +  - In the next field, "One or More API Scopes", enter
      +`https://www.googleapis.com/auth/drive`
      +to grant access to Google Drive specifically.
      +
      +##### 3. Configure rclone, assuming a new install
      +
      +

      rclone config

      +

    n/s/q> n         # New
    name>gdrive      # Gdrive is an example name
    Storage>         # Select the number shown for Google Drive
    client_id>       # Can be left blank
    client_secret>   # Can be left blank
    scope>           # Select your scope, 1 for example
    root_folder_id>  # Can be left blank
    service_account_file> /home/foo/myJSONfile.json # This is where the JSON file goes!
    y/n>             # Auto config, n

      +
      
      +##### 4. Verify that it's working
      +  - `rclone -v --drive-impersonate foo@example.com lsf gdrive:backup`
      +  - The arguments do:
      +    - `-v` - verbose logging
      +    - `--drive-impersonate foo@example.com` - this is what does
      +the magic, pretending to be user foo.
      +    - `lsf` - list files in a parsing friendly way
      +    - `gdrive:backup` - use the remote called gdrive, work in
      +the folder named backup.
      +
      +Note: in case you configured a specific root folder on gdrive and rclone is unable to access the contents of that folder when using `--drive-impersonate`, do this instead:
      +  - in the gdrive web interface, share your root folder with the user/email of the new Service Account you created/selected at step #1
      +  - use rclone without specifying the `--drive-impersonate` option, like this:
      +        `rclone -v lsf gdrive:backup`
      +
      +
      +### Shared drives (team drives)
      +
      +If you want to configure the remote to point to a Google Shared Drive
      +(previously known as Team Drives) then answer `y` to the question
      +`Configure this as a Shared Drive (Team Drive)?`.
      +
      +This will fetch the list of Shared Drives from google and allow you to
      +configure which one you want to use. You can also type in a Shared
      +Drive ID if you prefer.
      +
      +For example:
      +
      +

    Configure this as a Shared Drive (Team Drive)?
    y) Yes
    n) No
    y/n> y
    Fetching Shared Drive list...
    Choose a number from below, or type in your own value
     1 / Rclone Test
       \ "xxxxxxxxxxxxxxxxxxxx"
     2 / Rclone Test 2
       \ "yyyyyyyyyyyyyyyyyyyy"
     3 / Rclone Test 3
       \ "zzzzzzzzzzzzzzzzzzzz"
    Enter a Shared Drive ID> 1
    --------------------
    [remote]
    client_id =
    client_secret =
    token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
    team_drive = xxxxxxxxxxxxxxxxxxxx
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y

      +
      
      +### --fast-list
      +
      +This remote supports `--fast-list` which allows you to use fewer
      +transactions in exchange for more memory. See the [rclone
      +docs](https://rclone.org/docs/#fast-list) for more details.
      +
      +It does this by combining multiple `list` calls into a single API request.
      +
+This works by combining many `'%s' in parents` filters into one expression.
+To list the contents of directories a, b and c, the following requests will be sent by the regular `List` function:
      +

trashed=false and 'a' in parents
trashed=false and 'b' in parents
trashed=false and 'c' in parents

      +
      These can now be combined into a single request:
      +

      trashed=false and ('a' in parents or 'b' in parents or 'c' in parents)

      +
      
      +The implementation of `ListR` will put up to 50 `parents` filters into one request.
+It will use the `--checkers` value to specify the number of requests to run in parallel.
      +
      +In tests, these batch requests were up to 20x faster than the regular method.
      +Running the following command against different sized folders gives:
      +

      rclone lsjson -vv -R --checkers=6 gdrive:folder

      +
      
      +small folder (220 directories, 700 files):
      +
      +- without `--fast-list`: 38s
      +- with `--fast-list`: 10s
      +
      +large folder (10600 directories, 39000 files):
      +
      +- without `--fast-list`: 22:05 min
      +- with `--fast-list`: 58s
      +
      +### Modified time
      +
      +Google drive stores modification times accurate to 1 ms.
      +
      +### Restricted filename characters
      +
      +Only Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8),
      +as they can't be used in JSON strings.
      +
      +In contrast to other backends, `/` can also be used in names and `.`
      +or `..` are valid names.
      +
      +### Revisions
      +
      +Google drive stores revisions of files.  When you upload a change to
      +an existing file to google drive using rclone it will create a new
      +revision of that file.
      +
      +Revisions follow the standard google policy which at time of writing
      +was
      +
      +  * They are deleted after 30 days or 100 revisions (whatever comes first).
      +  * They do not count towards a user storage quota.
      +
      +### Deleting files
      +
      +By default rclone will send all files to the trash when deleting
      +files.  If deleting them permanently is required then use the
      +`--drive-use-trash=false` flag, or set the equivalent environment
      +variable.
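+
+For example, to delete the contents of a directory bypassing the trash
+(the path is illustrative):
+
+    rclone delete --drive-use-trash=false remote:dir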
      +
      +### Shortcuts
      +
      +In March 2020 Google introduced a new feature in Google Drive called
      +[drive shortcuts](https://support.google.com/drive/answer/9700156)
      +([API](https://developers.google.com/drive/api/v3/shortcuts)). These
      +will (by September 2020) [replace the ability for files or folders to
      +be in multiple folders at once](https://cloud.google.com/blog/products/g-suite/simplifying-google-drives-folder-structure-and-sharing-models).
      +
      +Shortcuts are files that link to other files on Google Drive somewhat
      +like a symlink in unix, except they point to the underlying file data
      +(e.g. the inode in unix terms) so they don't break if the source is
      +renamed or moved about.
      +
      +By default rclone treats these as follows.
      +
      +For shortcuts pointing to files:
      +
      +- When listing a file shortcut appears as the destination file.
      +- When downloading the contents of the destination file is downloaded.
      +- When updating shortcut file with a non shortcut file, the shortcut is removed then a new file is uploaded in place of the shortcut.
      +- When server-side moving (renaming) the shortcut is renamed, not the destination file.
      +- When server-side copying the shortcut is copied, not the contents of the shortcut. (unless `--drive-copy-shortcut-content` is in use in which case the contents of the shortcut gets copied).
      +- When deleting the shortcut is deleted not the linked file.
      +- When setting the modification time, the modification time of the linked file will be set.
      +
      +For shortcuts pointing to folders:
      +
+- When listing the shortcut appears as a folder and that folder will contain the contents of the linked folder (including any sub folders)
      +- When downloading the contents of the linked folder and sub contents are downloaded
      +- When uploading to a shortcut folder the file will be placed in the linked folder
      +- When server-side moving (renaming) the shortcut is renamed, not the destination folder
      +- When server-side copying the contents of the linked folder is copied, not the shortcut.
      +- When deleting with `rclone rmdir` or `rclone purge` the shortcut is deleted not the linked folder.
+- **NB** When deleting with `rclone delete` or `rclone mount` the contents of the linked folder will be deleted.
      +
      +The [rclone backend](https://rclone.org/commands/rclone_backend/) command can be used to create shortcuts.  
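+
+For example (a sketch - the source and destination arguments are
+placeholders, see the backend command docs for the full usage):
+
+    rclone backend shortcut drive: source_item destination_shortcut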
      +
      +Shortcuts can be completely ignored with the `--drive-skip-shortcuts` flag
      +or the corresponding `skip_shortcuts` configuration setting.
      +
      +### Emptying trash
      +
      +If you wish to empty your trash you can use the `rclone cleanup remote:`
      +command which will permanently delete all your trashed files. This command
      +does not take any path arguments.
      +
      +Note that Google Drive takes some time (minutes to days) to empty the
      +trash even though the command returns within a few seconds.  No output
      +is echoed, so there will be no confirmation even using -v or -vv.
      +
      +### Quota information
      +
      +To view your current quota you can use the `rclone about remote:`
      +command which will display your usage limit (quota), the usage in Google
      +Drive, the size of all files in the Trash and the space used by other
      +Google services such as Gmail. This command does not take any path
      +arguments.
      +
      +#### Import/Export of google documents
      +
      +Google documents can be exported from and uploaded to Google Drive.
      +
      +When rclone downloads a Google doc it chooses a format to download
      +depending upon the `--drive-export-formats` setting.
      +By default the export formats are `docx,xlsx,pptx,svg` which are a
      +sensible default for an editable document.
      +
      +When choosing a format, rclone runs down the list provided in order
      +and chooses the first file format the doc can be exported as from the
      +list. If the file can't be exported to a format on the formats list,
      +then rclone will choose a format from the default list.
      +
      +If you prefer an archive copy then you might use `--drive-export-formats
      +pdf`, or if you prefer openoffice/libreoffice formats you might use
      +`--drive-export-formats ods,odt,odp`.
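+
+For example, to download everything as PDFs where possible (paths are
+illustrative):
+
+    rclone copy --drive-export-formats pdf remote:folder /path/to/local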
      +
      +Note that rclone adds the extension to the google doc, so if it is
      +called `My Spreadsheet` on google docs, it will be exported as `My
      +Spreadsheet.xlsx` or `My Spreadsheet.pdf` etc.
      +
      +When importing files into Google Drive, rclone will convert all
      +files with an extension in `--drive-import-formats` to their
      +associated document type.
      +rclone will not convert any files by default, since the conversion
+is a lossy process.
      +
      +The conversion must result in a file with the same extension when
      +the `--drive-export-formats` rules are applied to the uploaded document.
      +
      +Here are some examples for allowed and prohibited conversions.
      +
      +| export-formats | import-formats | Upload Ext | Document Ext | Allowed |
      +| -------------- | -------------- | ---------- | ------------ | ------- |
      +| odt | odt | odt | odt | Yes |
      +| odt | docx,odt | odt | odt | Yes |
      +|  | docx | docx | docx | Yes |
      +|  | odt | odt | docx | No |
      +| odt,docx | docx,odt | docx | odt | No |
      +| docx,odt | docx,odt | docx | docx | Yes |
      +| docx,odt | docx,odt | odt | docx | No |
      +
      +This limitation can be disabled by specifying `--drive-allow-import-name-change`.
      +When using this flag, rclone can convert multiple files types resulting
      +in the same document type at once, e.g. with `--drive-import-formats docx,odt,txt`,
      +all files having these extension would result in a document represented as a docx file.
      +This brings the additional risk of overwriting a document, if multiple files
      +have the same stem. Many rclone operations will not handle this name change
      +in any way. They assume an equal name when copying files and might copy the
      +file again or delete them when the name changes. 
      +
      +Here are the possible export extensions with their corresponding mime types.
+Most of these can also be used for importing, but there are more that are not
      +listed here. Some of these additional ones might only be available when
      +the operating system provides the correct MIME type entries.
      +
      +This list can be changed by Google Drive at any time and might not
      +represent the currently available conversions.
      +
      +| Extension | Mime Type | Description |
      +| --------- |-----------| ------------|
      +| bmp  | image/bmp | Windows Bitmap format |
      +| csv  | text/csv | Standard CSV format for Spreadsheets |
      +| doc  | application/msword | Classic Word file |
      +| docx | application/vnd.openxmlformats-officedocument.wordprocessingml.document | Microsoft Office Document |
      +| epub | application/epub+zip | E-book format |
      +| html | text/html | An HTML Document |
      +| jpg  | image/jpeg | A JPEG Image File |
      +| json | application/vnd.google-apps.script+json | JSON Text Format for Google Apps scripts |
      +| odp  | application/vnd.oasis.opendocument.presentation | Openoffice Presentation |
      +| ods  | application/vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet |
      +| ods  | application/x-vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet |
      +| odt  | application/vnd.oasis.opendocument.text | Openoffice Document |
      +| pdf  | application/pdf | Adobe PDF Format |
      +| pjpeg | image/pjpeg | Progressive JPEG Image |
      +| png  | image/png | PNG Image Format|
      +| pptx | application/vnd.openxmlformats-officedocument.presentationml.presentation | Microsoft Office Powerpoint |
      +| rtf  | application/rtf | Rich Text Format |
      +| svg  | image/svg+xml | Scalable Vector Graphics Format |
      +| tsv  | text/tab-separated-values | Standard TSV format for spreadsheets |
      +| txt  | text/plain | Plain Text |
      +| wmf  | application/x-msmetafile | Windows Meta File |
      +| xls  | application/vnd.ms-excel | Classic Excel file |
      +| xlsx | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet | Microsoft Office Spreadsheet |
+| zip  | application/zip | A ZIP file of HTML, Images and CSS |
      +
      +Google documents can also be exported as link files. These files will
      +open a browser window for the Google Docs website of that document
      +when opened. The link file extension has to be specified as a
      +`--drive-export-formats` parameter. They will match all available
      +Google Documents.
      +
      +| Extension | Description | OS Support |
      +| --------- | ----------- | ---------- |
      +| desktop | freedesktop.org specified desktop entry | Linux |
      +| link.html | An HTML Document with a redirect | All |
      +| url | INI style link file | macOS, Windows |
      +| webloc | macOS specific XML format | macOS |
      +
      +
      +### Standard options
      +
      +Here are the Standard options specific to drive (Google Drive).
      +
      +#### --drive-client-id
      +
+Google Application Client Id.
      +Setting your own is recommended.
      +See https://rclone.org/drive/#making-your-own-client-id for how to create your own.
      +If you leave this blank, it will use an internal key which is low performance.
      +
      +Properties:
      +
      +- Config:      client_id
      +- Env Var:     RCLONE_DRIVE_CLIENT_ID
      +- Type:        string
      +- Required:    false
      +
      +#### --drive-client-secret
      +
      +OAuth Client Secret.
      +
      +Leave blank normally.
      +
      +Properties:
      +
      +- Config:      client_secret
      +- Env Var:     RCLONE_DRIVE_CLIENT_SECRET
      +- Type:        string
      +- Required:    false
      +
      +#### --drive-scope
      +
      +Scope that rclone should use when requesting access from drive.
      +
      +Properties:
      +
      +- Config:      scope
      +- Env Var:     RCLONE_DRIVE_SCOPE
      +- Type:        string
      +- Required:    false
      +- Examples:
      +    - "drive"
      +        - Full access all files, excluding Application Data Folder.
      +    - "drive.readonly"
      +        - Read-only access to file metadata and file contents.
      +    - "drive.file"
      +        - Access to files created by rclone only.
      +        - These are visible in the drive website.
      +        - File authorization is revoked when the user deauthorizes the app.
      +    - "drive.appfolder"
      +        - Allows read and write access to the Application Data folder.
      +        - This is not visible in the drive website.
      +    - "drive.metadata.readonly"
      +        - Allows read-only access to file metadata but
      +        - does not allow any access to read or download file content.
      +
      +#### --drive-service-account-file
      +
      +Service Account Credentials JSON file path.
      +
      +Leave blank normally.
+Needed only if you want to use SA instead of interactive login.
      +
      +Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
      +
      +Properties:
      +
      +- Config:      service_account_file
      +- Env Var:     RCLONE_DRIVE_SERVICE_ACCOUNT_FILE
      +- Type:        string
      +- Required:    false
      +
      +#### --drive-alternate-export
      +
      +Deprecated: No longer needed.
      +
      +Properties:
      +
      +- Config:      alternate_export
      +- Env Var:     RCLONE_DRIVE_ALTERNATE_EXPORT
      +- Type:        bool
      +- Default:     false
      +
      +### Advanced options
      +
      +Here are the Advanced options specific to drive (Google Drive).
      +
      +#### --drive-token
      +
      +OAuth Access Token as a JSON blob.
      +
      +Properties:
      +
      +- Config:      token
      +- Env Var:     RCLONE_DRIVE_TOKEN
      +- Type:        string
      +- Required:    false
      +
      +#### --drive-auth-url
      +
      +Auth server URL.
      +
      +Leave blank to use the provider defaults.
      +
      +Properties:
      +
      +- Config:      auth_url
      +- Env Var:     RCLONE_DRIVE_AUTH_URL
      +- Type:        string
      +- Required:    false
      +
      +#### --drive-token-url
      +
+Token server URL.
      +
      +Leave blank to use the provider defaults.
      +
      +Properties:
      +
      +- Config:      token_url
      +- Env Var:     RCLONE_DRIVE_TOKEN_URL
      +- Type:        string
      +- Required:    false
      +
      +#### --drive-root-folder-id
      +
      +ID of the root folder.
      +Leave blank normally.
      +
      +Fill in to access "Computers" folders (see docs), or for rclone to use
      +a non root folder as its starting point.
      +
      +
      +Properties:
      +
      +- Config:      root_folder_id
      +- Env Var:     RCLONE_DRIVE_ROOT_FOLDER_ID
      +- Type:        string
      +- Required:    false
      +
      +#### --drive-service-account-credentials
      +
      +Service Account Credentials JSON blob.
      +
      +Leave blank normally.
+Needed only if you want to use SA instead of interactive login.
      +
      +Properties:
      +
      +- Config:      service_account_credentials
      +- Env Var:     RCLONE_DRIVE_SERVICE_ACCOUNT_CREDENTIALS
      +- Type:        string
      +- Required:    false
      +
      +#### --drive-team-drive
      +
      +ID of the Shared Drive (Team Drive).
      +
      +Properties:
      +
      +- Config:      team_drive
      +- Env Var:     RCLONE_DRIVE_TEAM_DRIVE
      +- Type:        string
      +- Required:    false
      +
      +#### --drive-auth-owner-only
      +
      +Only consider files owned by the authenticated user.
      +
      +Properties:
      +
      +- Config:      auth_owner_only
      +- Env Var:     RCLONE_DRIVE_AUTH_OWNER_ONLY
      +- Type:        bool
      +- Default:     false
      +
      +#### --drive-use-trash
      +
      +Send files to the trash instead of deleting permanently.
      +
      +Defaults to true, namely sending files to the trash.
      +Use `--drive-use-trash=false` to delete files permanently instead.
      +
      +Properties:
      +
      +- Config:      use_trash
      +- Env Var:     RCLONE_DRIVE_USE_TRASH
      +- Type:        bool
      +- Default:     true
      +
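+For example, to delete files permanently rather than sending them to
+the trash (a sketch; `gdrive` is a hypothetical remote name):
+
+    rclone delete gdrive:old-backups --drive-use-trash=false
+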
      +#### --drive-copy-shortcut-content
      +
      +Server side copy contents of shortcuts instead of the shortcut.
      +
      +When doing server side copies, normally rclone will copy shortcuts as
      +shortcuts.
      +
      +If this flag is used then rclone will copy the contents of shortcuts
      +rather than shortcuts themselves when doing server side copies.
      +
      +Properties:
      +
      +- Config:      copy_shortcut_content
      +- Env Var:     RCLONE_DRIVE_COPY_SHORTCUT_CONTENT
      +- Type:        bool
      +- Default:     false
      +
      +#### --drive-skip-gdocs
      +
      +Skip google documents in all listings.
      +
      +If given, gdocs practically become invisible to rclone.
      +
      +Properties:
      +
      +- Config:      skip_gdocs
      +- Env Var:     RCLONE_DRIVE_SKIP_GDOCS
      +- Type:        bool
      +- Default:     false
      +
      +#### --drive-skip-checksum-gphotos
      +
      +Skip MD5 checksum on Google photos and videos only.
      +
      +Use this if you get checksum errors when transferring Google photos or
      +videos.
      +
      +Setting this flag will cause Google photos and videos to return a
      +blank MD5 checksum.
      +
      +Google photos are identified by being in the "photos" space.
      +
      +Corrupted checksums are caused by Google modifying the image/video but
      +not updating the checksum.
      +
      +Properties:
      +
      +- Config:      skip_checksum_gphotos
      +- Env Var:     RCLONE_DRIVE_SKIP_CHECKSUM_GPHOTOS
      +- Type:        bool
      +- Default:     false
      +
      +#### --drive-shared-with-me
      +
      +Only show files that are shared with me.
      +
      +Instructs rclone to operate on your "Shared with me" folder (where
      +Google Drive lets you access the files and folders others have shared
      +with you).
      +
      +This works both with the "list" (lsd, lsl, etc.) and the "copy"
      +commands (copy, sync, etc.), and with all other commands too.
      +
      +Properties:
      +
      +- Config:      shared_with_me
      +- Env Var:     RCLONE_DRIVE_SHARED_WITH_ME
      +- Type:        bool
      +- Default:     false
      +
      +#### --drive-trashed-only
      +
      +Only show files that are in the trash.
      +
      +This will show trashed files in their original directory structure.
      +
      +Properties:
      +
      +- Config:      trashed_only
      +- Env Var:     RCLONE_DRIVE_TRASHED_ONLY
      +- Type:        bool
      +- Default:     false
      +
      +#### --drive-starred-only
      +
      +Only show files that are starred.
      +
      +Properties:
      +
      +- Config:      starred_only
      +- Env Var:     RCLONE_DRIVE_STARRED_ONLY
      +- Type:        bool
      +- Default:     false
      +
      +#### --drive-formats
      +
      +Deprecated: See export_formats.
      +
      +Properties:
      +
      +- Config:      formats
      +- Env Var:     RCLONE_DRIVE_FORMATS
      +- Type:        string
      +- Required:    false
      +
      +#### --drive-export-formats
      +
      +Comma separated list of preferred formats for downloading Google docs.
      +
      +Properties:
      +
      +- Config:      export_formats
      +- Env Var:     RCLONE_DRIVE_EXPORT_FORMATS
      +- Type:        string
      +- Default:     "docx,xlsx,pptx,svg"
      +
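+For example, to download Google docs as PDF only (a sketch; `gdrive`
+is a hypothetical remote name):
+
+    rclone copy gdrive:reports /tmp/reports --drive-export-formats pdf
+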
      +#### --drive-import-formats
      +
      +Comma separated list of preferred formats for uploading Google docs.
      +
      +Properties:
      +
      +- Config:      import_formats
      +- Env Var:     RCLONE_DRIVE_IMPORT_FORMATS
      +- Type:        string
      +- Required:    false
      +
      +#### --drive-allow-import-name-change
      +
      +Allow the filetype to change when uploading Google docs.
      +
+E.g. file.doc to file.docx. This will confuse sync and cause a reupload every time.
      +
      +Properties:
      +
      +- Config:      allow_import_name_change
      +- Env Var:     RCLONE_DRIVE_ALLOW_IMPORT_NAME_CHANGE
      +- Type:        bool
      +- Default:     false
      +
      +#### --drive-use-created-date
      +
      +Use file created date instead of modified date.
      +
      +Useful when downloading data and you want the creation date used in
      +place of the last modified date.
      +
      +**WARNING**: This flag may have some unexpected consequences.
      +
+When uploading to your drive, all files will be overwritten unless they
+haven't been modified since their creation. The inverse will occur
+while downloading.  This side effect can be avoided by using the
+"--checksum" flag.
      +
+This feature was implemented to retain photos' capture dates as
+recorded by google photos. You will first need to check the "Create a
+Google Photos folder" option in your google drive settings. You can
+then copy or move the photos locally and have the date the image was
+taken (created) set as the modification date.
      +
      +Properties:
      +
      +- Config:      use_created_date
      +- Env Var:     RCLONE_DRIVE_USE_CREATED_DATE
      +- Type:        bool
      +- Default:     false
      +
      +#### --drive-use-shared-date
      +
      +Use date file was shared instead of modified date.
      +
      +Note that, as with "--drive-use-created-date", this flag may have
      +unexpected consequences when uploading/downloading files.
      +
      +If both this flag and "--drive-use-created-date" are set, the created
      +date is used.
      +
      +Properties:
      +
      +- Config:      use_shared_date
      +- Env Var:     RCLONE_DRIVE_USE_SHARED_DATE
      +- Type:        bool
      +- Default:     false
      +
      +#### --drive-list-chunk
      +
      +Size of listing chunk 100-1000, 0 to disable.
      +
      +Properties:
      +
      +- Config:      list_chunk
      +- Env Var:     RCLONE_DRIVE_LIST_CHUNK
      +- Type:        int
      +- Default:     1000
      +
      +#### --drive-impersonate
      +
      +Impersonate this user when using a service account.
      +
      +Properties:
      +
      +- Config:      impersonate
      +- Env Var:     RCLONE_DRIVE_IMPERSONATE
      +- Type:        string
      +- Required:    false
      +
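+For example, a Workspace admin with a suitably authorised service
+account might list another user's drive like this (a sketch; the file
+name and email address are placeholders):
+
+    rclone lsd gdrive: --drive-service-account-file sa.json --drive-impersonate user@example.com
+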
      +#### --drive-upload-cutoff
      +
      +Cutoff for switching to chunked upload.
      +
      +Properties:
      +
      +- Config:      upload_cutoff
      +- Env Var:     RCLONE_DRIVE_UPLOAD_CUTOFF
      +- Type:        SizeSuffix
      +- Default:     8Mi
      +
      +#### --drive-chunk-size
      +
      +Upload chunk size.
      +
+Must be a power of 2 >= 256k.
      +
+Making this larger will improve performance, but note that each chunk
+is buffered in memory, one per transfer.
      +
      +Reducing this will reduce memory usage but decrease performance.
      +
      +Properties:
      +
      +- Config:      chunk_size
      +- Env Var:     RCLONE_DRIVE_CHUNK_SIZE
      +- Type:        SizeSuffix
      +- Default:     8Mi
      +
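+Like all options, this can also be set via its environment variable.
+For example, to trade more memory for faster uploads of large files
+(a sketch; `gdrive` is a hypothetical remote name):
+
+    RCLONE_DRIVE_CHUNK_SIZE=256M rclone copy /data gdrive:data
+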
      +#### --drive-acknowledge-abuse
      +
      +Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      +
      +If downloading a file returns the error "This file has been identified
      +as malware or spam and cannot be downloaded" with the error code
      +"cannotDownloadAbusiveFile" then supply this flag to rclone to
      +indicate you acknowledge the risks of downloading the file and rclone
      +will download it anyway.
      +
+Note that if you are using a service account it will need Manager
+permission (not Content Manager) for this flag to work. If the SA
+does not have the right permission, Google will just ignore the flag.
      +
      +Properties:
      +
      +- Config:      acknowledge_abuse
      +- Env Var:     RCLONE_DRIVE_ACKNOWLEDGE_ABUSE
      +- Type:        bool
      +- Default:     false
      +
      +#### --drive-keep-revision-forever
      +
      +Keep new head revision of each file forever.
      +
      +Properties:
      +
      +- Config:      keep_revision_forever
      +- Env Var:     RCLONE_DRIVE_KEEP_REVISION_FOREVER
      +- Type:        bool
      +- Default:     false
      +
      +#### --drive-size-as-quota
      +
      +Show sizes as storage quota usage, not actual size.
      +
      +Show the size of a file as the storage quota used. This is the
      +current version plus any older versions that have been set to keep
      +forever.
      +
      +**WARNING**: This flag may have some unexpected consequences.
      +
      +It is not recommended to set this flag in your config - the
      +recommended usage is using the flag form --drive-size-as-quota when
      +doing rclone ls/lsl/lsf/lsjson/etc only.
      +
+If you do use this flag for syncing (not recommended) then you will
+need to use --ignore-size also.
      +
      +Properties:
      +
      +- Config:      size_as_quota
      +- Env Var:     RCLONE_DRIVE_SIZE_AS_QUOTA
      +- Type:        bool
      +- Default:     false
      +
      +#### --drive-v2-download-min-size
      +
+If objects are greater than this, use the drive v2 API to download.
      +
      +Properties:
      +
      +- Config:      v2_download_min_size
      +- Env Var:     RCLONE_DRIVE_V2_DOWNLOAD_MIN_SIZE
      +- Type:        SizeSuffix
      +- Default:     off
      +
      +#### --drive-pacer-min-sleep
      +
      +Minimum time to sleep between API calls.
      +
      +Properties:
      +
      +- Config:      pacer_min_sleep
      +- Env Var:     RCLONE_DRIVE_PACER_MIN_SLEEP
      +- Type:        Duration
      +- Default:     100ms
      +
      +#### --drive-pacer-burst
      +
      +Number of API calls to allow without sleeping.
      +
      +Properties:
      +
      +- Config:      pacer_burst
      +- Env Var:     RCLONE_DRIVE_PACER_BURST
      +- Type:        int
      +- Default:     100
      +
      +#### --drive-server-side-across-configs
      +
      +Deprecated: use --server-side-across-configs instead.
      +
      +Allow server-side operations (e.g. copy) to work across different drive configs.
      +
      +This can be useful if you wish to do a server-side copy between two
      +different Google drives.  Note that this isn't enabled by default
      +because it isn't easy to tell if it will work between any two
      +configurations.
      +
      +Properties:
      +
      +- Config:      server_side_across_configs
      +- Env Var:     RCLONE_DRIVE_SERVER_SIDE_ACROSS_CONFIGS
      +- Type:        bool
      +- Default:     false
      +
      +#### --drive-disable-http2
      +
      +Disable drive using http2.
      +
      +There is currently an unsolved issue with the google drive backend and
      +HTTP/2.  HTTP/2 is therefore disabled by default for the drive backend
      +but can be re-enabled here.  When the issue is solved this flag will
      +be removed.
      +
      +See: https://github.com/rclone/rclone/issues/3631
      +
      +
      +
      +Properties:
      +
      +- Config:      disable_http2
      +- Env Var:     RCLONE_DRIVE_DISABLE_HTTP2
      +- Type:        bool
      +- Default:     true
      +
      +#### --drive-stop-on-upload-limit
      +
      +Make upload limit errors be fatal.
      +
      +At the time of writing it is only possible to upload 750 GiB of data to
      +Google Drive a day (this is an undocumented limit). When this limit is
      +reached Google Drive produces a slightly different error message. When
      +this flag is set it causes these errors to be fatal.  These will stop
      +the in-progress sync.
      +
+Note that this detection relies on error message strings which
+Google don't document, so it may break in the future.
      +
      +See: https://github.com/rclone/rclone/issues/3857
      +
      +
      +Properties:
      +
      +- Config:      stop_on_upload_limit
      +- Env Var:     RCLONE_DRIVE_STOP_ON_UPLOAD_LIMIT
      +- Type:        bool
      +- Default:     false
      +
      +#### --drive-stop-on-download-limit
      +
      +Make download limit errors be fatal.
      +
      +At the time of writing it is only possible to download 10 TiB of data from
      +Google Drive a day (this is an undocumented limit). When this limit is
      +reached Google Drive produces a slightly different error message. When
      +this flag is set it causes these errors to be fatal.  These will stop
      +the in-progress sync.
      +
+Note that this detection relies on error message strings which
+Google don't document, so it may break in the future.
      +
      +
      +Properties:
      +
      +- Config:      stop_on_download_limit
      +- Env Var:     RCLONE_DRIVE_STOP_ON_DOWNLOAD_LIMIT
      +- Type:        bool
      +- Default:     false
      +
      +#### --drive-skip-shortcuts
      +
      +If set skip shortcut files.
      +
      +Normally rclone dereferences shortcut files making them appear as if
      +they are the original file (see [the shortcuts section](#shortcuts)).
      +If this flag is set then rclone will ignore shortcut files completely.
      +
      +
      +Properties:
      +
      +- Config:      skip_shortcuts
      +- Env Var:     RCLONE_DRIVE_SKIP_SHORTCUTS
      +- Type:        bool
      +- Default:     false
      +
      +#### --drive-skip-dangling-shortcuts
      +
      +If set skip dangling shortcut files.
      +
      +If this is set then rclone will not show any dangling shortcuts in listings.
      +
      +
      +Properties:
      +
      +- Config:      skip_dangling_shortcuts
      +- Env Var:     RCLONE_DRIVE_SKIP_DANGLING_SHORTCUTS
      +- Type:        bool
      +- Default:     false
      +
      +#### --drive-resource-key
      +
      +Resource key for accessing a link-shared file.
      +
      +If you need to access files shared with a link like this
      +
      +    https://drive.google.com/drive/folders/XXX?resourcekey=YYY&usp=sharing
      +
      +Then you will need to use the first part "XXX" as the "root_folder_id"
      +and the second part "YYY" as the "resource_key" otherwise you will get
      +404 not found errors when trying to access the directory.
      +
      +See: https://developers.google.com/drive/api/guides/resource-keys
      +
      +This resource key requirement only applies to a subset of old files.
      +
      +Note also that opening the folder once in the web interface (with the
      +user you've authenticated rclone with) seems to be enough so that the
      +resource key is not needed.
      +
      +
      +Properties:
      +
      +- Config:      resource_key
      +- Env Var:     RCLONE_DRIVE_RESOURCE_KEY
      +- Type:        string
      +- Required:    false
      +
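+For example, using the "XXX" and "YYY" parts of the link above, the
+shared folder could be listed with a connection string remote (a
+sketch; substitute the real values):
+
+    rclone lsf "gdrive,root_folder_id=XXX,resource_key=YYY:"
+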
      +#### --drive-fast-list-bug-fix
      +
      +Work around a bug in Google Drive listing.
      +
      +Normally rclone will work around a bug in Google Drive when using
      +--fast-list (ListR) where the search "(A in parents) or (B in
      +parents)" returns nothing sometimes. See #3114, #4289 and
      +https://issuetracker.google.com/issues/149522397
      +
      +Rclone detects this by finding no items in more than one directory
      +when listing and retries them as lists of individual directories.
      +
      +This means that if you have a lot of empty directories rclone will end
      +up listing them all individually and this can take many more API
      +calls.
      +
      +This flag allows the work-around to be disabled. This is **not**
      +recommended in normal use - only if you have a particular case you are
      +having trouble with like many empty directories.
      +
      +
      +Properties:
      +
      +- Config:      fast_list_bug_fix
      +- Env Var:     RCLONE_DRIVE_FAST_LIST_BUG_FIX
      +- Type:        bool
      +- Default:     true
      +
      +#### --drive-encoding
      +
      +The encoding for the backend.
      +
      +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
      +
      +Properties:
      +
      +- Config:      encoding
      +- Env Var:     RCLONE_DRIVE_ENCODING
      +- Type:        MultiEncoder
      +- Default:     InvalidUtf8
      +
      +#### --drive-env-auth
      +
+Get IAM credentials from runtime (environment variables or instance
+metadata if no env vars).
+
+Only applies if service_account_file and service_account_credentials
+are blank.
      +
      +Properties:
      +
      +- Config:      env_auth
      +- Env Var:     RCLONE_DRIVE_ENV_AUTH
      +- Type:        bool
      +- Default:     false
      +- Examples:
      +    - "false"
      +        - Enter credentials in the next step.
      +    - "true"
      +        - Get GCP IAM credentials from the environment (env vars or IAM).
      +
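+For example (a sketch, assuming application default credentials are
+available in the environment, e.g. via GOOGLE_APPLICATION_CREDENTIALS):
+
+    rclone lsd gdrive: --drive-env-auth
+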
      +## Backend commands
      +
      +Here are the commands specific to the drive backend.
      +
      +Run them with
      +
      +    rclone backend COMMAND remote:
      +
      +The help below will explain what arguments each command takes.
      +
      +See the [backend](https://rclone.org/commands/rclone_backend/) command for more
      +info on how to pass options and arguments.
      +
      +These can be run on a running backend using the rc command
      +[backend/command](https://rclone.org/rc/#backend-command).
      +
      +### get
      +
      +Get command for fetching the drive config parameters
      +
      +    rclone backend get remote: [options] [<arguments>+]
      +
      +This is a get command which will be used to fetch the various drive config parameters
      +
      +Usage Examples:
      +
      +    rclone backend get drive: [-o service_account_file] [-o chunk_size]
      +    rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o chunk_size]
      +
      +
      +Options:
      +
      +- "chunk_size": show the current upload chunk size
      +- "service_account_file": show the current service account file
      +
      +### set
      +
      +Set command for updating the drive config parameters
      +
      +    rclone backend set remote: [options] [<arguments>+]
      +
      +This is a set command which will be used to update the various drive config parameters
      +
      +Usage Examples:
      +
      +    rclone backend set drive: [-o service_account_file=sa.json] [-o chunk_size=67108864]
      +    rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json] [-o chunk_size=67108864]
      +
      +
      +Options:
      +
      +- "chunk_size": update the current upload chunk size
      +- "service_account_file": update the current service account file
      +
      +### shortcut
      +
      +Create shortcuts from files or directories
      +
      +    rclone backend shortcut remote: [options] [<arguments>+]
      +
      +This command creates shortcuts from files or directories.
      +
      +Usage:
      +
      +    rclone backend shortcut drive: source_item destination_shortcut
      +    rclone backend shortcut drive: source_item -o target=drive2: destination_shortcut
      +
      +In the first example this creates a shortcut from the "source_item"
      +which can be a file or a directory to the "destination_shortcut". The
      +"source_item" and the "destination_shortcut" should be relative paths
      +from "drive:"
      +
      +In the second example this creates a shortcut from the "source_item"
      +relative to "drive:" to the "destination_shortcut" relative to
      +"drive2:". This may fail with a permission error if the user
      +authenticated with "drive2:" can't read files from "drive:".
      +
      +
      +Options:
      +
      +- "target": optional target remote for the shortcut destination
      +
      +### drives
      +
      +List the Shared Drives available to this account
      +
      +    rclone backend drives remote: [options] [<arguments>+]
      +
      +This command lists the Shared Drives (Team Drives) available to this
      +account.
      +
      +Usage:
      +
      +    rclone backend [-o config] drives drive:
      +
      +This will return a JSON list of objects like this
      +
      +    [
      +        {
      +            "id": "0ABCDEF-01234567890",
      +            "kind": "drive#teamDrive",
      +            "name": "My Drive"
      +        },
      +        {
      +            "id": "0ABCDEFabcdefghijkl",
      +            "kind": "drive#teamDrive",
      +            "name": "Test Drive"
      +        }
      +    ]
      +
      +With the -o config parameter it will output the list in a format
      +suitable for adding to a config file to make aliases for all the
      +drives found and a combined drive.
      +
      +    [My Drive]
      +    type = alias
      +    remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
      +
      +    [Test Drive]
      +    type = alias
      +    remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
      +
      +    [AllDrives]
      +    type = combine
      +    upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
      +
      +Adding this to the rclone config file will cause those team drives to
      +be accessible with the aliases shown. Any illegal characters will be
      +substituted with "_" and duplicate names will have numbers suffixed.
      +It will also add a remote called AllDrives which shows all the shared
      +drives combined into one directory tree.
      +
      +
      +### untrash
      +
      +Untrash files and directories
      +
      +    rclone backend untrash remote: [options] [<arguments>+]
      +
      +This command untrashes all the files and directories in the directory
      +passed in recursively.
      +
      +Usage:
      +
+This takes an optional directory to untrash, which makes it easier
+to use via the API.
      +
      +    rclone backend untrash drive:directory
      +    rclone backend --interactive untrash drive:directory subdir
      +
      +Use the --interactive/-i or --dry-run flag to see what would be restored before restoring it.
      +
      +Result:
      +
      +    {
      +        "Untrashed": 17,
      +        "Errors": 0
      +    }
      +
      +
      +### copyid
      +
      +Copy files by ID
      +
      +    rclone backend copyid remote: [options] [<arguments>+]
      +
      +This command copies files by ID
      +
      +Usage:
      +
      +    rclone backend copyid drive: ID path
      +    rclone backend copyid drive: ID1 path1 ID2 path2
      +
      +It copies the drive file with ID given to the path (an rclone path which
      +will be passed internally to rclone copyto). The ID and path pairs can be
      +repeated.
      +
+The path should end with a / to indicate that the file should be
+copied as named to this directory. If it doesn't end with a / then
+the last path component will be used as the file name.
      +
      +If the destination is a drive backend then server-side copying will be
      +attempted if possible.
      +
      +Use the --interactive/-i or --dry-run flag to see what would be copied before copying.
      +
      +
      +### exportformats
      +
      +Dump the export formats for debug purposes
      +
      +    rclone backend exportformats remote: [options] [<arguments>+]
      +
      +### importformats
      +
      +Dump the import formats for debug purposes
      +
      +    rclone backend importformats remote: [options] [<arguments>+]
      +
      +
      +
      +## Limitations
      +
+Drive has quite a lot of rate limiting.  This causes rclone to be
+limited to transferring about 2 files per second.  Individual
+files may be transferred much faster at 100s of MiB/s but lots of
+small files can take a long time.
      +
      +Server side copies are also subject to a separate rate limit. If you
      +see User rate limit exceeded errors, wait at least 24 hours and retry.
      +You can disable server-side copies with `--disable copy` to download
      +and upload the files if you prefer.
      +
      +### Limitations of Google Docs
      +
      +Google docs will appear as size -1 in `rclone ls`, `rclone ncdu` etc,
      +and as size 0 in anything which uses the VFS layer, e.g. `rclone mount`
      +and `rclone serve`. When calculating directory totals, e.g. in
      +`rclone size` and `rclone ncdu`, they will be counted in as empty
      +files.
      +
      +This is because rclone can't find out the size of the Google docs
      +without downloading them.
      +
      +Google docs will transfer correctly with `rclone sync`, `rclone copy`
      +etc as rclone knows to ignore the size when doing the transfer.
      +
      +However an unfortunate consequence of this is that you may not be able
      +to download Google docs using `rclone mount`. If it doesn't work you
      +will get a 0 sized file.  If you try again the doc may gain its
+correct size and be downloadable. Whether it will work or not depends
      +on the application accessing the mount and the OS you are running -
      +experiment to find out if it does work for you!
      +
      +### Duplicated files
      +
+Sometimes, for no reason I've been able to track down, drive will
+duplicate a file that rclone uploads.  Drive, unlike all the other
+remotes, can have duplicated files.
      +
      +Duplicated files cause problems with the syncing and you will see
      +messages in the log about duplicates.
      +
      +Use `rclone dedupe` to fix duplicated files.
      +
      +Note that this isn't just a problem with rclone, even Google Photos on
      +Android duplicates files on drive sometimes.
      +
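+For example, a cautious approach might be to preview and then fix
+duplicates (a sketch; `gdrive` is a hypothetical remote name):
+
+    rclone dedupe --dry-run gdrive:
+    rclone dedupe --dedupe-mode newest gdrive:
+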
      +### Rclone appears to be re-copying files it shouldn't
      +
      +The most likely cause of this is the duplicated file issue above - run
      +`rclone dedupe` and check your logs for duplicate object or directory
      +messages.
      +
+This can also be caused by a delay/caching on google drive's end when
+comparing directory listings, specifically with team drives used in
+combination with --fast-list. Files that were uploaded recently may
+not appear on the directory list sent to rclone when using --fast-list.
      +
      +Waiting a moderate period of time between attempts (estimated to be
      +approximately 1 hour) and/or not using --fast-list both seem to be
      +effective in preventing the problem.
      +
      +## Making your own client_id
      +
      +When you use rclone with Google drive in its default configuration you
      +are using rclone's client_id.  This is shared between all the rclone
      +users.  There is a global rate limit on the number of queries per
      +second that each client_id can do set by Google.  rclone already has a
      +high quota and I will continue to make sure it is high enough by
      +contacting Google.
      +
+It is strongly recommended to use your own client ID as the default
+rclone ID is heavily used. If you have multiple services running, it
+is recommended to use an API key for each service. The default Google
+quota is 10 transactions per second, so it is best to stay under that
+number; exceeding it will cause rclone to be rate limited, making
+things slower.
      +
      +Here is how to create your own Google Drive client ID for rclone:
      +
      +1. Log into the [Google API
      +Console](https://console.developers.google.com/) with your Google
      +account. It doesn't matter what Google account you use. (It need not
      +be the same account as the Google Drive you want to access)
      +
      +2. Select a project or create a new project.
      +
      +3. Under "ENABLE APIS AND SERVICES" search for "Drive", and enable the
      +"Google Drive API".
      +
      +4. Click "Credentials" in the left-side panel (not "Create
      +credentials", which opens the wizard).
      +
+5. If you already configured an "OAuth Consent Screen", then skip
+to the next step; if not, click the "CONFIGURE CONSENT SCREEN" button
+(near the top right corner of the right panel), then select "External"
+and click on "CREATE"; on the next screen, enter an "Application name"
+("rclone" is OK); enter "User Support Email" (your own email is OK);
+enter "Developer Contact Email" (your own email is OK); then click on
+"Save" (all other data is optional). You will also have to add some scopes,
+including `.../auth/docs` and `.../auth/drive` in order to be able to edit,
+create and delete files with rclone. You may also want to include the
+`.../auth/drive.metadata.readonly` scope. After adding scopes, click
+"Save and continue" to add test users. Be sure to add your own account to
+the test users. Once you've added yourself as a test user and saved the
+changes, click again on "Credentials" on the left panel to go back to
+the "Credentials" screen.
      +
      +   (PS: if you are a GSuite user, you could also select "Internal" instead
      +of "External" above, but this will restrict API use to Google Workspace 
      +users in your organisation). 
      +
      +6.  Click on the "+ CREATE CREDENTIALS" button at the top of the screen,
      +then select "OAuth client ID".
      +
+7. Choose an application type of "Desktop app" and click "Create" (the default name is fine).
      +
      +8. It will show you a client ID and client secret. Make a note of these.
      +   
      +   (If you selected "External" at Step 5 continue to Step 9. 
      +   If you chose "Internal" you don't need to publish and can skip straight to
      +   Step 10 but your destination drive must be part of the same Google Workspace.)
      +
+9. Go to "OAuth consent screen" and then click the "PUBLISH APP" button
+   and confirm. You will also want to add yourself as a test user.
      +
      +10. Provide the noted client ID and client secret to rclone.
      +
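+For example, an existing remote can be switched to the new credentials
+without re-running the whole wizard (a sketch; the ID and secret are
+placeholders), then reconnected to fetch a fresh token:
+
+    rclone config update gdrive client_id XXX.apps.googleusercontent.com client_secret YYY
+    rclone config reconnect gdrive:
+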
+Be aware that, due to the "enhanced security" recently introduced by
+Google, you are theoretically expected to "submit your app for verification"
+and then wait a few weeks(!) for their response; in practice, you can go right
+ahead and use the client ID and client secret with rclone. The only issue will
+be a very scary confirmation screen shown when you connect via your browser
+for rclone to be able to get its token-id (but as this only happens during
+the remote configuration, it's not such a big deal). Keeping the application
+in "Testing" will also work, but any grants will expire after a week, which
+can be annoying to refresh constantly; if a short grant time is not a problem
+for you, then keeping the application in testing mode is sufficient.
      +
      +(Thanks to @balazer on github for these instructions.)
      +
      +Sometimes, creation of an OAuth consent in Google API Console fails due to an error message
      +“The request failed because changes to one of the field of the resource is not supported”.
      +As a convenient workaround, the necessary Google Drive API key can be created on the
      +[Python Quickstart](https://developers.google.com/drive/api/v3/quickstart/python) page.
      +Just push the Enable the Drive API button to receive the Client ID and Secret.
      +Note that it will automatically create a new project in the API Console.
      +
+# Google Photos
      +
      +The rclone backend for [Google Photos](https://www.google.com/photos/about/) is
      +a specialized backend for transferring photos and videos to and from
      +Google Photos.
      +
      +**NB** The Google Photos API which rclone uses has quite a few
      +limitations, so please read the [limitations section](#limitations)
      +carefully to make sure it is suitable for your use.
      +
      +## Configuration
      +
+The initial setup for Google Photos involves getting a token from
+Google Photos which you need to do in your browser.  `rclone config`
+walks you through it.
      +
      +Here is an example of how to make a remote called `remote`.  First run:
      +
      +     rclone config
      +
      +This will guide you through an interactive setup process:
      +
      +

```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Google Photos
   \ "google photos"
[snip]
Storage> google photos
** See help for google photos backend at: https://rclone.org/googlephotos/ **

Google Application Client Id
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_id>
Google Application Client Secret
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_secret>
Set to make the Google Photos backend read only.

If you choose read only then rclone will only request read only access
to your photos, otherwise rclone will request full access.
Enter a boolean value (true or false). Press Enter for the default ("false").
read_only>
Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
Use web browser to automatically authenticate rclone with remote?
 * Say Y if the machine running rclone has a web browser you can use
 * Say N if running rclone on a (remote) machine without web browser access
If not sure try Y. If Y failed, try N.
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code

*** IMPORTANT: All media items uploaded to Google Photos with rclone
*** are stored in full resolution at original quality.  These uploads
*** will count towards storage in your Google Account.

--------------------
[remote]
type = google photos
token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2019-06-28T17:38:04.644930156+01:00"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from Google if using web browser to automatically authenticate. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this may require you to unblock it temporarily if you are running a host firewall, or use manual mode.

This remote is called `remote` and can now be used like this

See all the albums in your photos

    rclone lsd remote:album

Make a new album

    rclone mkdir remote:album/newAlbum

List the contents of an album

    rclone ls remote:album/newAlbum

Sync /home/local/images to Google Photos, removing any excess files in the album.

    rclone sync --interactive /home/local/image remote:album/newAlbum

### Layout

As Google Photos is not a general purpose cloud storage system, the backend is laid out to help you navigate it.

The directories under media show different ways of categorizing the media. Each file will appear multiple times. So if you want to make a backup of your google photos you might choose to backup remote:media/by-month. (NB remote:media/by-day is rather slow at the moment so avoid for syncing.)

Note that all your photos and videos will appear somewhere under media, but they may not appear under album unless you've put them into albums.

    /
    - upload
        - file1.jpg
        - file2.jpg
        - ...
    - media
        - all
            - file1.jpg
            - file2.jpg
            - ...
        - by-year
            - 2000
                - file1.jpg
                - ...
            - 2001
                - file2.jpg
                - ...
            - ...
        - by-month
            - 2000
                - 2000-01
                    - file1.jpg
                    - ...
                - 2000-02
                    - file2.jpg
                    - ...
            - ...
        - by-day
            - 2000
                - 2000-01-01
                    - file1.jpg
                    - ...
                - 2000-01-02
                    - file2.jpg
                    - ...
            - ...
    - album
        - album name
        - album name/sub
    - shared-album
        - album name
        - album name/sub
    - feature
        - favorites
            - file1.jpg
            - file2.jpg

There are two writable parts of the tree, the upload directory and sub directories of the album directory.

The upload directory is for uploading files you don't want to put into albums. This will be empty to start with and will contain the files you've uploaded for one rclone session only, becoming empty again when you restart rclone. The use case for this would be if you have a load of files you just want to dump into Google Photos once off. For repeated syncing, uploading to album will work better.

Directories within the album directory are also writeable and you may create new directories (albums) under album. If you copy files with a directory hierarchy in there then rclone will create albums with the / character in them. For example if you do

    rclone copy /path/to/images remote:album/images

and the images directory contains

    images
        - file1.jpg
        dir
            file2.jpg
        dir2
            dir3
                file3.jpg

Then rclone will create the following albums with the following files in

- images
    - file1.jpg
- images/dir
    - file2.jpg
- images/dir2/dir3
    - file3.jpg

      This means that you can use the album path pretty much like a normal filesystem and it is a good target for repeated syncing.
      The shared-album directory shows albums shared with you or by you. This is similar to the Sharing tab in the Google Photos web interface.
      ### Standard options
      Here are the Standard options specific to google photos (Google Photos).
      #### --gphotos-client-id
      OAuth Client Id.
      Leave blank normally.
      Properties:
- Config:      client_id
- Env Var:     RCLONE_GPHOTOS_CLIENT_ID
- Type:        string
- Required:    false
      #### --gphotos-client-secret
      OAuth Client Secret.
      Leave blank normally.
      Properties:
- Config:      client_secret
- Env Var:     RCLONE_GPHOTOS_CLIENT_SECRET
- Type:        string
- Required:    false
      #### --gphotos-read-only
      Set to make the Google Photos backend read only.
      If you choose read only then rclone will only request read only access to your photos, otherwise rclone will request full access.
      Properties:
- Config:      read_only
- Env Var:     RCLONE_GPHOTOS_READ_ONLY
- Type:        bool
- Default:     false
      ### Advanced options
      Here are the Advanced options specific to google photos (Google Photos).
      #### --gphotos-token
      OAuth Access Token as a JSON blob.
      Properties:
- Config:      token
- Env Var:     RCLONE_GPHOTOS_TOKEN
- Type:        string
- Required:    false
      #### --gphotos-auth-url
      Auth server URL.
      Leave blank to use the provider defaults.
      Properties:
- Config:      auth_url
- Env Var:     RCLONE_GPHOTOS_AUTH_URL
- Type:        string
- Required:    false
      #### --gphotos-token-url
Token server URL.
      Leave blank to use the provider defaults.
      Properties:
- Config:      token_url
- Env Var:     RCLONE_GPHOTOS_TOKEN_URL
- Type:        string
- Required:    false
      #### --gphotos-read-size
      Set to read the size of media items.
      Normally rclone does not read the size of media items since this takes another transaction. This isn't necessary for syncing. However rclone mount needs to know the size of files in advance of reading them, so setting this flag when using rclone mount is recommended if you want to read the media.
      Properties:
- Config:      read_size
- Env Var:     RCLONE_GPHOTOS_READ_SIZE
- Type:        bool
- Default:     false
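
For example, when mounting the backend so that applications can read the media, it might be run like this (a sketch):

    rclone mount remote: /mnt/photos --gphotos-read-size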
      #### --gphotos-start-year
      Year limits the photos to be downloaded to those which are uploaded after the given year.
      Properties:
- Config:      start_year
- Env Var:     RCLONE_GPHOTOS_START_YEAR
- Type:        int
- Default:     2000
      #### --gphotos-include-archived
      Also view and download archived media.
      By default, rclone does not request archived media. Thus, when syncing, archived media is not visible in directory listings or transferred.
      Note that media in albums is always visible and synced, no matter their archive status.
      With this flag, archived media are always visible in directory listings and transferred.
      Without this flag, archived media will not be visible in directory listings and won't be transferred.
      Properties:
- Config:      include_archived
- Env Var:     RCLONE_GPHOTOS_INCLUDE_ARCHIVED
- Type:        bool
- Default:     false
      #### --gphotos-encoding
      The encoding for the backend.
See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
      Properties:
- Config:      encoding
- Env Var:     RCLONE_GPHOTOS_ENCODING
- Type:        MultiEncoder
- Default:     Slash,CrLf,InvalidUtf8,Dot
      ## Limitations
Only images and videos can be uploaded. If you attempt to upload files that are not images or videos, or formats that Google Photos doesn't understand, rclone will upload the file, then Google Photos will give an error when it is turned into a media item.

Note that all media items uploaded to Google Photos through the API are stored in full resolution at "original quality" and will count towards your storage quota in your Google Account. The API does not offer a way to upload in "high quality" mode.
      rclone about is not supported by the Google Photos backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/).
      ### Downloading Images
      When Images are downloaded this strips EXIF location (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by bug #112096115.
The current google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on "Google Photos" as a backup of your photos. You will not be able to use rclone to redownload original images. You could use 'google takeout' to recover the original photos as a last resort.
      ### Downloading Videos
      When videos are downloaded they are downloaded in a really compressed version of the video compared to downloading it via the Google Photos web interface. This is covered by bug #113672044.
      ### Duplicates
      If a file name is duplicated in a directory then rclone will add the file ID into its name. So two files called file.jpg would then appear as file {123456}.jpg and file {ABCDEF}.jpg (the actual IDs are a lot longer alas!).
If you upload the same image (with the same binary data) twice then Google Photos will deduplicate it. However it will retain the filename from the first upload which may confuse rclone. For example if you uploaded an image to upload then uploaded the same image to album/my_album the filename of the image in album/my_album will be what it was uploaded with initially, not what you uploaded it with to album. In practice this shouldn't cause too many problems.
      ### Modified time
      The date shown of media in Google Photos is the creation date as determined by the EXIF information, or the upload date if that is not known.
      This is not changeable by rclone and is not the modification date of the media on local disk. This means that rclone cannot use the dates from Google Photos for syncing purposes.
      ### Size
      The Google Photos API does not return the size of media. This means that when syncing to Google Photos, rclone can only do a file existence check.
      It is possible to read the size of the media, but this needs an extra HTTP HEAD request per media item so is very slow and uses up a lot of transactions. This can be enabled with the --gphotos-read-size option or the read_size = true config parameter.
      If you want to use the backend with rclone mount you may need to enable this flag (depending on your OS and application using the photos) otherwise you may not be able to read media off the mount. You'll need to experiment to see if it works for you without the flag.
      ### Albums
      Rclone can only upload files to albums it created. This is a limitation of the Google Photos API.
      Rclone can remove files it uploaded from albums it created only.
      ### Deleting files
      Rclone can remove files from albums it created, but note that the Google Photos API does not allow media to be deleted permanently so this media will still remain. See bug #109759781.
      Rclone cannot delete files anywhere except under album.
      ### Deleting albums
      The Google Photos API does not support deleting albums - see bug #135714733.
      # Hasher
Hasher is a special overlay backend to create remotes which handle checksums for other remotes. Its main functions include:
- Emulate hash types unimplemented by backends
- Cache checksums to help with slow hashing of large local or (S)FTP files
- Warm up checksum cache from external SUM files
      ## Getting started
      To use Hasher, first set up the underlying remote following the configuration instructions for that remote. You can also use a local pathname instead of a remote. Check that your base remote is working.
Let's call the base remote myRemote:path here. Note that anything inside myRemote:path will be handled by hasher and anything outside won't. This means that if you are using a bucket based remote (S3, B2, Swift) then you should put the bucket in the remote, e.g. s3:bucket.
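
For example, a quick check that the base remote works might be (a sketch):

    rclone lsf myRemote:path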
      Now proceed to interactive or manual configuration.
      ### Interactive configuration
Run `rclone config`:

```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> Hasher1
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Handle checksums for other remotes
   \ "hasher"
[snip]
Storage> hasher
Remote to cache checksums for, like myremote:mypath.
Enter a string value. Press Enter for the default ("").
remote> myRemote:path
Comma separated list of supported checksum types.
Enter a string value. Press Enter for the default ("md5,sha1").
hashsums> md5
Maximum time to keep checksums in cache. 0 = no cache, off = cache forever.
max_age> off
Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
```
      -

      Google documents can also be exported as link files. These files will open a browser window for the Google Docs website of that document when opened. The link file extension has to be specified as a --drive-export-formats parameter. They will match all available Google Documents.

      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      ExtensionDescriptionOS Support
      desktopfreedesktop.org specified desktop entryLinux
      link.htmlAn HTML Document with a redirectAll
      urlINI style link filemacOS, Windows
      weblocmacOS specific XML formatmacOS
      -

      Standard options

      -

      Here are the Standard options specific to drive (Google Drive).

      -

      --drive-client-id

      -

      Google Application Client Id Setting your own is recommended. See https://rclone.org/drive/#making-your-own-client-id for how to create your own. If you leave this blank, it will use an internal key which is low performance.

      -

      Properties:

      -
        -
      • Config: client_id
      • -
      • Env Var: RCLONE_DRIVE_CLIENT_ID
      • -
      • Type: string
      • -
      • Required: false
      • -
      -

      --drive-client-secret

      -

      OAuth Client Secret.

      -

      Leave blank normally.

      -

      Properties:

      -
        -
      • Config: client_secret
      • -
      • Env Var: RCLONE_DRIVE_CLIENT_SECRET
      • -
      • Type: string
      • -
      • Required: false
      • -
      -

      --drive-scope

      -

      Scope that rclone should use when requesting access from drive.

      -

      Properties:

      -
        -
      • Config: scope
      • -
      • Env Var: RCLONE_DRIVE_SCOPE
      • -
      • Type: string
      • -
      • Required: false
      • -
      • Examples: -
          -
        • "drive" -
            -
          • Full access all files, excluding Application Data Folder.
          • -
        • -
        • "drive.readonly" -
            -
          • Read-only access to file metadata and file contents.
          • -
        • -
        • "drive.file" -
            -
          • Access to files created by rclone only.
          • -
          • These are visible in the drive website.
          • -
          • File authorization is revoked when the user deauthorizes the app.
          • -
        • -
        • "drive.appfolder" -
            -
          • Allows read and write access to the Application Data folder.
          • -
          • This is not visible in the drive website.
          • -
        • -
        • "drive.metadata.readonly" -
            -
          • Allows read-only access to file metadata but
          • -
          • does not allow any access to read or download file content.
          • -
        • -
      • -
      -

      --drive-service-account-file

      -

      Service Account Credentials JSON file path.

      -

      Leave blank normally. Needed only if you want use SA instead of interactive login.

      -

      Leading ~ will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}.

      -

      Properties:

      -
        -
      • Config: service_account_file
      • -
      • Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_FILE
      • -
      • Type: string
      • -
      • Required: false
      • -
      -

      --drive-alternate-export

      -

      Deprecated: No longer needed.

      -

      Properties:

      -
        -
      • Config: alternate_export
      • -
      • Env Var: RCLONE_DRIVE_ALTERNATE_EXPORT
      • -
      • Type: bool
      • -
      • Default: false
      • -
      -

      Advanced options

      -

      Here are the Advanced options specific to drive (Google Drive).

      -

      --drive-token

      -

      OAuth Access Token as a JSON blob.

      -

      Properties:

      -
        -
      • Config: token
      • -
      • Env Var: RCLONE_DRIVE_TOKEN
      • -
      • Type: string
      • -
      • Required: false
      • -
      -

      --drive-auth-url

      -

      Auth server URL.

      -

      Leave blank to use the provider defaults.

      -

      Properties:

      -
        -
      • Config: auth_url
      • -
      • Env Var: RCLONE_DRIVE_AUTH_URL
      • -
      • Type: string
      • -
      • Required: false
      • -
      -

      --drive-token-url

      -

      Token server url.

      -

      Leave blank to use the provider defaults.

      -

      Properties:

      -
        -
      • Config: token_url
      • -
      • Env Var: RCLONE_DRIVE_TOKEN_URL
      • -
      • Type: string
      • -
      • Required: false
      • -
      -

      --drive-root-folder-id

      -

      ID of the root folder. Leave blank normally.

      -

      Fill in to access "Computers" folders (see docs), or for rclone to use a non root folder as its starting point.

      -

      Properties:

      -
        -
      • Config: root_folder_id
      • -
      • Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID
      • -
      • Type: string
      • -
      • Required: false
      • -
      -

      --drive-service-account-credentials

      -

      Service Account Credentials JSON blob.

      -

      Leave blank normally. Needed only if you want use SA instead of interactive login.

      -

      Properties:

      -
        -
      • Config: service_account_credentials
      • -
      • Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_CREDENTIALS
      • -
      • Type: string
      • -
      • Required: false
      • -
      -

--drive-team-drive

ID of the Shared Drive (Team Drive).

Properties:

• Config: team_drive
• Env Var: RCLONE_DRIVE_TEAM_DRIVE
• Type: string
• Required: false

--drive-auth-owner-only

Only consider files owned by the authenticated user.

Properties:

• Config: auth_owner_only
• Env Var: RCLONE_DRIVE_AUTH_OWNER_ONLY
• Type: bool
• Default: false

--drive-use-trash

Send files to the trash instead of deleting permanently.

Defaults to true, namely sending files to the trash. Use --drive-use-trash=false to delete files permanently instead.

Properties:

• Config: use_trash
• Env Var: RCLONE_DRIVE_USE_TRASH
• Type: bool
• Default: true
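
For example, to delete permanently rather than trash (a sketch; the remote and path are placeholders):

    rclone delete gdrive:old-backups --drive-use-trash=false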

--drive-copy-shortcut-content

Server side copy contents of shortcuts instead of the shortcut.

When doing server side copies, normally rclone will copy shortcuts as shortcuts.

If this flag is used then rclone will copy the contents of shortcuts rather than shortcuts themselves when doing server side copies.

Properties:

• Config: copy_shortcut_content
• Env Var: RCLONE_DRIVE_COPY_SHORTCUT_CONTENT
• Type: bool
• Default: false

--drive-skip-gdocs

Skip google documents in all listings.

If given, gdocs practically become invisible to rclone.

Properties:

• Config: skip_gdocs
• Env Var: RCLONE_DRIVE_SKIP_GDOCS
• Type: bool
• Default: false

--drive-skip-checksum-gphotos

Skip MD5 checksum on Google photos and videos only.

Use this if you get checksum errors when transferring Google photos or videos.

Setting this flag will cause Google photos and videos to return a blank MD5 checksum.

Google photos are identified by being in the "photos" space.

Corrupted checksums are caused by Google modifying the image/video but not updating the checksum.

Properties:

• Config: skip_checksum_gphotos
• Env Var: RCLONE_DRIVE_SKIP_CHECKSUM_GPHOTOS
• Type: bool
• Default: false

--drive-shared-with-me

Only show files that are shared with me.

Instructs rclone to operate on your "Shared with me" folder (where Google Drive lets you access the files and folders others have shared with you).

This works both with the "list" (lsd, lsl, etc.) and the "copy" commands (copy, sync, etc.), and with all other commands too.

Properties:

• Config: shared_with_me
• Env Var: RCLONE_DRIVE_SHARED_WITH_ME
• Type: bool
• Default: false

--drive-trashed-only

Only show files that are in the trash.

This will show trashed files in their original directory structure.

Properties:

• Config: trashed_only
• Env Var: RCLONE_DRIVE_TRASHED_ONLY
• Type: bool
• Default: false

--drive-starred-only

Only show files that are starred.

Properties:

• Config: starred_only
• Env Var: RCLONE_DRIVE_STARRED_ONLY
• Type: bool
• Default: false

--drive-formats

Deprecated: See export_formats.

Properties:

• Config: formats
• Env Var: RCLONE_DRIVE_FORMATS
• Type: string
• Required: false

--drive-export-formats

Comma separated list of preferred formats for downloading Google docs.

Properties:

• Config: export_formats
• Env Var: RCLONE_DRIVE_EXPORT_FORMATS
• Type: string
• Default: "docx,xlsx,pptx,svg"
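
For example, to prefer OpenDocument formats when downloading Google docs (a sketch; the paths are placeholders, and odt, ods and odp are among the export formats Drive supports):

    rclone copy gdrive:documents /backup/documents --drive-export-formats odt,ods,odp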

--drive-import-formats

Comma separated list of preferred formats for uploading Google docs.

Properties:

• Config: import_formats
• Env Var: RCLONE_DRIVE_IMPORT_FORMATS
• Type: string
• Required: false

--drive-allow-import-name-change

Allow the filetype to change when uploading Google docs.

E.g. file.doc to file.docx. This will confuse sync and reupload every time.

Properties:

• Config: allow_import_name_change
• Env Var: RCLONE_DRIVE_ALLOW_IMPORT_NAME_CHANGE
• Type: bool
• Default: false

--drive-use-created-date

Use file created date instead of modified date.

Useful when downloading data and you want the creation date used in place of the last modified date.

WARNING: This flag may have some unexpected consequences.

When uploading to your drive all files will be overwritten unless they haven't been modified since their creation. And the inverse will occur while downloading. This side effect can be avoided by using the "--checksum" flag.

This feature was implemented to retain photos' capture date as recorded by google photos. You will first need to check the "Create a Google Photos folder" option in your google drive settings. You can then copy or move the photos locally and have the date the image was taken (created) set as the modification date.

Properties:

• Config: use_created_date
• Env Var: RCLONE_DRIVE_USE_CREATED_DATE
• Type: bool
• Default: false

--drive-use-shared-date

Use date file was shared instead of modified date.

Note that, as with "--drive-use-created-date", this flag may have unexpected consequences when uploading/downloading files.

If both this flag and "--drive-use-created-date" are set, the created date is used.

Properties:

• Config: use_shared_date
• Env Var: RCLONE_DRIVE_USE_SHARED_DATE
• Type: bool
• Default: false

--drive-list-chunk

Size of listing chunk 100-1000, 0 to disable.

Properties:

• Config: list_chunk
• Env Var: RCLONE_DRIVE_LIST_CHUNK
• Type: int
• Default: 1000

--drive-impersonate

Impersonate this user when using a service account.

Properties:

• Config: impersonate
• Env Var: RCLONE_DRIVE_IMPERSONATE
• Type: string
• Required: false
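
A sketch of combining impersonation with a service account (this assumes the service account has been granted domain-wide delegation; the user, remote and file names are placeholders):

    rclone lsd gdrive: --drive-service-account-file sa.json --drive-impersonate user@example.com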

--drive-upload-cutoff

Cutoff for switching to chunked upload.

Properties:

• Config: upload_cutoff
• Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF
• Type: SizeSuffix
• Default: 8Mi

--drive-chunk-size

Upload chunk size.

Must be a power of 2 >= 256k.

Making this larger will improve performance, but note that each chunk is buffered in memory, one per transfer.

Reducing this will reduce memory usage but decrease performance.

Properties:

• Config: chunk_size
• Env Var: RCLONE_DRIVE_CHUNK_SIZE
• Type: SizeSuffix
• Default: 8Mi
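
For example, raising the chunk size to speed up large uploads (a sketch; since each chunk is buffered in memory, one per transfer, this uses roughly 4 x 64 MiB of buffer):

    rclone copy /local/big-files gdrive:backup --drive-chunk-size 64M --transfers 4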

--drive-acknowledge-abuse

Set to allow files which return cannotDownloadAbusiveFile to be downloaded.

If downloading a file returns the error "This file has been identified as malware or spam and cannot be downloaded" with the error code "cannotDownloadAbusiveFile" then supply this flag to rclone to indicate you acknowledge the risks of downloading the file and rclone will download it anyway.

Note that if you are using a service account it will need Manager permission (not Content Manager) for this flag to work. If the SA does not have the right permission, Google will just ignore the flag.

Properties:

• Config: acknowledge_abuse
• Env Var: RCLONE_DRIVE_ACKNOWLEDGE_ABUSE
• Type: bool
• Default: false

--drive-keep-revision-forever

Keep new head revision of each file forever.

Properties:

• Config: keep_revision_forever
• Env Var: RCLONE_DRIVE_KEEP_REVISION_FOREVER
• Type: bool
• Default: false

--drive-size-as-quota

Show sizes as storage quota usage, not actual size.

Show the size of a file as the storage quota used. This is the current version plus any older versions that have been set to keep forever.

WARNING: This flag may have some unexpected consequences.

It is not recommended to set this flag in your config - the recommended usage is using the flag form --drive-size-as-quota when doing rclone ls/lsl/lsf/lsjson/etc only.

If you do use this flag for syncing (not recommended) then you will need to use --ignore-size also.

Properties:

• Config: size_as_quota
• Env Var: RCLONE_DRIVE_SIZE_AS_QUOTA
• Type: bool
• Default: false
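
For example, the recommended one-off flag form for listings (the remote and path are placeholders):

    rclone lsl gdrive:projects --drive-size-as-quota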

--drive-v2-download-min-size

If Objects are greater, use drive v2 API to download.

Properties:

• Config: v2_download_min_size
• Env Var: RCLONE_DRIVE_V2_DOWNLOAD_MIN_SIZE
• Type: SizeSuffix
• Default: off

--drive-pacer-min-sleep

Minimum time to sleep between API calls.

Properties:

• Config: pacer_min_sleep
• Env Var: RCLONE_DRIVE_PACER_MIN_SLEEP
• Type: Duration
• Default: 100ms

--drive-pacer-burst

Number of API calls to allow without sleeping.

Properties:

• Config: pacer_burst
• Env Var: RCLONE_DRIVE_PACER_BURST
• Type: int
• Default: 100

--drive-server-side-across-configs

Deprecated: use --server-side-across-configs instead.

Allow server-side operations (e.g. copy) to work across different drive configs.

This can be useful if you wish to do a server-side copy between two different Google drives. Note that this isn't enabled by default because it isn't easy to tell if it will work between any two configurations.

Properties:

• Config: server_side_across_configs
• Env Var: RCLONE_DRIVE_SERVER_SIDE_ACROSS_CONFIGS
• Type: bool
• Default: false

--drive-disable-http2

Disable drive using http2.

There is currently an unsolved issue with the google drive backend and HTTP/2. HTTP/2 is therefore disabled by default for the drive backend but can be re-enabled here. When the issue is solved this flag will be removed.

See: https://github.com/rclone/rclone/issues/3631

Properties:

• Config: disable_http2
• Env Var: RCLONE_DRIVE_DISABLE_HTTP2
• Type: bool
• Default: true

--drive-stop-on-upload-limit

Make upload limit errors be fatal.

At the time of writing it is only possible to upload 750 GiB of data to Google Drive a day (this is an undocumented limit). When this limit is reached Google Drive produces a slightly different error message. When this flag is set it causes these errors to be fatal. These will stop the in-progress sync.

Note that this detection is relying on error message strings which Google don't document so it may break in the future.

See: https://github.com/rclone/rclone/issues/3857

Properties:

• Config: stop_on_upload_limit
• Env Var: RCLONE_DRIVE_STOP_ON_UPLOAD_LIMIT
• Type: bool
• Default: false

--drive-stop-on-download-limit

Make download limit errors be fatal.

At the time of writing it is only possible to download 10 TiB of data from Google Drive a day (this is an undocumented limit). When this limit is reached Google Drive produces a slightly different error message. When this flag is set it causes these errors to be fatal. These will stop the in-progress sync.

Note that this detection is relying on error message strings which Google don't document so it may break in the future.

Properties:

• Config: stop_on_download_limit
• Env Var: RCLONE_DRIVE_STOP_ON_DOWNLOAD_LIMIT
• Type: bool
• Default: false

--drive-skip-shortcuts

If set skip shortcut files.

Normally rclone dereferences shortcut files making them appear as if they are the original file (see the shortcuts section). If this flag is set then rclone will ignore shortcut files completely.

Properties:

• Config: skip_shortcuts
• Env Var: RCLONE_DRIVE_SKIP_SHORTCUTS
• Type: bool
• Default: false

--drive-skip-dangling-shortcuts

If set skip dangling shortcut files.

If this is set then rclone will not show any dangling shortcuts in listings.

Properties:

• Config: skip_dangling_shortcuts
• Env Var: RCLONE_DRIVE_SKIP_DANGLING_SHORTCUTS
• Type: bool
• Default: false

--drive-resource-key

Resource key for accessing a link-shared file.

If you need to access files shared with a link like this

    https://drive.google.com/drive/folders/XXX?resourcekey=YYY&usp=sharing

then you will need to use the first part "XXX" as the "root_folder_id" and the second part "YYY" as the "resource_key" otherwise you will get 404 not found errors when trying to access the directory.

See: https://developers.google.com/drive/api/guides/resource-keys

This resource key requirement only applies to a subset of old files.

Note also that opening the folder once in the web interface (with the user you've authenticated rclone with) seems to be enough so that the resource key is not needed.

Properties:

• Config: resource_key
• Env Var: RCLONE_DRIVE_RESOURCE_KEY
• Type: string
• Required: false
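
Concretely, for the link above this maps onto a remote's config like so (the remote name is a placeholder; XXX and YYY stand for the values taken from the link):

    [gdrive-shared]
    type = drive
    root_folder_id = XXX
    resource_key = YYY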

--drive-encoding

The encoding for the backend.

See the encoding section in the overview for more info.

Properties:

• Config: encoding
• Env Var: RCLONE_DRIVE_ENCODING
• Type: MultiEncoder
• Default: InvalidUtf8

--drive-env-auth

Get IAM credentials from runtime (environment variables or instance meta data if no env vars).

Only applies if service_account_file and service_account_credentials is blank.

Properties:

• Config: env_auth
• Env Var: RCLONE_DRIVE_ENV_AUTH
• Type: bool
• Default: false
• Examples:
  • "false"
    • Enter credentials in the next step.
  • "true"
    • Get GCP IAM credentials from the environment (env vars or IAM).

Backend commands

Here are the commands specific to the drive backend.

Run them with

    rclone backend COMMAND remote:

The help below will explain what arguments each command takes.

See the backend command for more info on how to pass options and arguments.

These can be run on a running backend using the rc command backend/command.

get

Get command for fetching the drive config parameters

    rclone backend get remote: [options] [<arguments>+]

This is a get command which will be used to fetch the various drive config parameters

Usage Examples:

    rclone backend get drive: [-o service_account_file] [-o chunk_size]
    rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o chunk_size]

Options:

• "chunk_size": show the current upload chunk size
• "service_account_file": show the current service account file

set

Set command for updating the drive config parameters

    rclone backend set remote: [options] [<arguments>+]

This is a set command which will be used to update the various drive config parameters

Usage Examples:

    rclone backend set drive: [-o service_account_file=sa.json] [-o chunk_size=67108864]
    rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json] [-o chunk_size=67108864]

Options:

• "chunk_size": update the current upload chunk size
• "service_account_file": update the current service account file

shortcut

Create shortcuts from files or directories

    rclone backend shortcut remote: [options] [<arguments>+]

This command creates shortcuts from files or directories.

Usage:

    rclone backend shortcut drive: source_item destination_shortcut
    rclone backend shortcut drive: source_item -o target=drive2: destination_shortcut

In the first example this creates a shortcut from the "source_item" which can be a file or a directory to the "destination_shortcut". The "source_item" and the "destination_shortcut" should be relative paths from "drive:"

In the second example this creates a shortcut from the "source_item" relative to "drive:" to the "destination_shortcut" relative to "drive2:". This may fail with a permission error if the user authenticated with "drive2:" can't read files from "drive:".

Options:

• "target": optional target remote for the shortcut destination

drives

List the Shared Drives available to this account

    rclone backend drives remote: [options] [<arguments>+]

This command lists the Shared Drives (Team Drives) available to this account.

Usage:

    rclone backend [-o config] drives drive:

This will return a JSON list of objects like this

    [
        {
            "id": "0ABCDEF-01234567890",
            "kind": "drive#teamDrive",
            "name": "My Drive"
        },
        {
            "id": "0ABCDEFabcdefghijkl",
            "kind": "drive#teamDrive",
            "name": "Test Drive"
        }
    ]

With the -o config parameter it will output the list in a format suitable for adding to a config file to make aliases for all the drives found and a combined drive.

    [My Drive]
    type = alias
    remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:

    [Test Drive]
    type = alias
    remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:

    [AllDrives]
    type = combine
    upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"

Adding this to the rclone config file will cause those team drives to be accessible with the aliases shown. Any illegal characters will be substituted with "_" and duplicate names will have numbers suffixed. It will also add a remote called AllDrives which shows all the shared drives combined into one directory tree.

untrash

Untrash files and directories

    rclone backend untrash remote: [options] [<arguments>+]

This command untrashes all the files and directories in the directory passed in recursively.

Usage:

This takes an optional directory to untrash, which makes this easier to use via the API.

    rclone backend untrash drive:directory
    rclone backend --interactive untrash drive:directory subdir

Use the --interactive/-i or --dry-run flag to see what would be restored before restoring it.

Result:

    {
        "Untrashed": 17,
        "Errors": 0
    }

copyid

Copy files by ID

    rclone backend copyid remote: [options] [<arguments>+]

This command copies files by ID

Usage:

    rclone backend copyid drive: ID path
    rclone backend copyid drive: ID1 path1 ID2 path2

It copies the drive file with ID given to the path (an rclone path which will be passed internally to rclone copyto). The ID and path pairs can be repeated.

The path should end with a / to indicate copy the file as named to this directory. If it doesn't end with a / then the last path component will be used as the file name.

If the destination is a drive backend then server-side copying will be attempted if possible.

Use the --interactive/-i or --dry-run flag to see what would be copied before copying.

exportformats

Dump the export formats for debug purposes

    rclone backend exportformats remote: [options] [<arguments>+]

importformats

Dump the import formats for debug purposes

    rclone backend importformats remote: [options] [<arguments>+]

Limitations

Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MiB/s but lots of small files can take a long time.

Server side copies are also subject to a separate rate limit. If you see User rate limit exceeded errors, wait at least 24 hours and retry. You can disable server-side copies with --disable copy to download and upload the files if you prefer.

Limitations of Google Docs

Google docs will appear as size -1 in rclone ls, rclone ncdu etc, and as size 0 in anything which uses the VFS layer, e.g. rclone mount and rclone serve. When calculating directory totals, e.g. in rclone size and rclone ncdu, they will be counted in as empty files.

This is because rclone can't find out the size of the Google docs without downloading them.

Google docs will transfer correctly with rclone sync, rclone copy etc as rclone knows to ignore the size when doing the transfer.

However an unfortunate consequence of this is that you may not be able to download Google docs using rclone mount. If it doesn't work you will get a 0 sized file. If you try again the doc may gain its correct size and be downloadable. Whether it will work or not depends on the application accessing the mount and the OS you are running - experiment to find out if it does work for you!

Duplicated files

Sometimes, for no reason I've been able to track down, drive will duplicate a file that rclone uploads. Drive, unlike all the other remotes, can have duplicated files.

Duplicated files cause problems with syncing and you will see messages in the log about duplicates.

Use rclone dedupe to fix duplicated files.

Note that this isn't just a problem with rclone, even Google Photos on Android duplicates files on drive sometimes.

Rclone appears to be re-copying files it shouldn't

The most likely cause of this is the duplicated file issue above - run rclone dedupe and check your logs for duplicate object or directory messages.

This can also be caused by a delay/caching on google drive's end when comparing directory listings. Specifically with team drives used in combination with --fast-list. Files that were uploaded recently may not appear on the directory list sent to rclone when using --fast-list.

Waiting a moderate period of time between attempts (estimated to be approximately 1 hour) and/or not using --fast-list both seem to be effective in preventing the problem.

Making your own client_id

When you use rclone with Google drive in its default configuration you are using rclone's client_id. This is shared between all the rclone users. There is a global rate limit on the number of queries per second that each client_id can do set by Google. rclone already has a high quota and I will continue to make sure it is high enough by contacting Google.

It is strongly recommended to use your own client ID as the default rclone ID is heavily used. If you have multiple services running, it is recommended to use an API key for each service. The default Google quota is 10 transactions per second so it is recommended to stay under that number as if you use more than that, it will cause rclone to rate limit and make things slower.

Here is how to create your own Google Drive client ID for rclone:

1. Log into the Google API Console with your Google account. It doesn't matter what Google account you use. (It need not be the same account as the Google Drive you want to access.)

2. Select a project or create a new project.

3. Under "ENABLE APIS AND SERVICES" search for "Drive", and enable the "Google Drive API".

4. Click "Credentials" in the left-side panel (not "Create credentials", which opens the wizard), then "Create credentials".

5. If you already configured an "Oauth Consent Screen", then skip to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button (near the top right corner of the right panel), then select "External" and click on "CREATE"; on the next screen, enter an "Application name" ("rclone" is OK); enter "User Support Email" (your own email is OK); enter "Developer Contact Email" (your own email is OK); then click on "Save" (all other data is optional). You will also have to add some scopes, including .../auth/docs and .../auth/drive in order to be able to edit, create and delete files with RClone. You may also want to include the ../auth/drive.metadata.readonly scope. After adding scopes, click "Save and continue" to add test users. Be sure to add your own account to the test users. Once you've added yourself as a test user and saved the changes, click again on "Credentials" on the left panel to go back to the "Credentials" screen.

   (PS: if you are a GSuite user, you could also select "Internal" instead of "External" above, but this will restrict API use to Google Workspace users in your organisation.)

6. Click on the "+ CREATE CREDENTIALS" button at the top of the screen, then select "OAuth client ID".

7. Choose an application type of "Desktop app" and click "Create". (The default name is fine.)

8. It will show you a client ID and client secret. Make a note of these.

   (If you selected "External" at Step 5 continue to Step 9. If you chose "Internal" you don't need to publish and can skip straight to Step 10 but your destination drive must be part of the same Google Workspace.)

9. Go to "Oauth consent screen" and then click "PUBLISH APP" button and confirm. You will also want to add yourself as a test user.

10. Provide the noted client ID and client secret to rclone.
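
After feeding these to rclone config, the resulting remote section ends up looking roughly like this (a sketch; the remote name and values are placeholders):

    [gdrive]
    type = drive
    client_id = 123456789.apps.googleusercontent.com
    client_secret = your-client-secret
    scope = drive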

Be aware that, due to the "enhanced security" recently introduced by Google, you are theoretically expected to "submit your app for verification" and then wait a few weeks(!) for their response; in practice, you can go right ahead and use the client ID and client secret with rclone, the only issue will be a very scary confirmation screen shown when you connect via your browser for rclone to be able to get its token-id (but as this only happens during the remote configuration, it's not such a big deal). Keeping the application in "Testing" will work as well, but the limitation is that any grants will expire after a week, which can be annoying to refresh constantly. If, for whatever reason, a short grant time is not a problem, then keeping the application in testing mode would also be sufficient.

(Thanks to @balazer on github for these instructions.)

Sometimes, creation of an OAuth consent in Google API Console fails due to an error message "The request failed because changes to one of the field of the resource is not supported". As a convenient workaround, the necessary Google Drive API key can be created on the Python Quickstart page. Just push the Enable the Drive API button to receive the Client ID and Secret. Note that it will automatically create a new project in the API Console.

Google Photos

The rclone backend for Google Photos is a specialized backend for transferring photos and videos to and from Google Photos.

NB The Google Photos API which rclone uses has quite a few limitations, so please read the limitations section carefully to make sure it is suitable for your use.

Configuration

The initial setup for Google Photos involves getting a token from Google Photos which you need to do in your browser. rclone config walks you through it.

Here is an example of how to make a remote called remote. First run:

    rclone config

This will guide you through an interactive setup process:

    No remotes found, make a new one?
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
    name> remote
    Type of storage to configure.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
    [snip]
    XX / Google Photos
       \ "google photos"
    [snip]
    Storage> google photos
    ** See help for google photos backend at: https://rclone.org/googlephotos/ **

    Google Application Client Id
    Leave blank normally.
    Enter a string value. Press Enter for the default ("").
    client_id>
    Google Application Client Secret
    Leave blank normally.
    Enter a string value. Press Enter for the default ("").
    client_secret>
    Set to make the Google Photos backend read only.

    If you choose read only then rclone will only request read only access
    to your photos, otherwise rclone will request full access.
    Enter a boolean value (true or false). Press Enter for the default ("false").
    read_only>
    Edit advanced config? (y/n)
    y) Yes
    n) No
    y/n> n
    Remote config
    Use web browser to automatically authenticate rclone with remote?
     * Say Y if the machine running rclone has a web browser you can use
     * Say N if running rclone on a (remote) machine without web browser access
    If not sure try Y. If Y failed, try N.
    y) Yes
    n) No
    y/n> y
    If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    Log in and authorize rclone for access
    Waiting for code...
    Got code

    *** IMPORTANT: All media items uploaded to Google Photos with rclone
    *** are stored in full resolution at original quality.  These uploads
    *** will count towards storage in your Google Account.

    --------------------
    [remote]
    type = google photos
    token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2019-06-28T17:38:04.644930156+01:00"}
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y

See the remote setup docs for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from Google if using web browser to automatically authenticate. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this may require you to unblock it temporarily if you are running a host firewall, or use manual mode.

This remote is called remote and can now be used like this

See all the albums in your photos

    rclone lsd remote:album

Make a new album

    rclone mkdir remote:album/newAlbum

List the contents of an album

    rclone ls remote:album/newAlbum

Sync /home/local/images to the Google Photos, removing any excess files in the album.

    rclone sync --interactive /home/local/images remote:album/newAlbum

Layout

As Google Photos is not a general purpose cloud storage system, the backend is laid out to help you navigate it.

The directories under media show different ways of categorizing the media. Each file will appear multiple times. So if you want to make a backup of your google photos you might choose to backup remote:media/by-month. (NB remote:media/by-day is rather slow at the moment so avoid for syncing.)

Note that all your photos and videos will appear somewhere under media, but they may not appear under album unless you've put them into albums.

    /
    - upload
        - file1.jpg
        - file2.jpg
        - ...
    - media
        - all
            - file1.jpg
            - file2.jpg
            - ...
        - by-year
            - 2000
                - file1.jpg
                - ...
            - 2001
                - file2.jpg
                - ...
            - ...
        - by-month
            - 2000
                - 2000-01
                    - file1.jpg
                    - ...
                - 2000-02
                    - file2.jpg
                    - ...
            - ...
        - by-day
            - 2000
                - 2000-01-01
                    - file1.jpg
                    - ...
                - 2000-01-02
                    - file2.jpg
                    - ...
            - ...
    - album
        - album name
        - album name/sub
    - shared-album
        - album name
        - album name/sub
    - feature
        - favorites
            - file1.jpg
            - file2.jpg

There are two writable parts of the tree, the upload directory and sub directories of the album directory.

The upload directory is for uploading files you don't want to put into albums. This will be empty to start with and will contain the files you've uploaded for one rclone session only, becoming empty again when you restart rclone. The use case for this would be if you have a load of files you just want to once off dump into Google Photos. For repeated syncing, uploading to album will work better.

Directories within the album directory are also writeable and you may create new directories (albums) under album. If you copy files with a directory hierarchy in there then rclone will create albums with the / character in them. For example if you do

    rclone copy /path/to/images remote:album/images

and the images directory contains

    images
        - file1.jpg
        dir
            file2.jpg
        dir2
            dir3
                file3.jpg

Then rclone will create the following albums with the following files in

• images
  • file1.jpg
• images/dir
  • file2.jpg
• images/dir2/dir3
  • file3.jpg

This means that you can use the album path pretty much like a normal filesystem and it is a good target for repeated syncing.

The shared-album directory shows albums shared with you or by you. This is similar to the Sharing tab in the Google Photos web interface.

Standard options

Here are the Standard options specific to google photos (Google Photos).

--gphotos-client-id

OAuth Client Id.

Leave blank normally.

Properties:

• Config: client_id
• Env Var: RCLONE_GPHOTOS_CLIENT_ID
• Type: string
• Required: false

--gphotos-client-secret

OAuth Client Secret.

Leave blank normally.

Properties:

• Config: client_secret
• Env Var: RCLONE_GPHOTOS_CLIENT_SECRET
• Type: string
• Required: false

--gphotos-read-only

Set to make the Google Photos backend read only.

If you choose read only then rclone will only request read only access to your photos, otherwise rclone will request full access.

Properties:

• Config: read_only
• Env Var: RCLONE_GPHOTOS_READ_ONLY
• Type: bool
• Default: false

Advanced options

Here are the Advanced options specific to google photos (Google Photos).

--gphotos-token

OAuth Access Token as a JSON blob.

Properties:

• Config: token
• Env Var: RCLONE_GPHOTOS_TOKEN
• Type: string
• Required: false

--gphotos-auth-url

Auth server URL.

Leave blank to use the provider defaults.

Properties:

• Config: auth_url
• Env Var: RCLONE_GPHOTOS_AUTH_URL
• Type: string
• Required: false

--gphotos-token-url

Token server URL.

Leave blank to use the provider defaults.

Properties:

• Config: token_url
• Env Var: RCLONE_GPHOTOS_TOKEN_URL
• Type: string
• Required: false

--gphotos-read-size

Set to read the size of media items.

Normally rclone does not read the size of media items since this takes another transaction. This isn't necessary for syncing. However rclone mount needs to know the size of files in advance of reading them, so setting this flag when using rclone mount is recommended if you want to read the media.

Properties:

• Config: read_size
• Env Var: RCLONE_GPHOTOS_READ_SIZE
• Type: bool
• Default: false
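
For example, a mount that can read the media (a sketch; the mount point is a placeholder):

    rclone mount remote:media/by-month /mnt/photos --gphotos-read-size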

--gphotos-start-year

Year limits the photos to be downloaded to those which are uploaded after the given year.

Properties:

• Config: start_year
• Env Var: RCLONE_GPHOTOS_START_YEAR
• Type: int
• Default: 2000

--gphotos-include-archived

Also view and download archived media.

By default, rclone does not request archived media. Thus, when syncing, archived media is not visible in directory listings or transferred.

Note that media in albums is always visible and synced, no matter their archive status.

With this flag, archived media are always visible in directory listings and transferred.

Without this flag, archived media will not be visible in directory listings and won't be transferred.

Properties:

• Config: include_archived
• Env Var: RCLONE_GPHOTOS_INCLUDE_ARCHIVED
• Type: bool
• Default: false

--gphotos-encoding

The encoding for the backend.

See the encoding section in the overview for more info.

Properties:

• Config: encoding
• Env Var: RCLONE_GPHOTOS_ENCODING
• Type: MultiEncoder
• Default: Slash,CrLf,InvalidUtf8,Dot

Limitations

Only images and videos can be uploaded. If you attempt to upload non videos or images or formats that Google Photos doesn't understand, rclone will upload the file, then Google Photos will give an error when it is turned into a media item.

Note that all media items uploaded to Google Photos through the API are stored in full resolution at "original quality" and will count towards your storage quota in your Google Account. The API does not offer a way to upload in "high quality" mode.

rclone about is not supported by the Google Photos backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

See List of backends that do not support rclone about. See rclone about.

Downloading Images

When Images are downloaded this strips EXIF location (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by bug #112096115.

The current google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on "Google Photos" as a backup of your photos. You will not be able to use rclone to redownload original images. You could use 'google takeout' to recover the original photos as a last resort.

Downloading Videos

When videos are downloaded they are downloaded in a really compressed version of the video compared to downloading it via the Google Photos web interface. This is covered by bug #113672044.

Duplicates

If a file name is duplicated in a directory then rclone will add the file ID into its name. So two files called file.jpg would then appear as file {123456}.jpg and file {ABCDEF}.jpg (the actual IDs are a lot longer alas!).

If you upload the same image (with the same binary data) twice then Google Photos will deduplicate it. However it will retain the filename from the first upload which may confuse rclone. For example if you uploaded an image to upload then uploaded the same image to album/my_album the filename of the image in album/my_album will be what it was uploaded with initially, not what you uploaded it with to album. In practice this shouldn't cause too many problems.

Modified time

The date shown of media in Google Photos is the creation date as determined by the EXIF information, or the upload date if that is not known.

This is not changeable by rclone and is not the modification date of the media on local disk. This means that rclone cannot use the dates from Google Photos for syncing purposes.

Size

The Google Photos API does not return the size of media. This means that when syncing to Google Photos, rclone can only do a file existence check.

It is possible to read the size of the media, but this needs an extra HTTP HEAD request per media item so is very slow and uses up a lot of transactions. This can be enabled with the --gphotos-read-size option or the read_size = true config parameter.

If you want to use the backend with rclone mount you may need to enable this flag (depending on your OS and application using the photos) otherwise you may not be able to read media off the mount. You'll need to experiment to see if it works for you without the flag.

Albums

Rclone can only upload files to albums it created. This is a limitation of the Google Photos API.

Rclone can remove files it uploaded from albums it created only.

Deleting files

Rclone can remove files from albums it created, but note that the Google Photos API does not allow media to be deleted permanently so this media will still remain. See bug #109759781.

Rclone cannot delete files anywhere except under album.

Deleting albums

The Google Photos API does not support deleting albums - see bug #135714733.

Hasher

Hasher is a special overlay backend to create remotes which handle checksums for other remotes. Its main functions include:
- Emulate hash types unimplemented by backends
- Cache checksums to help with slow hashing of large local or (S)FTP files
- Warm up checksum cache from external SUM files

Getting started

To use Hasher, first set up the underlying remote following the configuration instructions for that remote. You can also use a local pathname instead of a remote. Check that your base remote is working.

Let's call the base remote myRemote:path here. Note that anything inside myRemote:path will be handled by hasher and anything outside won't. This means that if you are using a bucket based remote (S3, B2, Swift) then you should put the bucket in the remote s3:bucket.

Now proceed to interactive or manual configuration.

Interactive configuration

Run rclone config:

    No remotes found, make a new one?
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
    name> Hasher1
    Type of storage to configure.
    Choose a number from below, or type in your own value
    [snip]
    XX / Handle checksums for other remotes
       \ "hasher"
    [snip]
    Storage> hasher
    Remote to cache checksums for, like myremote:mypath.
    Enter a string value. Press Enter for the default ("").
    remote> myRemote:path
    Comma separated list of supported checksum types.
    Enter a string value. Press Enter for the default ("md5,sha1").
    hashsums> md5
    Maximum time to keep checksums in cache. 0 = no cache, off = cache forever.
    max_age> off
    Edit advanced config? (y/n)
    y) Yes
    n) No
    y/n> n
    Remote config
    --------------------
    [Hasher1]
    type = hasher
    remote = myRemote:path
    hashsums = md5
    max_age = off
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y

Manual configuration

Run rclone config path to see the path of current active config file, usually YOURHOME/.config/rclone/rclone.conf. Open it in your favorite text editor, find section for the base remote and create new section for hasher like in the following examples:

    [Hasher1]
    type = hasher
    remote = myRemote:path
    hashes = md5
    max_age = off

    [Hasher2]
    type = hasher
    remote = /local/path
    hashes = dropbox,sha1
    max_age = 24h

Hasher takes basically the following parameters:
- remote is required,
- hashes is a comma separated list of supported checksums (by default md5,sha1),
- max_age - maximum time to keep a checksum value in the cache, 0 will disable caching completely, off will cache "forever" (that is until the files get changed).

Make sure the remote has : (colon) in. If you specify the remote without a colon then rclone will use a local directory of that name. So if you use a remote of /local/path then rclone will handle hashes for that directory. If you use remote = name literally then rclone will put files in a directory called name located under current directory.

Usage

Basic operations

Now you can use it as Hasher2:subdir/file instead of base remote. Hasher will transparently update cache with new checksums when a file is fully read or overwritten, like:

    rclone copy External:path/file Hasher:dest/path
    rclone cat Hasher:path/to/file > /dev/null

The way to refresh all cached checksums (even unsupported by the base backend) for a subtree is to re-download all files in the subtree. For example, use hashsum --download using any supported hashsum on the command line (we just care to re-read):

    rclone hashsum MD5 --download Hasher:path/to/subtree > /dev/null
    rclone backend dump Hasher:path/to/subtree

You can print or drop hashsum cache using custom backend commands:

    rclone backend dump Hasher:dir/subdir
    rclone backend drop Hasher:

Pre-Seed from a SUM File

Hasher supports two backend commands: generic SUM file import and faster but less consistent stickyimport.

    rclone backend import Hasher:dir/subdir SHA1 /path/to/SHA1SUM [--checkers 4]

Instead of SHA1 it can be any hash supported by the remote. The last argument can point to either a local or an other-remote:path text file in SUM format. The command will parse the SUM file, then walk down the path given by the first argument, snapshot current fingerprints and fill in the cache entries correspondingly.
- Paths in the SUM file are treated as relative to hasher:dir/subdir.
- The command will not check that supplied values are correct. You must know what you are doing.
- This is a one-time action. The SUM file will not get "attached" to the remote. Cache entries can still be overwritten later, should the object's fingerprint change.
- The tree walk can take long depending on the tree size. You can increase --checkers to make it faster. Or use stickyimport if you don't care about fingerprints and consistency.

    rclone backend stickyimport hasher:path/to/data sha1 remote:/path/to/sum.sha1

stickyimport is similar to import but works much faster because it does not need to stat existing files and skips initial tree walk. Instead of binding cache entries to file fingerprints it creates sticky entries bound to the file name alone ignoring size, modification time etc. Such hash entries can be replaced only by purge, delete, backend drop or by full re-read/re-write of the files.

## Configuration reference

### Standard options

Here are the Standard options specific to hasher (Better checksums for other remotes).

#### --hasher-remote

Remote to cache checksums for (e.g. myRemote:path).

Properties:

- Config:      remote
- Env Var:     RCLONE_HASHER_REMOTE
- Type:        string
- Required:    true

#### --hasher-hashes

Comma separated list of supported checksum types.

Properties:

- Config:      hashes
- Env Var:     RCLONE_HASHER_HASHES
- Type:        CommaSepList
- Default:     md5,sha1

#### --hasher-max-age

Maximum time to keep checksums in cache (0 = no cache, off = cache forever).

Properties:

- Config:      max_age
- Env Var:     RCLONE_HASHER_MAX_AGE
- Type:        Duration
- Default:     off

### Advanced options

Here are the Advanced options specific to hasher (Better checksums for other remotes).

#### --hasher-auto-size

Auto-update checksum for files smaller than this size (disabled by default).

Properties:

- Config:      auto_size
- Env Var:     RCLONE_HASHER_AUTO_SIZE
- Type:        SizeSuffix
- Default:     0

### Metadata

Any metadata supported by the underlying remote is read and written.

See the [metadata](https://rclone.org/docs/#metadata) docs for more info.

## Backend commands

Here are the commands specific to the hasher backend.

Run them with

    rclone backend COMMAND remote:

The help below will explain what arguments each command takes.

See the [backend](https://rclone.org/commands/rclone_backend/) command for more
info on how to pass options and arguments.

These can be run on a running backend using the rc command
[backend/command](https://rclone.org/rc/#backend-command).

### drop

Drop cache

    rclone backend drop remote: [options] [<arguments>+]

Completely drop checksum cache.
Usage Example:

    rclone backend drop hasher:

### dump

Dump the database

    rclone backend dump remote: [options] [<arguments>+]

Dump cache records covered by the current remote

### fulldump

Full dump of the database

    rclone backend fulldump remote: [options] [<arguments>+]

Dump all cache records in the database

### import

Import a SUM file

    rclone backend import remote: [options] [<arguments>+]

Amend hash cache from a SUM file and bind checksums to files by size/time.
Usage Example:

    rclone backend import hasher:subdir md5 /path/to/sum.md5

### stickyimport

Perform fast import of a SUM file

    rclone backend stickyimport remote: [options] [<arguments>+]

Fill hash cache from a SUM file without verifying file fingerprints.
Usage Example:

    rclone backend stickyimport hasher:subdir md5 remote:path/to/sum.md5

## Implementation details (advanced)

This section explains how various rclone operations work on a hasher remote.

**Disclaimer. This section describes the current implementation, which can
change in future rclone versions!**

### Hashsum command

The `rclone hashsum` (or `md5sum` or `sha1sum`) command will:

1. if the requested hash is supported by the lower level, just pass it.
2. if the object size is below `auto_size` then download the object and
   calculate _requested_ hashes on the fly.
3. if unsupported and the size is big enough, build an object `fingerprint`
   (including size, modtime if supported, first-found _other_ hash if any).
4. if a strict match is found in the cache for the requested remote, return
   the stored hash.
5. if the remote is found but the fingerprint mismatched, then purge the entry
   and proceed to step 6.
6. if the remote is not found or had no requested hash type, or after step 5:
   download the object, calculate all _supported_ hashes on the fly, store
   them in the cache, and return the requested hash.
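For example (an illustrative invocation, using the standard flag form of the
`auto_size` option documented above), you can raise the auto-update threshold
so that small files are downloaded and hashed on the fly as in step 2:

    rclone hashsum SHA1 --hasher-auto-size 100M Hasher:path/to/dir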
### Other operations

- whenever a file is uploaded or downloaded **in full**, capture the stream
  to calculate all supported hashes on the fly and update database
- server-side `move` will update keys of existing cache entries
- `deletefile` will remove a single cache entry
- `purge` will remove all cache entries under the purged path

Note that setting `max_age = 0` will disable checksum caching completely.

If you set `max_age = off`, checksums in cache will never age, unless you
fully rewrite or delete the file.

### Cache storage

Cached checksums are stored as `bolt` database files under rclone cache
directory, usually `~/.cache/rclone/kv/`. Databases are maintained
one per _base_ backend, named like `BaseRemote~hasher.bolt`.
Checksums for multiple `alias`-es into a single base backend
will be stored in the single database. All local paths are treated as
aliases into the `local` backend (unless encrypted or chunked) and stored
in `~/.cache/rclone/kv/local~hasher.bolt`.
Databases can be shared between multiple rclone processes.

# HDFS

[HDFS](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) is a
distributed file-system, part of the [Apache Hadoop](https://hadoop.apache.org/) framework.

Paths are specified as `remote:` or `remote:path/to/dir`.

## Configuration

Here is an example of how to make a remote called `remote`. First run:

    rclone config

This will guide you through an interactive setup process:

    No remotes found, make a new one?
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
    name> remote
    Type of storage to configure.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
    [skip]
    XX / Hadoop distributed file system
       \ "hdfs"
    [skip]
    Storage> hdfs
    ** See help for hdfs backend at: https://rclone.org/hdfs/ **

    hadoop name node and port
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
     1 / Connect to host namenode at port 8020
       \ "namenode:8020"
    namenode> namenode.hadoop:8020
    hadoop user name
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
     1 / Connect to hdfs as root
       \ "root"
    username> root
    Edit advanced config? (y/n)
    y) Yes
    n) No (default)
    y/n> n
    Remote config
    --------------------
    [remote]
    type = hdfs
    namenode = namenode.hadoop:8020
    username = root
    --------------------
    y) Yes this is OK (default)
    e) Edit this remote
    d) Delete this remote
    y/e/d> y
    Current remotes:

    Name                 Type
    ====                 ====
    hadoop               hdfs

    e) Edit existing remote
    n) New remote
    d) Delete remote
    r) Rename remote
    c) Copy remote
    s) Set configuration password
    q) Quit config
    e/n/d/r/c/s/q> q
This remote is called `remote` and can now be used like this

See all the top level directories

    rclone lsd remote:

List the contents of a directory

    rclone ls remote:directory

Sync the remote `directory` to `/home/local/directory`, deleting any excess files.

    rclone sync --interactive remote:directory /home/local/directory

### Setting up your own HDFS instance for testing

You may start with a [manual setup](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html)
or use the docker image from the tests:

If you want to build the docker image

    git clone https://github.com/rclone/rclone.git
    cd rclone/fstest/testserver/images/test-hdfs
    docker build --rm -t rclone/test-hdfs .
Or you can just use the latest one pushed

    docker run --rm --name "rclone-hdfs" -p 127.0.0.1:9866:9866 -p 127.0.0.1:8020:8020 --hostname "rclone-hdfs" rclone/test-hdfs
**NB** it needs a few seconds to start up.

For this docker image the remote needs to be configured like this:

    [remote]
    type = hdfs
    namenode = 127.0.0.1:8020
    username = root

You can stop this image with `docker kill rclone-hdfs` (**NB** it does not use
volumes, so all data uploaded will be lost.)
### Modified time

Time accurate to 1 second is stored.

### Checksum

No checksums are implemented.

### Usage information

You can use the `rclone about remote:` command which will display filesystem size and current usage.

### Restricted filename characters

In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
the following characters are also replaced:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| :         | 0x3A  | ：          |

Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8).
      +
      +
      +### Standard options
      +
      +Here are the Standard options specific to hdfs (Hadoop distributed file system).
      +
      +#### --hdfs-namenode
      +
      +Hadoop name node and port.
      +
      +E.g. "namenode:8020" to connect to host namenode at port 8020.
      +
      +Properties:
      +
      +- Config:      namenode
      +- Env Var:     RCLONE_HDFS_NAMENODE
      +- Type:        string
      +- Required:    true
      +
      +#### --hdfs-username
      +
      +Hadoop user name.
      +
      +Properties:
      +
      +- Config:      username
      +- Env Var:     RCLONE_HDFS_USERNAME
      +- Type:        string
      +- Required:    false
      +- Examples:
      +    - "root"
      +        - Connect to hdfs as root.
      +
      +### Advanced options
      +
      +Here are the Advanced options specific to hdfs (Hadoop distributed file system).
      +
      +#### --hdfs-service-principal-name
      +
      +Kerberos service principal name for the namenode.
      +
      +Enables KERBEROS authentication. Specifies the Service Principal Name
+(SERVICE/FQDN) for the namenode. E.g. "hdfs/namenode.hadoop.docker"
+for namenode running as service 'hdfs' with FQDN 'namenode.hadoop.docker'.
      +
      +Properties:
      +
      +- Config:      service_principal_name
      +- Env Var:     RCLONE_HDFS_SERVICE_PRINCIPAL_NAME
      +- Type:        string
      +- Required:    false
      +
      +#### --hdfs-data-transfer-protection
      +
      +Kerberos data transfer protection: authentication|integrity|privacy.
      +
      +Specifies whether or not authentication, data signature integrity
      +checks, and wire encryption are required when communicating with
      +the datanodes. Possible values are 'authentication', 'integrity'
      +and 'privacy'. Used only with KERBEROS enabled.
      +
      +Properties:
      +
      +- Config:      data_transfer_protection
      +- Env Var:     RCLONE_HDFS_DATA_TRANSFER_PROTECTION
      +- Type:        string
      +- Required:    false
      +- Examples:
      +    - "privacy"
      +        - Ensure authentication, integrity and encryption enabled.
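
For a Kerberos-secured cluster, a config combining the two options above
might look like this (a sketch; the service principal and host names are
purely illustrative):

    [remote]
    type = hdfs
    namenode = namenode.hadoop:8020
    service_principal_name = hdfs/namenode.hadoop.docker
    data_transfer_protection = privacy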
      +
      +#### --hdfs-encoding
      +
      +The encoding for the backend.
      +
      +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
      +
      +Properties:
      +
      +- Config:      encoding
      +- Env Var:     RCLONE_HDFS_ENCODING
      +- Type:        MultiEncoder
      +- Default:     Slash,Colon,Del,Ctl,InvalidUtf8,Dot
      +
      +
      +
      +## Limitations
      +
      +- No server-side `Move` or `DirMove`.
      +- Checksums not implemented.
      +
      +#  HiDrive
      +
      +Paths are specified as `remote:path`
      +
      +Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
      +
      +The initial setup for hidrive involves getting a token from HiDrive
      +which you need to do in your browser.
      +`rclone config` walks you through it.
      +
      +## Configuration
      +
      +Here is an example of how to make a remote called `remote`.  First run:
      +
      +     rclone config
      +
      +This will guide you through an interactive setup process:
      +

    No remotes found - make a new one
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
    name> remote
    Type of storage to configure.
    Choose a number from below, or type in your own value
    [snip]
    XX / HiDrive
       \ "hidrive"
    [snip]
    Storage> hidrive
    OAuth Client Id - Leave blank normally.
    client_id>
    OAuth Client Secret - Leave blank normally.
    client_secret>
    Access permissions that rclone should use when requesting access from HiDrive.
    Leave blank normally.
    scope_access>
    Edit advanced config?
    y/n> n
    Use web browser to automatically authenticate rclone with remote?
     * Say Y if the machine running rclone has a web browser you can use
     * Say N if running rclone on a (remote) machine without web browser access
    If not sure try Y. If Y failed, try N.
    y/n> y
    If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=xxxxxxxxxxxxxxxxxxxxxx
    Log in and authorize rclone for access
    Waiting for code...
    Got code
    --------------------
    [remote]
    type = hidrive
    token = {"access_token":"xxxxxxxxxxxxxxxxxxxx","token_type":"Bearer","refresh_token":"xxxxxxxxxxxxxxxxxxxxxxx","expiry":"xxxxxxxxxxxxxxxxxxxxxxx"}
    --------------------
    y) Yes this is OK (default)
    e) Edit this remote
    d) Delete this remote
    y/e/d> y
      +**You should be aware that OAuth-tokens can be used to access your account
      +and hence should not be shared with other persons.**
      +See the [below section](#keeping-your-tokens-safe) for more information.
      +
      +See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
      +machine with no Internet browser available.
      +
      +Note that rclone runs a webserver on your local machine to collect the
      +token as returned from HiDrive. This only runs from the moment it opens
      +your browser to the moment you get back the verification code.
      +The webserver runs on `http://127.0.0.1:53682/`.
      +If local port `53682` is protected by a firewall you may need to temporarily
      +unblock the firewall to complete authorization.
      +
      +Once configured you can then use `rclone` like this,
      +
      +List directories in top level of your HiDrive root folder
      +
      +    rclone lsd remote:
      +
      +List all the files in your HiDrive filesystem
      +
      +    rclone ls remote:
      +
      +To copy a local directory to a HiDrive directory called backup
      +
      +    rclone copy /home/source remote:backup
      +
      +### Keeping your tokens safe
      +
      +Any OAuth-tokens will be stored by rclone in the remote's configuration file as unencrypted text.
      +Anyone can use a valid refresh-token to access your HiDrive filesystem without knowing your password.
      +Therefore you should make sure no one else can access your configuration.
      +
      +It is possible to encrypt rclone's configuration file.
      +You can find information on securing your configuration file by viewing the [configuration encryption docs](https://rclone.org/docs/#configuration-encryption).
      +
      +### Invalid refresh token
      +
      +As can be verified [here](https://developer.hidrive.com/basics-flows/),
      +each `refresh_token` (for Native Applications) is valid for 60 days.
+If used to access HiDrive, its validity will be automatically extended.
      +
      +This means that if you
      +
      +  * Don't use the HiDrive remote for 60 days
      +
+then rclone will return an error message indicating that
+the refresh token is *invalid* or *expired*.
      +
      +To fix this you will need to authorize rclone to access your HiDrive account again.
      +
      +Using
      +
      +    rclone config reconnect remote:
      +
+the process is very similar to the initial setup shown above.
      +
      +### Modified time and hashes
      +
      +HiDrive allows modification times to be set on objects accurate to 1 second.
      +
      +HiDrive supports [its own hash type](https://static.hidrive.com/dev/0001)
      +which is used to verify the integrity of file contents after successful transfers.
      +
      +### Restricted filename characters
      +
      +HiDrive cannot store files or folders that include
      +`/` (0x2F) or null-bytes (0x00) in their name.
      +Any other characters can be used in the names of files or folders.
      +Additionally, files or folders cannot be named either of the following: `.` or `..`
      +
      +Therefore rclone will automatically replace these characters,
      +if files or folders are stored or accessed with such names.
      +
      +You can read about how this filename encoding works in general
+[here](https://rclone.org/overview/#restricted-filenames).
      +
      +Keep in mind that HiDrive only supports file or folder names
      +with a length of 255 characters or less.
      +
      +### Transfers
      +
      +HiDrive limits file sizes per single request to a maximum of 2 GiB.
      +To allow storage of larger files and allow for better upload performance,
      +the hidrive backend will use a chunked transfer for files larger than 96 MiB.
      +Rclone will upload multiple parts/chunks of the file at the same time.
      +Chunks in the process of being uploaded are buffered in memory,
      +so you may want to restrict this behaviour on systems with limited resources.
      +
      +You can customize this behaviour using the following options:
      +
      +* `chunk_size`: size of file parts
      +* `upload_cutoff`: files larger or equal to this in size will use a chunked transfer
      +* `upload_concurrency`: number of file-parts to upload at the same time
      +
      +See the below section about configuration options for more details.
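For example (the values are illustrative only, not recommendations), a
transfer on a fast link with plenty of memory could use more parallel
streams via the flag forms of these options:

    rclone copy --hidrive-chunk-size 96M --hidrive-upload-cutoff 96M --hidrive-upload-concurrency 8 /home/source remote:backup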
      +
      +### Root folder
      +
      +You can set the root folder for rclone.
      +This is the directory that rclone considers to be the root of your HiDrive.
      +
      +Usually, you will leave this blank, and rclone will use the root of the account.
      +
      +However, you can set this to restrict rclone to a specific folder hierarchy.
      +
      +This works by prepending the contents of the `root_prefix` option
      +to any paths accessed by rclone.
      +For example, the following two ways to access the home directory are equivalent:
      +
      +    rclone lsd --hidrive-root-prefix="/users/test/" remote:path
      +
      +    rclone lsd remote:/users/test/path
      +
      +See the below section about configuration options for more details.
      +
      +### Directory member count
      +
      +By default, rclone will know the number of directory members contained in a directory.
      +For example, `rclone lsd` uses this information.
      +
      +The acquisition of this information will result in additional time costs for HiDrive's API.
      +When dealing with large directory structures, it may be desirable to circumvent this time cost,
      +especially when this information is not explicitly needed.
      +For this, the `disable_fetching_member_count` option can be used.
      +
      +See the below section about configuration options for more details.
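For example (an illustrative listing), you can skip fetching member counts
for a single command via the flag form of this option:

    rclone lsd --hidrive-disable-fetching-member-count remote: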
      +
      +
      +### Standard options
      +
      +Here are the Standard options specific to hidrive (HiDrive).
      +
      +#### --hidrive-client-id
      +
      +OAuth Client Id.
       
Leave blank normally.

      +Properties:
       
      +- Config:      client_id
      +- Env Var:     RCLONE_HIDRIVE_CLIENT_ID
      +- Type:        string
      +- Required:    false
      +
      +#### --hidrive-client-secret
      +
      +OAuth Client Secret.
      +
      +Leave blank normally.
      +
      +Properties:
      +
      +- Config:      client_secret
      +- Env Var:     RCLONE_HIDRIVE_CLIENT_SECRET
      +- Type:        string
      +- Required:    false
      +
      +#### --hidrive-scope-access
      +
      +Access permissions that rclone should use when requesting access from HiDrive.
      +
      +Properties:
      +
      +- Config:      scope_access
      +- Env Var:     RCLONE_HIDRIVE_SCOPE_ACCESS
      +- Type:        string
      +- Default:     "rw"
      +- Examples:
      +    - "rw"
      +        - Read and write access to resources.
      +    - "ro"
      +        - Read-only access to resources.
      +
      +### Advanced options
      +
      +Here are the Advanced options specific to hidrive (HiDrive).
      +
      +#### --hidrive-token
      +
      +OAuth Access Token as a JSON blob.
      +
      +Properties:
      +
      +- Config:      token
      +- Env Var:     RCLONE_HIDRIVE_TOKEN
      +- Type:        string
      +- Required:    false
      +
      +#### --hidrive-auth-url
      +
      +Auth server URL.
      +
      +Leave blank to use the provider defaults.
      +
      +Properties:
      +
      +- Config:      auth_url
      +- Env Var:     RCLONE_HIDRIVE_AUTH_URL
      +- Type:        string
      +- Required:    false
      +
      +#### --hidrive-token-url
      +
      +Token server url.
      +
      +Leave blank to use the provider defaults.
      +
      +Properties:
      +
      +- Config:      token_url
      +- Env Var:     RCLONE_HIDRIVE_TOKEN_URL
      +- Type:        string
      +- Required:    false
      +
      +#### --hidrive-scope-role
      +
      +User-level that rclone should use when requesting access from HiDrive.
      +
      +Properties:
      +
      +- Config:      scope_role
      +- Env Var:     RCLONE_HIDRIVE_SCOPE_ROLE
      +- Type:        string
      +- Default:     "user"
      +- Examples:
      +    - "user"
      +        - User-level access to management permissions.
      +        - This will be sufficient in most cases.
      +    - "admin"
      +        - Extensive access to management permissions.
      +    - "owner"
      +        - Full access to management permissions.
      +
      +#### --hidrive-root-prefix
      +
      +The root/parent folder for all paths.
      +
      +Fill in to use the specified folder as the parent for all paths given to the remote.
      +This way rclone can use any folder as its starting point.
      +
      +Properties:
      +
      +- Config:      root_prefix
      +- Env Var:     RCLONE_HIDRIVE_ROOT_PREFIX
      +- Type:        string
      +- Default:     "/"
      +- Examples:
      +    - "/"
      +        - The topmost directory accessible by rclone.
      +        - This will be equivalent with "root" if rclone uses a regular HiDrive user account.
      +    - "root"
      +        - The topmost directory of the HiDrive user account
      +    - ""
      +        - This specifies that there is no root-prefix for your paths.
      +        - When using this you will always need to specify paths to this remote with a valid parent e.g. "remote:/path/to/dir" or "remote:root/path/to/dir".
      +
      +#### --hidrive-endpoint
      +
      +Endpoint for the service.
      +
      +This is the URL that API-calls will be made to.
      +
      +Properties:
      +
      +- Config:      endpoint
      +- Env Var:     RCLONE_HIDRIVE_ENDPOINT
      +- Type:        string
      +- Default:     "https://api.hidrive.strato.com/2.1"
      +
      +#### --hidrive-disable-fetching-member-count
      +
      +Do not fetch number of objects in directories unless it is absolutely necessary.
      +
      +Requests may be faster if the number of objects in subdirectories is not fetched.
      +
      +Properties:
      +
      +- Config:      disable_fetching_member_count
      +- Env Var:     RCLONE_HIDRIVE_DISABLE_FETCHING_MEMBER_COUNT
      +- Type:        bool
      +- Default:     false
      +
      +#### --hidrive-chunk-size
      +
      +Chunksize for chunked uploads.
      +
      +Any files larger than the configured cutoff (or files of unknown size) will be uploaded in chunks of this size.
      +
      +The upper limit for this is 2147483647 bytes (about 2.000Gi).
      +That is the maximum amount of bytes a single upload-operation will support.
      +Setting this above the upper limit or to a negative value will cause uploads to fail.
      +
      +Setting this to larger values may increase the upload speed at the cost of using more memory.
+It can be set to smaller values to save on memory.
      +
      +Properties:
      +
      +- Config:      chunk_size
      +- Env Var:     RCLONE_HIDRIVE_CHUNK_SIZE
      +- Type:        SizeSuffix
      +- Default:     48Mi
      +
      +#### --hidrive-upload-cutoff
      +
      +Cutoff/Threshold for chunked uploads.
      +
      +Any files larger than this will be uploaded in chunks of the configured chunksize.
      +
      +The upper limit for this is 2147483647 bytes (about 2.000Gi).
      +That is the maximum amount of bytes a single upload-operation will support.
      +Setting this above the upper limit will cause uploads to fail.
      +
      +Properties:
      +
      +- Config:      upload_cutoff
      +- Env Var:     RCLONE_HIDRIVE_UPLOAD_CUTOFF
      +- Type:        SizeSuffix
      +- Default:     96Mi
      +
      +#### --hidrive-upload-concurrency
      +
      +Concurrency for chunked uploads.
      +
      +This is the upper limit for how many transfers for the same file are running concurrently.
+Setting this to a value smaller than 1 will cause uploads to deadlock.
      +
      +If you are uploading small numbers of large files over high-speed links
      +and these uploads do not fully utilize your bandwidth, then increasing
      +this may help to speed up the transfers.
      +
      +Properties:
      +
      +- Config:      upload_concurrency
      +- Env Var:     RCLONE_HIDRIVE_UPLOAD_CONCURRENCY
      +- Type:        int
      +- Default:     4
      +
      +#### --hidrive-encoding
      +
      +The encoding for the backend.
      +
      +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
      +
      +Properties:
      +
      +- Config:      encoding
      +- Env Var:     RCLONE_HIDRIVE_ENCODING
      +- Type:        MultiEncoder
      +- Default:     Slash,Dot
      +
      +
      +
      +## Limitations
      +
      +### Symbolic links
      +
      +HiDrive is able to store symbolic links (*symlinks*) by design,
      +for example, when unpacked from a zip archive.
      +
      +There exists no direct mechanism to manage native symlinks in remotes.
      +As such this implementation has chosen to ignore any native symlinks present in the remote.
+rclone will not be able to access or show any symlinks stored in the HiDrive remote.
      +This means symlinks cannot be individually removed, copied, or moved,
      +except when removing, copying, or moving the parent folder.
      +
      +*This does not affect the `.rclonelink`-files
      +that rclone uses to encode and store symbolic links.*
      +
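+A local symlink can still be stored in translated form, for example by
+using rclone's `--links` flag, which uploads it as a `.rclonelink` file
+(paths here are illustrative):
+
+    rclone copy --links /local/dir remote:dir
+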
      +### Sparse files
      +
      +It is possible to store sparse files in HiDrive.
      +
      +Note that copying a sparse file will expand the holes
      +into null-byte (0x00) regions that will then consume disk space.
      +Likewise, when downloading a sparse file,
      +the resulting file will have null-byte regions in the place of file holes.
      +
      +#  HTTP
      +
+The HTTP remote is a read-only remote for reading files from a
+webserver.  The webserver should provide file listings which rclone
      +will read and turn into a remote.  This has been tested with common
      +webservers such as Apache/Nginx/Caddy and will likely work with file
      +listings from most web servers.  (If it doesn't then please file an
      +issue, or send a pull request!)
      +
      +Paths are specified as `remote:` or `remote:path`.
      +
      +The `remote:` represents the configured [url](#http-url), and any path following
      +it will be resolved relative to this url, according to the URL standard. This
      +means with remote url `https://beta.rclone.org/branch` and path `fix`, the
      +resolved URL will be `https://beta.rclone.org/branch/fix`, while with path
      +`/fix` the resolved URL will be `https://beta.rclone.org/fix` as the absolute
      +path is resolved from the root of the domain.
      +
      +If the path following the `remote:` ends with `/` it will be assumed to point
      +to a directory. If the path does not end with `/`, then a HEAD request is sent
+and the response used to decide if it is treated as a file or a directory
      +(run with `-vv` to see details). When [--http-no-head](#http-no-head) is
      +specified, a path without ending `/` is always assumed to be a file. If rclone
      +incorrectly assumes the path is a file, the solution is to specify the path with
      +ending `/`. When you know the path is a directory, ending it with `/` is always
      +better as it avoids the initial HEAD request.
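+
+For example, this listing avoids the initial HEAD request because the
+path (illustrative) explicitly ends with `/`:
+
+    rclone lsd remote:subdir/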
      +
      +To just download a single file it is easier to use
      +[copyurl](https://rclone.org/commands/rclone_copyurl/).
      +
      +## Configuration
      +
      +Here is an example of how to make a remote called `remote`.  First
      +run:
      +
      +     rclone config
      +
      +This will guide you through an interactive setup process:
      +
      +

+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / HTTP
+   \ "http"
+[snip]
+Storage> http
+URL of http host to connect to
+Choose a number from below, or type in your own value
+ 1 / Connect to example.com
+   \ "https://example.com"
+url> https://beta.rclone.org
+Remote config
+--------------------
+[remote]
+url = https://beta.rclone.org
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Current remotes:
+
+Name                 Type
+====                 ====
+remote               http
+
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> q
+```
+
      +This remote is called `remote` and can now be used like this
      +
      +See all the top level directories
      +
      +    rclone lsd remote:
      +
      +List the contents of a directory
      +
      +    rclone ls remote:directory
      +
      +Sync the remote `directory` to `/home/local/directory`, deleting any excess files.
      +
      +    rclone sync --interactive remote:directory /home/local/directory
      +
      +### Read only
      +
      +This remote is read only - you can't upload files to an HTTP server.
      +
      +### Modified time
      +
      +Most HTTP servers store time accurate to 1 second.
      +
      +### Checksum
      +
      +No checksums are stored.
      +
      +### Usage without a config file
      +
      +Since the http remote only has one config parameter it is easy to use
      +without a config file:
      +
      +    rclone lsd --http-url https://beta.rclone.org :http:
      +
      +or:
      +
      +    rclone lsd :http,url='https://beta.rclone.org':
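+
+Combining this with the path resolution described above, you could copy
+a single directory (illustrative path) without any saved configuration:
+
+    rclone copy :http,url='https://beta.rclone.org':branch/fix /tmp/fix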
      +
      +
      +### Standard options
      +
      +Here are the Standard options specific to http (HTTP).
      +
      +#### --http-url
      +
      +URL of HTTP host to connect to.
      +
      +E.g. "https://example.com", or "https://user:pass@example.com" to use a username and password.
      +
      +Properties:
      +
      +- Config:      url
      +- Env Var:     RCLONE_HTTP_URL
      +- Type:        string
      +- Required:    true
      +
      +### Advanced options
      +
      +Here are the Advanced options specific to http (HTTP).
      +
      +#### --http-headers
      +
      +Set HTTP headers for all transactions.
      +
      +Use this to set additional HTTP headers for all transactions.
      +
      +The input format is comma separated list of key,value pairs.  Standard
      +[CSV encoding](https://godoc.org/encoding/csv) may be used.
      +
      +For example, to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
      +
      +You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'.
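+
+For example, to list a site while sending both a cookie and an
+authorization header (the values are placeholders):
+
+    rclone lsd --http-url https://example.com --http-headers '"Cookie","name=value","Authorization","xxx"' :http: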
      +
      +Properties:
      +
      +- Config:      headers
      +- Env Var:     RCLONE_HTTP_HEADERS
      +- Type:        CommaSepList
      +- Default:     
      +
      +#### --http-no-slash
      +
      +Set this if the site doesn't end directories with /.
      +
      +Use this if your target website does not use / on the end of
      +directories.
      +
      +A / on the end of a path is how rclone normally tells the difference
      +between files and directories.  If this flag is set, then rclone will
      +treat all files with Content-Type: text/html as directories and read
      +URLs from them rather than downloading them.
      +
      +Note that this may cause rclone to confuse genuine HTML files with
      +directories.
      +
      +Properties:
      +
      +- Config:      no_slash
      +- Env Var:     RCLONE_HTTP_NO_SLASH
      +- Type:        bool
      +- Default:     false
      +
      +#### --http-no-head
      +
      +Don't use HEAD requests.
      +
+HEAD requests are mainly used to find file sizes in directory listings.
      +If your site is being very slow to load then you can try this option.
      +Normally rclone does a HEAD request for each potential file in a
      +directory listing to:
      +
      +- find its size
      +- check it really exists
      +- check to see if it is a directory
      +
      +If you set this option, rclone will not do the HEAD request. This will mean
      +that directory listings are much quicker, but rclone won't have the times or
      +sizes of any files, and some files that don't exist may be in the listing.
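+
+For example, to speed up listing a slow site at the cost of missing
+file sizes and times:
+
+    rclone lsd --http-no-head --http-url https://beta.rclone.org :http: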
      +
      +Properties:
      +
      +- Config:      no_head
      +- Env Var:     RCLONE_HTTP_NO_HEAD
      +- Type:        bool
      +- Default:     false
      +
      +
      +
      +## Limitations
      +
      +`rclone about` is not supported by the HTTP backend. Backends without
      +this capability cannot determine free space for an rclone mount or
      +use policy `mfs` (most free space) as a member of an rclone union
      +remote.
      +
      +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
      +
      +#  Internet Archive
      +
      +The Internet Archive backend utilizes Items on [archive.org](https://archive.org/)
      +
      +Refer to [IAS3 API documentation](https://archive.org/services/docs/api/ias3.html) for the API this backend uses.
      +
      +Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
      +command.)  You may put subdirectories in too, e.g. `remote:item/path/to/dir`.
      +
+Unlike S3, listing all the items you have uploaded is not supported.
      +
      +Once you have made a remote, you can use it like this:
      +
      +Make a new item
      +
      +    rclone mkdir remote:item
      +
+List the contents of an item
      +
      +    rclone ls remote:item
      +
      +Sync `/home/local/directory` to the remote item, deleting any excess
      +files in the item.
      +
      +    rclone sync --interactive /home/local/directory remote:item
      +
+## Notes
+
+Because of Internet Archive's architecture, write operations (and extra post-processing) are enqueued in a per-item queue. You can check an item's queue at https://catalogd.archive.org/history/item-name-here . Because of this, uploads and deletes will not show up immediately but take some time to become available.
+The per-item queue is itself enqueued to another queue, the Item Deriver Queue. [You can check the status of the Item Deriver Queue here.](https://catalogd.archive.org/catalog.php?whereami=1) This queue has a limit, and it may block you from uploading, or even deleting. For better behavior, you should avoid uploading a lot of small files.
      +
+You can optionally wait for the server's processing to finish by setting a non-zero value for the `wait_archive` key.
+By making it wait, rclone can do a normal file comparison.
+Make sure to set a large enough value (e.g. `30m0s` for smaller files) as it can take a long time depending on the server's queue.
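+
+For example, a sync that waits up to 30 minutes for server-side
+processing, using the `--internetarchive-wait-archive` flag that
+corresponds to the `wait_archive` key, might look like this:
+
+    rclone sync --interactive --internetarchive-wait-archive 30m0s /home/local/directory remote:item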
      +
+## About metadata
+
      +This backend supports setting, updating and reading metadata of each file.
      +The metadata will appear as file metadata on Internet Archive.
      +However, some fields are reserved by both Internet Archive and rclone.
      +
      +The following are reserved by Internet Archive:
      +- `name`
      +- `source`
      +- `size`
      +- `md5`
      +- `crc32`
      +- `sha1`
      +- `format`
      +- `old_version`
      +- `viruscheck`
      +- `summation`
      +
+Trying to set values for these keys is ignored with a warning.
+Setting `mtime` is the only exception: doing so behaves identically to setting the ModTime.
      +
+rclone reserves all keys starting with `rclone-`. Setting a value for these keys will give you a warning, but the values are set as requested.
      +
      +If there are multiple values for a key, only the first one is returned.
+This is a limitation of rclone, which supports only one value per key.
+It can be triggered, for example, by a server-side copy.
      +
+Reading metadata will also return custom keys (those that are neither standard nor reserved).
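+
+For example, you could read back all metadata of a single file
+(illustrative path) with `lsjson`, using the global `--metadata`/`-M`
+flag:
+
+    rclone lsjson -M --stat remote:item/path/to/file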
      +
      +## Filtering auto generated files
      +
      +The Internet Archive automatically creates metadata files after
      +upload. These can cause problems when doing an `rclone sync` as rclone
      +will try, and fail, to delete them. These metadata files are not
      +changeable, as they are created by the Internet Archive automatically.
      +
      +These auto-created files can be excluded from the sync using [metadata
      +filtering](https://rclone.org/filtering/#metadata).
      +
      +    rclone sync ... --metadata-exclude "source=metadata" --metadata-exclude "format=Metadata"
      +
      +Which excludes from the sync any files which have the
      +`source=metadata` or `format=Metadata` flags which are added to
      +Internet Archive auto-created files.
      +
      +## Configuration
      +
      +Here is an example of making an internetarchive configuration.
+Most of this applies to the other providers as well; any differences are described [below](#providers).
      +
      +First run
      +
      +    rclone config
      +
      +This will guide you through an interactive setup process.
      +
      +

+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+XX / InternetArchive Items
+   \ (internetarchive)
+Storage> internetarchive
+Option access_key_id.
+IAS3 Access Key.
+Leave blank for anonymous access.
+You can find one here: https://archive.org/account/s3.php
+Enter a value. Press Enter to leave empty.
+access_key_id> XXXX
+Option secret_access_key.
+IAS3 Secret Key (password).
+Leave blank for anonymous access.
+Enter a value. Press Enter to leave empty.
+secret_access_key> XXXX
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> y
+Option endpoint.
+IAS3 Endpoint.
+Leave blank for default value.
+Enter a string value. Press Enter for the default (https://s3.us.archive.org).
+endpoint>
+Option front_endpoint.
+Host of InternetArchive Frontend.
+Leave blank for default value.
+Enter a string value. Press Enter for the default (https://archive.org).
+front_endpoint>
+Option disable_checksum.
+Don't store MD5 checksum with object metadata.
+Normally rclone will calculate the MD5 checksum of the input before
+uploading it so it can ask the server to check the object against checksum.
+This is great for data integrity checking but can cause long delays for
+large files to start uploading.
+Enter a boolean value (true or false). Press Enter for the default (true).
+disable_checksum> true
+Option encoding.
+The encoding for the backend.
+See the encoding section in the overview for more info.
+Enter a encoder.MultiEncoder value. Press Enter for the default (Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot).
+encoding>
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+--------------------
+[remote]
+type = internetarchive
+access_key_id = XXXX
+secret_access_key = XXXX
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
      +
      +### Standard options
      +
      +Here are the Standard options specific to internetarchive (Internet Archive).
      +
      +#### --internetarchive-access-key-id
       
      No remotes found, make a new one?
      -n) New remote
      -s) Set configuration password
      -q) Quit config
      -n/s/q> n
      -name> remote
      -Option Storage.
      -Type of storage to configure.
      -Choose a number from below, or type in your own value.
      -XX / InternetArchive Items
      -   \ (internetarchive)
      -Storage> internetarchive
      -Option access_key_id.
       IAS3 Access Key.
      +
       Leave blank for anonymous access.
       You can find one here: https://archive.org/account/s3.php
      -Enter a value. Press Enter to leave empty.
      -access_key_id> XXXX
      -Option secret_access_key.
      +
      +Properties:
      +
      +- Config:      access_key_id
      +- Env Var:     RCLONE_INTERNETARCHIVE_ACCESS_KEY_ID
      +- Type:        string
      +- Required:    false
      +
      +#### --internetarchive-secret-access-key
      +
       IAS3 Secret Key (password).
      +
       Leave blank for anonymous access.
      -Enter a value. Press Enter to leave empty.
      -secret_access_key> XXXX
      -Edit advanced config?
      -y) Yes
      -n) No (default)
      -y/n> y
      -Option endpoint.
      +
      +Properties:
      +
      +- Config:      secret_access_key
      +- Env Var:     RCLONE_INTERNETARCHIVE_SECRET_ACCESS_KEY
      +- Type:        string
      +- Required:    false
      +
      +### Advanced options
      +
      +Here are the Advanced options specific to internetarchive (Internet Archive).
      +
      +#### --internetarchive-endpoint
      +
       IAS3 Endpoint.
      +
       Leave blank for default value.
      -Enter a string value. Press Enter for the default (https://s3.us.archive.org).
      -endpoint> 
      -Option front_endpoint.
      +
      +Properties:
      +
      +- Config:      endpoint
      +- Env Var:     RCLONE_INTERNETARCHIVE_ENDPOINT
      +- Type:        string
      +- Default:     "https://s3.us.archive.org"
      +
      +#### --internetarchive-front-endpoint
      +
       Host of InternetArchive Frontend.
      +
       Leave blank for default value.
      -Enter a string value. Press Enter for the default (https://archive.org).
      -front_endpoint> 
      -Option disable_checksum.
      -Don't store MD5 checksum with object metadata.
      +
      +Properties:
      +
      +- Config:      front_endpoint
      +- Env Var:     RCLONE_INTERNETARCHIVE_FRONT_ENDPOINT
      +- Type:        string
      +- Default:     "https://archive.org"
      +
      +#### --internetarchive-disable-checksum
      +
      +Don't ask the server to test against MD5 checksum calculated by rclone.
       Normally rclone will calculate the MD5 checksum of the input before
       uploading it so it can ask the server to check the object against checksum.
       This is great for data integrity checking but can cause long delays for
       large files to start uploading.
      -Enter a boolean value (true or false). Press Enter for the default (true).
      -disable_checksum> true
      -Option encoding.
      +
      +Properties:
      +
      +- Config:      disable_checksum
      +- Env Var:     RCLONE_INTERNETARCHIVE_DISABLE_CHECKSUM
      +- Type:        bool
      +- Default:     true
      +
      +#### --internetarchive-wait-archive
      +
      +Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish.
      +Only enable if you need to be guaranteed to be reflected after write operations.
      +0 to disable waiting. No errors to be thrown in case of timeout.
      +
      +Properties:
      +
      +- Config:      wait_archive
      +- Env Var:     RCLONE_INTERNETARCHIVE_WAIT_ARCHIVE
      +- Type:        Duration
      +- Default:     0s
      +
      +#### --internetarchive-encoding
      +
       The encoding for the backend.
      +
       See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
      -Enter a encoder.MultiEncoder value. Press Enter for the default (Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot).
      -encoding> 
      -Edit advanced config?
      -y) Yes
      -n) No (default)
      -y/n> n
      ---------------------
      -[remote]
      -type = internetarchive
      -access_key_id = XXXX
      -secret_access_key = XXXX
      ---------------------
      -y) Yes this is OK (default)
      -e) Edit this remote
      -d) Delete this remote
      -y/e/d> y
      -

      +
      +Properties:
      +
      +- Config:      encoding
      +- Env Var:     RCLONE_INTERNETARCHIVE_ENCODING
      +- Type:        MultiEncoder
      +- Default:     Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot
      +
      +### Metadata
      +
      +Metadata fields provided by Internet Archive.
      +If there are multiple values for a key, only the first one is returned.
+This is a limitation of Rclone, which supports only one value per key.
      +
+The owner is able to add custom keys. The metadata feature grabs all the keys, including custom ones.
      +
      +Here are the possible system metadata items for the internetarchive backend.
      +
      +| Name | Help | Type | Example | Read Only |
      +|------|------|------|---------|-----------|
      +| crc32 | CRC32 calculated by Internet Archive | string | 01234567 | **Y** |
      +| format | Name of format identified by Internet Archive | string | Comma-Separated Values | **Y** |
      +| md5 | MD5 hash calculated by Internet Archive | string | 01234567012345670123456701234567 | **Y** |
      +| mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | **Y** |
      +| name | Full file path, without the bucket part | filename | backend/internetarchive/internetarchive.go | **Y** |
      +| old_version | Whether the file was replaced and moved by keep-old-version flag | boolean | true | **Y** |
      +| rclone-ia-mtime | Time of last modification, managed by Internet Archive | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
      +| rclone-mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
      +| rclone-update-track | Random value used by Rclone for tracking changes inside Internet Archive | string | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa | N |
      +| sha1 | SHA1 hash calculated by Internet Archive | string | 0123456701234567012345670123456701234567 | **Y** |
      +| size | File size in bytes | decimal number | 123456 | **Y** |
      +| source | The source of the file | string | original | **Y** |
      +| summation | Check https://forum.rclone.org/t/31922 for how it is used | string | md5 | **Y** |
      +| viruscheck | The last time viruscheck process was run for the file (?) | unixtime | 1654191352 | **Y** |
      +
      +See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
      +
      +
      +
      +#  Jottacloud
      +
      +Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters
      +in Norway. In addition to the official service at [jottacloud.com](https://www.jottacloud.com/),
      +it also provides white-label solutions to different companies, such as:
      +* Telia
      +  * Telia Cloud (cloud.telia.se)
      +  * Telia Sky (sky.telia.no)
      +* Tele2
      +  * Tele2 Cloud (mittcloud.tele2.se)
      +* Onlime
      +  * Onlime Cloud Storage (onlime.dk)
      +* Elkjøp (with subsidiaries):
      +  * Elkjøp Cloud (cloud.elkjop.no)
      +  * Elgiganten Sweden (cloud.elgiganten.se)
      +  * Elgiganten Denmark (cloud.elgiganten.dk)
      +  * Giganti Cloud  (cloud.gigantti.fi)
      +  * ELKO Cloud (cloud.elko.is)
      +
+Most of the white-label versions are supported by this backend, although they may require a different
+authentication setup - described below.
      +
      +Paths are specified as `remote:path`
      +
      +Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
      +
      +## Authentication types
      +
+Some of the whitelabel versions use a different authentication method than the official service,
+and you have to choose the correct one when setting up the remote.
      +
      +### Standard authentication
      +
      +The standard authentication method used by the official service (jottacloud.com), as well as
      +some of the whitelabel services, requires you to generate a single-use personal login token
      +from the account security settings in the service's web interface. Log in to your account,
      +go to "Settings" and then "Security", or use the direct link presented to you by rclone when
      +configuring the remote: <https://www.jottacloud.com/web/secure>. Scroll down to the section
      +"Personal login token", and click the "Generate" button. Note that if you are using a
      +whitelabel service you probably can't use the direct link, you need to find the same page in
      +their dedicated web interface, and also it may be in a different location than described above.
      +
      +To access your account from multiple instances of rclone, you need to configure each of them
      +with a separate personal login token. E.g. you create a Jottacloud remote with rclone in one
      +location, and copy the configuration file to a second location where you also want to run
      +rclone and access the same remote. Then you need to replace the token for one of them, using
      +the [config reconnect](https://rclone.org/commands/rclone_config_reconnect/) command, which
+requires you to generate a new personal login token and supply it as input. If you do not
      +do this, the token may easily end up being invalidated, resulting in both instances failing
      +with an error message something along the lines of:
      +
      +    oauth2: cannot fetch token: 400 Bad Request
      +    Response: {"error":"invalid_grant","error_description":"Stale token"}
      +
      +When this happens, you need to replace the token as described above to be able to use your
      +remote again.
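+
+For example, to supply a fresh personal login token to an existing
+remote called `remote`, use the command referenced above:
+
+    rclone config reconnect remote: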
      +
      +All personal login tokens you have taken into use will be listed in the web interface under
      +"My logged in devices", and from the right side of that list you can click the "X" button to
      +revoke individual tokens.
      +
      +### Legacy authentication
      +
      +If you are using one of the whitelabel versions (e.g. from Elkjøp) you may not have the option
      +to generate a CLI token. In this case you'll have to use the legacy authentication. To do this select
      +yes when the setup asks for legacy authentication and enter your username and password.
      +The rest of the setup is identical to the default setup.
      +
      +### Telia Cloud authentication
      +
      +Similar to other whitelabel versions Telia Cloud doesn't offer the option of creating a CLI token, and
      +additionally uses a separate authentication flow where the username is generated internally. To setup
      +rclone to use Telia Cloud, choose Telia Cloud authentication in the setup. The rest of the setup is
      +identical to the default setup.
      +
      +### Tele2 Cloud authentication
      +
      +As Tele2-Com Hem merger was completed this authentication can be used for former Com Hem Cloud and
      +Tele2 Cloud customers as no support for creating a CLI token exists, and additionally uses a separate
      +authentication flow where the username is generated internally. To setup rclone to use Tele2 Cloud,
      +choose Tele2 Cloud authentication in the setup. The rest of the setup is identical to the default setup.
      +
      +### Onlime Cloud Storage authentication
      +
+Onlime has sold access to Jottacloud proper, while providing localized support to Danish
+customers, but has recently set up its own hosting, transferring its customers from
+Jottacloud servers to its own ones.
      +
+This, of course, necessitates using their servers for authentication, but otherwise the
+functionality and architecture seem equivalent to Jottacloud.
      +
      +To setup rclone to use Onlime Cloud Storage, choose Onlime Cloud authentication in the setup. The rest
      +of the setup is identical to the default setup.
      +
      +## Configuration
      +
      +Here is an example of how to make a remote called `remote` with the default setup.  First run:
      +
      +    rclone config
      +
      +This will guide you through an interactive setup process:
      +
      +

+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / Jottacloud
+   \ (jottacloud)
+[snip]
+Storage> jottacloud
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+Option config_type.
+Select authentication type.
+Choose a number from below, or type in an existing string value.
+Press Enter for the default (standard).
+ / Standard authentication.
+ 1 | Use this if you're a normal Jottacloud user.
+   \ (standard)
+ / Legacy authentication.
+ 2 | This is only required for certain whitelabel versions of Jottacloud and not recommended for normal users.
+   \ (legacy)
+ / Telia Cloud authentication.
+ 3 | Use this if you are using Telia Cloud.
+   \ (telia)
+ / Tele2 Cloud authentication.
+ 4 | Use this if you are using Tele2 Cloud.
+   \ (tele2)
+ / Onlime Cloud authentication.
+ 5 | Use this if you are using Onlime Cloud.
+   \ (onlime)
+config_type> 1
+Personal login token.
+Generate here: https://www.jottacloud.com/web/secure
+Login Token>
+Use a non-standard device/mountpoint?
+Choosing no, the default, will let you access the storage used for the archive
+section of the official Jottacloud client. If you instead want to access the
+sync or the backup section, for example, you must choose yes.
+y) Yes
+n) No (default)
+y/n> y
+Option config_device.
+The device to use. In standard setup the built-in Jotta device is used,
+which contains predefined mountpoints for archive, sync etc. All other devices
+are treated as backup devices by the official Jottacloud client.
+You may create a new by entering a unique name.
+Choose a number from below, or type in your own string value.
+Press Enter for the default (DESKTOP-3H31129).
+ 1 > DESKTOP-3H31129
+ 2 > Jotta
+config_device> 2
+Option config_mountpoint.
+The mountpoint to use for the built-in device Jotta.
+The standard setup is to use the Archive mountpoint.
+Most other mountpoints have very limited support in rclone and should
+generally be avoided.
+Choose a number from below, or type in an existing string value.
+Press Enter for the default (Archive).
+ 1 > Archive
+ 2 > Shared
+ 3 > Sync
+config_mountpoint> 1
+--------------------
+[remote]
+type = jottacloud
+configVersion = 1
+client_id = jottacli
+client_secret =
+tokenURL = https://id.jottacloud.com/auth/realms/jottacloud/protocol/openid-connect/token
+token = {........}
+username = 2940e57271a93d987d6f8a21
+device = Jotta
+mountpoint = Archive
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```

      +
      
      +Once configured you can then use `rclone` like this,
      +
      +List directories in top level of your Jottacloud
      +
      +    rclone lsd remote:
      +
      +List all the files in your Jottacloud
      +
      +    rclone ls remote:
      +
+To copy a local directory to a Jottacloud directory called backup
      +
      +    rclone copy /home/source remote:backup
      +
      +### Devices and Mountpoints
      +
      +The official Jottacloud client registers a device for each computer you install
      +it on, and shows them in the backup section of the user interface. For each
      +folder you select for backup it will create a mountpoint within this device.
      +A built-in device called Jotta is special, and contains mountpoints Archive,
      +Sync and some others, used for corresponding features in official clients.
      +
      +With rclone you'll want to use the standard Jotta/Archive device/mountpoint in
      +most cases. However, you may for example want to access files from the sync or
      +backup functionality provided by the official clients, and rclone therefore
      +provides the option to select other devices and mountpoints during config.
      +
      +You are allowed to create new devices and mountpoints. All devices except the
      +built-in Jotta device are treated as backup devices by official Jottacloud
      +clients, and the mountpoints on them are individual backup sets.
      +
      +With the built-in Jotta device, only existing, built-in, mountpoints can be
      +selected. In addition to the mentioned Archive and Sync, it may contain
      +several other mountpoints such as: Latest, Links, Shared and Trash. All of
      +these are special mountpoints with a different internal representation than
      +the "regular" mountpoints. Rclone will only to a very limited degree support
      +them. Generally you should avoid these, unless you know what you are doing.
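+
+As an illustration, a remote set up against the built-in Jotta device's Sync
+mountpoint (one of the choices offered in the walkthrough above) ends up with
+a config section along these lines (token and other generated values omitted):
+
+    [remote]
+    type = jottacloud
+    device = Jotta
+    mountpoint = Sync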
      +
      +### --fast-list
      +
      +This remote supports `--fast-list` which allows you to use fewer
      +transactions in exchange for more memory. See the [rclone
      +docs](https://rclone.org/docs/#fast-list) for more details.
      +
+Note that the implementation in Jottacloud always uses only a single
+API request to get the entire list, so for large folders this could
+lead to a long wait time before the first results are shown.
      +
+Note also that with rclone version 1.58 and newer, information about
+[MIME types](https://rclone.org/overview/#mime-type) is not available when using `--fast-list`.
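+
+For example, to use it with a recursive listing:
+
+    rclone ls --fast-list remote: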
      +
      +### Modified time and hashes
      +
      +Jottacloud allows modification times to be set on objects accurate to 1
      +second. These will be used to detect whether objects need syncing or
      +not.
      +
      +Jottacloud supports MD5 type hashes, so you can use the `--checksum`
      +flag.
      +
      +Note that Jottacloud requires the MD5 hash before upload so if the
      +source does not have an MD5 checksum then the file will be cached
      +temporarily on disk (in location given by
      +[--temp-dir](https://rclone.org/docs/#temp-dir-dir)) before it is uploaded.
      +Small files will be cached in memory - see the
      +[--jottacloud-md5-memory-limit](#jottacloud-md5-memory-limit) flag.
      +When uploading from local disk the source checksum is always available,
      +so this does not apply. Starting with rclone version 1.52 the same is
      +true for encrypted remotes (in older versions the crypt backend would not
      +calculate hashes for uploads from local disk, so the Jottacloud
      +backend had to do it as described above).
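+
+For example, when copying from a source that cannot supply MD5 checksums, you
+can point the spill-to-disk location at a scratch area (the paths and source
+remote here are placeholders):
+
+    rclone copy --temp-dir /mnt/scratch srcremote:path remote:backup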
      +
      +### Restricted filename characters
      +
      +In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
      +the following characters are also replaced:
      +
      +| Character | Value | Replacement |
      +| --------- |:-----:|:-----------:|
      +| "         | 0x22  | "          |
      +| *         | 0x2A  | *          |
      +| :         | 0x3A  | :          |
      +| <         | 0x3C  | <          |
      +| >         | 0x3E  | >          |
      +| ?         | 0x3F  | ?          |
      +| \|        | 0x7C  | |          |
      +
      +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
      +as they can't be used in XML strings.
      +
      +### Deleting files
      +
      +By default, rclone will send all files to the trash when deleting files. They will be permanently
      +deleted automatically after 30 days. You may bypass the trash and permanently delete files immediately
      +by using the [--jottacloud-hard-delete](#jottacloud-hard-delete) flag, or set the equivalent environment variable.
      +Emptying the trash is supported by the [cleanup](https://rclone.org/commands/rclone_cleanup/) command.
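+
+For example, to remove a directory tree bypassing the trash (the path is a
+placeholder), either of the following sketches should work:
+
+    rclone purge --jottacloud-hard-delete remote:olddata
+
+    RCLONE_JOTTACLOUD_HARD_DELETE=true rclone purge remote:olddata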
      +
      +### Versions
      +
+Jottacloud supports file versioning. When rclone uploads a changed file it creates a new version of it on the remote.
+Currently rclone only supports retrieving the current version, but older versions can be accessed via the Jottacloud Website.
      +
+Versioning can be disabled by the `--jottacloud-no-versions` option. This is achieved by deleting the remote file prior to uploading
+a new version. If the upload then fails, no version of the file will be available in the remote.
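+
+For example, to sync without creating new versions on the remote (the paths
+are placeholders):
+
+    rclone sync --jottacloud-no-versions /home/source remote:backup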
      +
      +### Quota information
      +
      +To view your current quota you can use the `rclone about remote:`
      +command which will display your usage limit (unless it is unlimited)
      +and the current usage.
      +
      +
      +### Standard options
      +
      +Here are the Standard options specific to jottacloud (Jottacloud).
      +
      +#### --jottacloud-client-id
      +
      +OAuth Client Id.
      +
      +Leave blank normally.
      +
      +Properties:
      +
      +- Config:      client_id
      +- Env Var:     RCLONE_JOTTACLOUD_CLIENT_ID
      +- Type:        string
      +- Required:    false
      +
      +#### --jottacloud-client-secret
      +
      +OAuth Client Secret.
      +
      +Leave blank normally.
      +
      +Properties:
      +
      +- Config:      client_secret
      +- Env Var:     RCLONE_JOTTACLOUD_CLIENT_SECRET
      +- Type:        string
      +- Required:    false
      +
      +### Advanced options
      +
      +Here are the Advanced options specific to jottacloud (Jottacloud).
      +
      +#### --jottacloud-token
      +
      +OAuth Access Token as a JSON blob.
      +
      +Properties:
      +
      +- Config:      token
      +- Env Var:     RCLONE_JOTTACLOUD_TOKEN
      +- Type:        string
      +- Required:    false
      +
      +#### --jottacloud-auth-url
      +
      +Auth server URL.
      +
      +Leave blank to use the provider defaults.
      +
      +Properties:
      +
      +- Config:      auth_url
      +- Env Var:     RCLONE_JOTTACLOUD_AUTH_URL
      +- Type:        string
      +- Required:    false
      +
      +#### --jottacloud-token-url
      +
      +Token server url.
      +
      +Leave blank to use the provider defaults.
      +
      +Properties:
      +
      +- Config:      token_url
      +- Env Var:     RCLONE_JOTTACLOUD_TOKEN_URL
      +- Type:        string
      +- Required:    false
      +
      +#### --jottacloud-md5-memory-limit
      +
      +Files bigger than this will be cached on disk to calculate the MD5 if required.
      +
      +Properties:
      +
      +- Config:      md5_memory_limit
      +- Env Var:     RCLONE_JOTTACLOUD_MD5_MEMORY_LIMIT
      +- Type:        SizeSuffix
      +- Default:     10Mi
      +
      +#### --jottacloud-trashed-only
      +
      +Only show files that are in the trash.
      +
      +This will show trashed files in their original directory structure.
      +
      +Properties:
      +
      +- Config:      trashed_only
      +- Env Var:     RCLONE_JOTTACLOUD_TRASHED_ONLY
      +- Type:        bool
      +- Default:     false
      +
      +#### --jottacloud-hard-delete
      +
      +Delete files permanently rather than putting them into the trash.
      +
      +Properties:
      +
      +- Config:      hard_delete
      +- Env Var:     RCLONE_JOTTACLOUD_HARD_DELETE
      +- Type:        bool
      +- Default:     false
      +
      +#### --jottacloud-upload-resume-limit
      +
+Files bigger than this can be resumed if the upload fails.
      +
      +Properties:
      +
      +- Config:      upload_resume_limit
      +- Env Var:     RCLONE_JOTTACLOUD_UPLOAD_RESUME_LIMIT
      +- Type:        SizeSuffix
      +- Default:     10Mi
      +
      +#### --jottacloud-no-versions
      +
      +Avoid server side versioning by deleting files and recreating files instead of overwriting them.
      +
      +Properties:
      +
      +- Config:      no_versions
      +- Env Var:     RCLONE_JOTTACLOUD_NO_VERSIONS
      +- Type:        bool
      +- Default:     false
      +
      +#### --jottacloud-encoding
      +
      +The encoding for the backend.
      +
      +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
      +
      +Properties:
      +
      +- Config:      encoding
      +- Env Var:     RCLONE_JOTTACLOUD_ENCODING
      +- Type:        MultiEncoder
      +- Default:     Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot
      +
      +
      +
      +## Limitations
      +
      +Note that Jottacloud is case insensitive so you can't have a file called
      +"Hello.doc" and one called "hello.doc".
      +
+There are quite a few characters that can't be in Jottacloud file names. Rclone will map these names to and from an identical
+looking unicode equivalent. For example, if a file has a `?` in it, it will be mapped to `？` instead.
      +
      +Jottacloud only supports filenames up to 255 characters in length.
      +
      +## Troubleshooting
      +
      +Jottacloud exhibits some inconsistent behaviours regarding deleted files and folders which may cause Copy, Move and DirMove
      +operations to previously deleted paths to fail. Emptying the trash should help in such cases.
      +
      +#  Koofr
      +
      +Paths are specified as `remote:path`
      +
      +Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
      +
      +## Configuration
      +
      +The initial setup for Koofr involves creating an application password for
      +rclone. You can do that by opening the Koofr
      +[web application](https://app.koofr.net/app/admin/preferences/password),
      +giving the password a nice name like `rclone` and clicking on generate.
      +
      +Here is an example of how to make a remote called `koofr`.  First run:
      +
      +     rclone config
      +
      +This will guide you through an interactive setup process:
      +
      +

+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> koofr
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+22 / Koofr, Digi Storage and other Koofr-compatible storage providers
+   \ (koofr)
+[snip]
+Storage> koofr
+Option provider.
+Choose your storage provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Koofr, https://app.koofr.net/
+   \ (koofr)
+ 2 / Digi Storage, https://storage.rcs-rds.ro/
+   \ (digistorage)
+ 3 / Any other Koofr API compatible storage service
+   \ (other)
+provider> 1
+Option user.
+Your user name.
+Enter a value.
+user> USERNAME
+Option password.
+Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password).
+Choose an alternative below.
+y) Yes, type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+Remote config
+--------------------
+[koofr]
+type = koofr
+provider = koofr
+user = USERNAME
+password = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```

      +
      
      +You can choose to edit advanced config in order to enter your own service URL
      +if you use an on-premise or white label Koofr instance, or choose an alternative
      +mount instead of your primary storage.
      +
      +Once configured you can then use `rclone` like this,
      +
      +List directories in top level of your Koofr
      +
      +    rclone lsd koofr:
      +
      +List all the files in your Koofr
      +
      +    rclone ls koofr:
      +
+To copy a local directory to a Koofr directory called backup
      +
      +    rclone copy /home/source koofr:backup
      +
      +### Restricted filename characters
      +
      +In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
      +the following characters are also replaced:
      +
      +| Character | Value | Replacement |
      +| --------- |:-----:|:-----------:|
+| \         | 0x5C  | ＼           |
      +
      +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
      +as they can't be used in XML strings.
      +
      +
      +### Standard options
      +
      +Here are the Standard options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
      +
      +#### --koofr-provider
      +
       Choose your storage provider.
      -Choose a number from below, or type in your own value.
      -Press Enter to leave empty.
      - 1 / Koofr, https://app.koofr.net/
      -   \ (koofr)
      - 2 / Digi Storage, https://storage.rcs-rds.ro/
      -   \ (digistorage)
      - 3 / Any other Koofr API compatible storage service
      -   \ (other)
      -provider> 1    
      -Option user.
      -Your user name.
      -Enter a value.
      -user> USERNAME
      -Option password.
      -Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password).
      -Choose an alternative below.
      -y) Yes, type in my own password
      -g) Generate random password
      -y/g> y
      -Enter the password:
      -password:
      -Confirm the password:
      -password:
      -Edit advanced config?
      -y) Yes
      -n) No (default)
      -y/n> n
      -Remote config
      ---------------------
      -[koofr]
      -type = koofr
      -provider = koofr
      -user = USERNAME
      -password = *** ENCRYPTED ***
      ---------------------
      -y) Yes this is OK (default)
      -e) Edit this remote
      -d) Delete this remote
      -y/e/d> y
      -

-No remotes found, make a new one?
      -n) New remote
      -s) Set configuration password
      -q) Quit config
      -n/s/q> n
      -name> ds
      -Option Storage.
      -Type of storage to configure.
      -Choose a number from below, or type in your own value.
      -[snip]
      -22 / Koofr, Digi Storage and other Koofr-compatible storage providers
      -   \ (koofr)
      -[snip]
      -Storage> koofr
      -Option provider.
      -Choose your storage provider.
      -Choose a number from below, or type in your own value.
      -Press Enter to leave empty.
      - 1 / Koofr, https://app.koofr.net/
      -   \ (koofr)
      - 2 / Digi Storage, https://storage.rcs-rds.ro/
      -   \ (digistorage)
      - 3 / Any other Koofr API compatible storage service
      -   \ (other)
      -provider> 2
      -Option user.
      -Your user name.
      -Enter a value.
      -user> USERNAME
      -Option password.
      -Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password).
      -Choose an alternative below.
      -y) Yes, type in my own password
      -g) Generate random password
      -y/g> y
      -Enter the password:
      -password:
      -Confirm the password:
      -password:
      -Edit advanced config?
      -y) Yes
      -n) No (default)
      -y/n> n
      ---------------------
      -[ds]
      -type = koofr
      -provider = digistorage
      -user = USERNAME
      -password = *** ENCRYPTED ***
      ---------------------
      -y) Yes this is OK (default)
      -e) Edit this remote
      -d) Delete this remote
      -y/e/d> y
      -

-No remotes found, make a new one?
      -n) New remote
      -s) Set configuration password
      -q) Quit config
      -n/s/q> n
      -name> other
      -Option Storage.
      -Type of storage to configure.
      -Choose a number from below, or type in your own value.
      -[snip]
      -22 / Koofr, Digi Storage and other Koofr-compatible storage providers
      -   \ (koofr)
      -[snip]
      -Storage> koofr
      -Option provider.
      -Choose your storage provider.
      -Choose a number from below, or type in your own value.
      -Press Enter to leave empty.
      - 1 / Koofr, https://app.koofr.net/
      -   \ (koofr)
      - 2 / Digi Storage, https://storage.rcs-rds.ro/
      -   \ (digistorage)
      - 3 / Any other Koofr API compatible storage service
      -   \ (other)
      -provider> 3
      -Option endpoint.
      +
      +Properties:
      +
      +- Config:      provider
      +- Env Var:     RCLONE_KOOFR_PROVIDER
      +- Type:        string
      +- Required:    false
      +- Examples:
      +    - "koofr"
      +        - Koofr, https://app.koofr.net/
      +    - "digistorage"
      +        - Digi Storage, https://storage.rcs-rds.ro/
      +    - "other"
      +        - Any other Koofr API compatible storage service
      +
      +#### --koofr-endpoint
      +
       The Koofr API endpoint to use.
      -Enter a value.
      -endpoint> https://koofr.other.org
      -Option user.
      +
      +Properties:
      +
      +- Config:      endpoint
      +- Env Var:     RCLONE_KOOFR_ENDPOINT
      +- Provider:    other
      +- Type:        string
      +- Required:    true
      +
      +#### --koofr-user
      +
       Your user name.
      -Enter a value.
      -user> USERNAME
      -Option password.
      +
      +Properties:
      +
      +- Config:      user
      +- Env Var:     RCLONE_KOOFR_USER
      +- Type:        string
      +- Required:    true
      +
      +#### --koofr-password
      +
      +Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password).
      +
      +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
      +
      +Properties:
      +
      +- Config:      password
      +- Env Var:     RCLONE_KOOFR_PASSWORD
      +- Provider:    koofr
      +- Type:        string
      +- Required:    true
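+
+If you maintain the config file by hand, generate the obscured form of your
+password first, e.g. (the password shown is a placeholder):
+
+    rclone obscure MYPASSWORD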
      +
      +#### --koofr-password
      +
      +Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password).
      +
      +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
      +
      +Properties:
      +
      +- Config:      password
      +- Env Var:     RCLONE_KOOFR_PASSWORD
      +- Provider:    digistorage
      +- Type:        string
      +- Required:    true
      +
      +#### --koofr-password
      +
       Your password for rclone (generate one at your service's settings page).
      -Choose an alternative below.
      -y) Yes, type in my own password
      -g) Generate random password
      -y/g> y
      -Enter the password:
      -password:
      -Confirm the password:
      -password:
      -Edit advanced config?
      -y) Yes
      -n) No (default)
      -y/n> n
      ---------------------
      -[other]
      -type = koofr
      -provider = other
      -endpoint = https://koofr.other.org
      -user = USERNAME
      -password = *** ENCRYPTED ***
      ---------------------
      -y) Yes this is OK (default)
      -e) Edit this remote
      -d) Delete this remote
      -y/e/d> y
      -

-No remotes found, make a new one?
      -n) New remote
      -s) Set configuration password
      -q) Quit config
      -n/s/q> n
      -name> remote
      -Type of storage to configure.
      -Type of storage to configure.
      -Enter a string value. Press Enter for the default ("").
      -Choose a number from below, or type in your own value
      -[snip]
      -XX / Mail.ru Cloud
      -   \ "mailru"
      -[snip]
      -Storage> mailru
      -User name (usually email)
      -Enter a string value. Press Enter for the default ("").
      -user> username@mail.ru
      -Password
      +
      +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
      +
      +Properties:
      +
      +- Config:      password
      +- Env Var:     RCLONE_KOOFR_PASSWORD
      +- Provider:    other
      +- Type:        string
      +- Required:    true
      +
      +### Advanced options
      +
      +Here are the Advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
      +
      +#### --koofr-mountid
      +
      +Mount ID of the mount to use.
      +
      +If omitted, the primary mount is used.
      +
      +Properties:
      +
      +- Config:      mountid
      +- Env Var:     RCLONE_KOOFR_MOUNTID
      +- Type:        string
      +- Required:    false
      +
      +#### --koofr-setmtime
      +
      +Does the backend support setting modification time.
      +
      +Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend.
      +
      +Properties:
      +
      +- Config:      setmtime
      +- Env Var:     RCLONE_KOOFR_SETMTIME
      +- Type:        bool
      +- Default:     true
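+
+For example, to address a non-primary mount for a single command (the mount ID
+shown is a placeholder - you can look up yours in the Koofr web interface):
+
+    rclone lsd --koofr-mountid MYMOUNTID koofr: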
      +
      +#### --koofr-encoding
      +
      +The encoding for the backend.
      +
      +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
      +
      +Properties:
      +
      +- Config:      encoding
      +- Env Var:     RCLONE_KOOFR_ENCODING
      +- Type:        MultiEncoder
      +- Default:     Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
      +
      +
      +
      +## Limitations
      +
      +Note that Koofr is case insensitive so you can't have a file called
      +"Hello.doc" and one called "hello.doc".
      +
      +## Providers
      +
      +### Koofr
      +
      +This is the original [Koofr](https://koofr.eu) storage provider used as main example and described in the [configuration](#configuration) section above.
      +
      +### Digi Storage 
      +
      +[Digi Storage](https://www.digi.ro/servicii/online/digi-storage) is a cloud storage service run by [Digi.ro](https://www.digi.ro/) that
      +provides a Koofr API.
      +
      +Here is an example of how to make a remote called `ds`.  First run:
      +
      +     rclone config
      +
      +This will guide you through an interactive setup process:
      +
      +

+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> ds
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+22 / Koofr, Digi Storage and other Koofr-compatible storage providers
+   \ (koofr)
+[snip]
+Storage> koofr
+Option provider.
+Choose your storage provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Koofr, https://app.koofr.net/
+   \ (koofr)
+ 2 / Digi Storage, https://storage.rcs-rds.ro/
+   \ (digistorage)
+ 3 / Any other Koofr API compatible storage service
+   \ (other)
+provider> 2
+Option user.
+Your user name.
+Enter a value.
+user> USERNAME
+Option password.
+Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password).
+Choose an alternative below.
+y) Yes, type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+--------------------
+[ds]
+type = koofr
+provider = digistorage
+user = USERNAME
+password = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```

      +
      
      +### Other
      +
+You may also want to use another public or private storage provider that runs a Koofr API compatible service, simply by providing the base URL to connect to.
      +
      +Here is an example of how to make a remote called `other`.  First run:
      +
      +     rclone config
      +
      +This will guide you through an interactive setup process:
      +
      +

+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> other
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+22 / Koofr, Digi Storage and other Koofr-compatible storage providers
+   \ (koofr)
+[snip]
+Storage> koofr
+Option provider.
+Choose your storage provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Koofr, https://app.koofr.net/
+   \ (koofr)
+ 2 / Digi Storage, https://storage.rcs-rds.ro/
+   \ (digistorage)
+ 3 / Any other Koofr API compatible storage service
+   \ (other)
+provider> 3
+Option endpoint.
+The Koofr API endpoint to use.
+Enter a value.
+endpoint> https://koofr.other.org
+Option user.
+Your user name.
+Enter a value.
+user> USERNAME
+Option password.
+Your password for rclone (generate one at your service's settings page).
+Choose an alternative below.
+y) Yes, type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+--------------------
+[other]
+type = koofr
+provider = other
+endpoint = https://koofr.other.org
+user = USERNAME
+password = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```

      +
      
      +#  Mail.ru Cloud
      +
+[Mail.ru Cloud](https://cloud.mail.ru/) is a cloud storage service provided by the Russian internet company [Mail.Ru Group](https://mail.ru). The official desktop client is [Disk-O:](https://disk-o.cloud/en), available on Windows and Mac OS.
      +
      +## Features highlights
      +
      +- Paths may be as deep as required, e.g. `remote:directory/subdirectory`
      +- Files have a `last modified time` property, directories don't
      +- Deleted files are by default moved to the trash
      +- Files and directories can be shared via public links
      +- Partial uploads or streaming are not supported, file size must be known before upload
      +- Maximum file size is limited to 2G for a free account, unlimited for paid accounts
+- Storage keeps a hash for all files and performs transparent deduplication;
+  the hash algorithm is a modified SHA1
      +- If a particular file is already present in storage, one can quickly submit file hash
      +  instead of long file upload (this optimization is supported by rclone)
      +
      +## Configuration
      +
      +Here is an example of making a mailru configuration.
      +
      +First create a Mail.ru Cloud account and choose a tariff.
      +
      +You will need to log in and create an app password for rclone. Rclone
      +**will not work** with your normal username and password - it will
      +give an error like `oauth2: server response missing access_token`.
      +
      +- Click on your user icon in the top right
      +- Go to Security / "Пароль и безопасность"
      +- Click password for apps / "Пароли для внешних приложений"
      +- Add the password - give it a name - eg "rclone"
      +- Copy the password and use this password below - your normal login password won't work.
      +
      +Now run
      +
      +    rclone config
      +
      +This will guide you through an interactive setup process:
      +
      +

+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / Mail.ru Cloud
+   \ "mailru"
+[snip]
+Storage> mailru
+User name (usually email)
+Enter a string value. Press Enter for the default ("").
+user> username@mail.ru
+Password
+
+This must be an app password - rclone will not work with your normal
+password. See the Configuration section in the docs for how to make an
+app password.
+y) Yes type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Skip full upload if there is another file with same data hash.
+This feature is called "speedup" or "put by hash". It is especially efficient
+in case of generally available files like popular books, video or audio clips
+[snip]
+Enter a boolean value (true or false). Press Enter for the default ("true").
+Choose a number from below, or type in your own value
+ 1 / Enable
+   \ "true"
+ 2 / Disable
+   \ "false"
+speedup_enable> 1
+Edit advanced config? (y/n)
+y) Yes
+n) No
+y/n> n
+Remote config
+--------------------
+[remote]
+type = mailru
+user = username@mail.ru
+pass = *** ENCRYPTED ***
+speedup_enable = true
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```

      +
      
      +Configuration of this backend does not require a local web browser.
      +You can use the configured backend as shown below:
      +
      +See top level directories
      +
      +    rclone lsd remote:
      +
      +Make a new directory
      +
      +    rclone mkdir remote:directory
      +
      +List the contents of a directory
      +
      +    rclone ls remote:directory
      +
      +Sync `/home/local/directory` to the remote path, deleting any
      +excess files in the path.
      +
      +    rclone sync --interactive /home/local/directory remote:directory
      +
      +### Modified time
      +
      +Files support a modification time attribute with up to 1 second precision.
      +Directories do not have a modification time, which is shown as "Jan 1 1970".
      +
      +### Hash checksums
      +
      +Hash sums use a custom Mail.ru algorithm based on SHA1.
      +If file size is less than or equal to the SHA1 block size (20 bytes),
      +its hash is simply its data right-padded with zero bytes.
      +Hash sum of a larger file is computed as a SHA1 sum of the file data
      +bytes concatenated with a decimal representation of the data length.
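+
+A minimal sketch of this scheme in Go, based purely on the description above
+rather than on rclone's actual implementation (see the mailru backend in the
+rclone source for the authoritative version):
+
+```go
+package main
+
+import (
+	"crypto/sha1"
+	"fmt"
+)
+
+// mailruHash follows the documented scheme: data of 20 bytes or less is
+// right-padded with zero bytes; larger data is hashed as
+// SHA1(data || decimal length). This is an illustration only.
+func mailruHash(data []byte) []byte {
+	if len(data) <= sha1.Size { // sha1.Size == 20
+		sum := make([]byte, sha1.Size)
+		copy(sum, data)
+		return sum
+	}
+	h := sha1.New()
+	h.Write(data)
+	fmt.Fprintf(h, "%d", len(data))
+	return h.Sum(nil)
+}
+
+func main() {
+	fmt.Printf("%x\n", mailruHash([]byte("hello, mail.ru")))
+}
+```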
      +
      +### Emptying Trash
      +
      +Removing a file or directory actually moves it to the trash, which is not
      +visible to rclone but can be seen in a web browser. The trashed file
      +still occupies part of total quota. If you wish to empty your trash
      +and free some quota, you can use the `rclone cleanup remote:` command,
      +which will permanently delete all your trashed files.
      +This command does not take any path arguments.
      +
      +### Quota information
      +
      +To view your current quota you can use the `rclone about remote:`
      +command which will display your usage limit (quota) and the current usage.
      +
      +### Restricted filename characters
      +
      +In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
      +the following characters are also replaced:
      +
+| Character | Value | Replacement |
+| --------- |:-----:|:-----------:|
+| "         | 0x22  | ＂          |
+| *         | 0x2A  | ＊          |
+| :         | 0x3A  | ：          |
+| <         | 0x3C  | ＜          |
+| >         | 0x3E  | ＞          |
+| ?         | 0x3F  | ？          |
+| \         | 0x5C  | ＼          |
+| \|        | 0x7C  | ｜          |
      +
      +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
      +as they can't be used in JSON strings.
      +
      +
      +### Standard options
      +
      +Here are the Standard options specific to mailru (Mail.ru Cloud).
      +
      +#### --mailru-client-id
      +
      +OAuth Client Id.
      +
      +Leave blank normally.
      +
      +Properties:
      +
      +- Config:      client_id
      +- Env Var:     RCLONE_MAILRU_CLIENT_ID
      +- Type:        string
      +- Required:    false
      +
      +#### --mailru-client-secret
      +
      +OAuth Client Secret.
      +
      +Leave blank normally.
      +
      +Properties:
      +
      +- Config:      client_secret
      +- Env Var:     RCLONE_MAILRU_CLIENT_SECRET
      +- Type:        string
      +- Required:    false
      +
      +#### --mailru-user
      +
      +User name (usually email).
      +
      +Properties:
      +
      +- Config:      user
      +- Env Var:     RCLONE_MAILRU_USER
      +- Type:        string
      +- Required:    true
      +
      +#### --mailru-pass
      +
      +Password.
       
       This must be an app password - rclone will not work with your normal
       password. See the Configuration section in the docs for how to make an
       app password.
      -y) Yes type in my own password
      -g) Generate random password
      -y/g> y
      -Enter the password:
      -password:
      -Confirm the password:
      -password:
      +
      +
      +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
      +
      +Properties:
      +
      +- Config:      pass
      +- Env Var:     RCLONE_MAILRU_PASS
      +- Type:        string
      +- Required:    true
      +
      +#### --mailru-speedup-enable
      +
       Skip full upload if there is another file with same data hash.
      +
       This feature is called "speedup" or "put by hash". It is especially efficient
      -in case of generally available files like popular books, video or audio clips
      -[snip]
      -Enter a boolean value (true or false). Press Enter for the default ("true").
      -Choose a number from below, or type in your own value
      - 1 / Enable
      -   \ "true"
      - 2 / Disable
      -   \ "false"
      -speedup_enable> 1
      -Edit advanced config? (y/n)
      -y) Yes
      -n) No
      -y/n> n
      -Remote config
      ---------------------
      -[remote]
      -type = mailru
      -user = username@mail.ru
      -pass = *** ENCRYPTED ***
      -speedup_enable = true
      ---------------------
      -y) Yes this is OK
      -e) Edit this remote
      -d) Delete this remote
      -y/e/d> y
      -

      +in case of generally available files like popular books, video or audio clips,
      +because files are searched by hash in all accounts of all mailru users.
      +It is meaningless and ineffective if source file is unique or encrypted.
      +Please note that rclone may need local memory and disk space to calculate
      +content hash in advance and decide whether full upload is required.
      +Also, if rclone does not know file size in advance (e.g. in case of
      +streaming or partial uploads), it will not even try this optimization.
      +
      +Properties:
      +
      +- Config:      speedup_enable
      +- Env Var:     RCLONE_MAILRU_SPEEDUP_ENABLE
      +- Type:        bool
      +- Default:     true
      +- Examples:
      +    - "true"
      +        - Enable
      +    - "false"
      +        - Disable
      +
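+Speedup can be toggled per run with the flag or environment variable
+documented in the properties above, e.g. (a minimal sketch):
+
+    # Disable put-by-hash for this transfer only
+    rclone copy /home/source remote:backup --mailru-speedup-enable=false
+
+    # Equivalently, via the environment variable
+    RCLONE_MAILRU_SPEEDUP_ENABLE=false rclone copy /home/source remote:backup
+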
      +### Advanced options
      +
      +Here are the Advanced options specific to mailru (Mail.ru Cloud).
      +
      +#### --mailru-token
      +
      +OAuth Access Token as a JSON blob.
      +
      +Properties:
      +
      +- Config:      token
      +- Env Var:     RCLONE_MAILRU_TOKEN
      +- Type:        string
      +- Required:    false
      +
      +#### --mailru-auth-url
      +
      +Auth server URL.
      +
      +Leave blank to use the provider defaults.
      +
      +Properties:
      +
      +- Config:      auth_url
      +- Env Var:     RCLONE_MAILRU_AUTH_URL
      +- Type:        string
      +- Required:    false
      +
      +#### --mailru-token-url
      +
      +Token server url.
      +
      +Leave blank to use the provider defaults.
      +
      +Properties:
      +
      +- Config:      token_url
      +- Env Var:     RCLONE_MAILRU_TOKEN_URL
      +- Type:        string
      +- Required:    false
      +
      +#### --mailru-speedup-file-patterns
      +
      +Comma separated list of file name patterns eligible for speedup (put by hash).
      +
      +Patterns are case insensitive and can contain '*' or '?' meta characters.
      +
      +Properties:
      +
      +- Config:      speedup_file_patterns
      +- Env Var:     RCLONE_MAILRU_SPEEDUP_FILE_PATTERNS
      +- Type:        string
      +- Default:     "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf"
      +- Examples:
      +    - ""
      +        - Empty list completely disables speedup (put by hash).
      +    - "*"
      +        - All files will be attempted for speedup.
      +    - "*.mkv,*.avi,*.mp4,*.mp3"
      +        - Only common audio/video files will be tried for put by hash.
      +    - "*.zip,*.gz,*.rar,*.pdf"
      +        - Only common archives or PDF books will be tried for speedup.
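+
+For example, to attempt put-by-hash only for archives (an illustrative
+pattern list, not a recommendation):
+
+    rclone copy /home/source remote:backup \
+        --mailru-speedup-file-patterns "*.zip,*.gz,*.rar"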
      +
      +#### --mailru-speedup-max-disk
      +
      +This option allows you to disable speedup (put by hash) for large files.
      +
+The reason is that preliminary hashing can exhaust your RAM or disk space.
      +
      +Properties:
      +
      +- Config:      speedup_max_disk
      +- Env Var:     RCLONE_MAILRU_SPEEDUP_MAX_DISK
      +- Type:        SizeSuffix
      +- Default:     3Gi
      +- Examples:
      +    - "0"
      +        - Completely disable speedup (put by hash).
      +    - "1G"
      +        - Files larger than 1Gb will be uploaded directly.
      +    - "3G"
      +        - Choose this option if you have less than 3Gb free on local disk.
      +
      +#### --mailru-speedup-max-memory
      +
      +Files larger than the size given below will always be hashed on disk.
      +
      +Properties:
      +
      +- Config:      speedup_max_memory
      +- Env Var:     RCLONE_MAILRU_SPEEDUP_MAX_MEMORY
      +- Type:        SizeSuffix
      +- Default:     32Mi
      +- Examples:
      +    - "0"
      +        - Preliminary hashing will always be done in a temporary disk location.
      +    - "32M"
      +        - Do not dedicate more than 32Mb RAM for preliminary hashing.
      +    - "256M"
      +        - You have at most 256Mb RAM free for hash calculations.
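+
+The disk and memory limits combine, e.g. on a machine with little free
+RAM but ample disk space (values are only an illustration):
+
+    rclone copy /home/source remote:backup \
+        --mailru-speedup-max-memory 64M --mailru-speedup-max-disk 10G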
      +
      +#### --mailru-check-hash
      +
      +What should copy do if file checksum is mismatched or invalid.
      +
      +Properties:
      +
      +- Config:      check_hash
      +- Env Var:     RCLONE_MAILRU_CHECK_HASH
      +- Type:        bool
      +- Default:     true
      +- Examples:
      +    - "true"
      +        - Fail with error.
      +    - "false"
      +        - Ignore and continue.
      +
      +#### --mailru-user-agent
      +
      +HTTP user agent used internally by client.
      +
      +Defaults to "rclone/VERSION" or "--user-agent" provided on command line.
      +
      +Properties:
      +
      +- Config:      user_agent
      +- Env Var:     RCLONE_MAILRU_USER_AGENT
      +- Type:        string
      +- Required:    false
      +
      +#### --mailru-quirks
      +
      +Comma separated list of internal maintenance flags.
      +
      +This option must not be used by an ordinary user. It is intended only to
      +facilitate remote troubleshooting of backend issues. Strict meaning of
      +flags is not documented and not guaranteed to persist between releases.
      +Quirks will be removed when the backend grows stable.
      +Supported quirks: atomicmkdir binlist unknowndirs
      +
      +Properties:
      +
      +- Config:      quirks
      +- Env Var:     RCLONE_MAILRU_QUIRKS
      +- Type:        string
      +- Required:    false
      +
      +#### --mailru-encoding
      +
      +The encoding for the backend.
      +
      +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
      +
      +Properties:
      +
      +- Config:      encoding
      +- Env Var:     RCLONE_MAILRU_ENCODING
      +- Type:        MultiEncoder
      +- Default:     Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot
      +
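+If a particular character class causes trouble you can override the
+default with a subset of it, e.g. (values taken from the default list
+above):
+
+    rclone lsf remote: --mailru-encoding "Slash,InvalidUtf8,Dot"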
      +
      +
      +## Limitations
      +
      +File size limits depend on your account. A single file size is limited by 2G
      +for a free account and unlimited for paid tariffs. Please refer to the Mail.ru
      +site for the total uploaded size limits.
      +
      +Note that Mailru is case insensitive so you can't have a file called
      +"Hello.doc" and one called "hello.doc".
      +
      +#  Mega
      +
      +[Mega](https://mega.nz/) is a cloud storage and file hosting service
      +known for its security feature where all files are encrypted locally
      +before they are uploaded. This prevents anyone (including employees of
      +Mega) from accessing the files without knowledge of the key used for
      +encryption.
      +
      +This is an rclone backend for Mega which supports the file transfer
      +features of Mega using the same client side encryption.
      +
      +Paths are specified as `remote:path`
      +
      +Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
      +
      +## Configuration
      +
      +Here is an example of how to make a remote called `remote`.  First run:
      +
      +     rclone config
      +
      +This will guide you through an interactive setup process:
      +
      +

+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Mega
+   \ "mega"
+[snip]
+Storage> mega
+User name
+user> you@example.com
+Password.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank
+y/g/n> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Remote config
+--------------------
+[remote]
+type = mega
+user = you@example.com
+pass = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y

      +
      
      +**NOTE:** The encryption keys need to have been already generated after a regular login
      +via the browser, otherwise attempting to use the credentials in `rclone` will fail.
      +
      +Once configured you can then use `rclone` like this,
      +
      +List directories in top level of your Mega
      +
      +    rclone lsd remote:
      +
      +List all the files in your Mega
      +
      +    rclone ls remote:
      +
+To copy a local directory to a Mega directory called backup
      +
      +    rclone copy /home/source remote:backup
      +
      +### Modified time and hashes
      +
      +Mega does not support modification times or hashes yet.
      +
      +### Restricted filename characters
      +
      +| Character | Value | Replacement |
      +| --------- |:-----:|:-----------:|
      +| NUL       | 0x00  | ␀           |
+| /         | 0x2F  | ／          |
      +
      +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
      +as they can't be used in JSON strings.
      +
      +### Duplicated files
      +
      +Mega can have two files with exactly the same name and path (unlike a
      +normal file system).
      +
      +Duplicated files cause problems with the syncing and you will see
      +messages in the log about duplicates.
      +
      +Use `rclone dedupe` to fix duplicated files.
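+
+For example, a non-interactive pass that keeps the newest version of
+each duplicate (the mode choice here is illustrative):
+
+    rclone dedupe --dedupe-mode newest remote:directory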
      +
      +### Failure to log-in
      +
      +#### Object not found
      +
      +If you are connecting to your Mega remote for the first time, 
      +to test access and synchronization, you may receive an error such as 
      +
      +

+Failed to create file system for "my-mega-remote:":
+couldn't login: Object (typically, node or user) not found

      +
      
      +The diagnostic steps often recommended in the [rclone forum](https://forum.rclone.org/search?q=mega)
      +start with the **MEGAcmd** utility. Note that this refers to 
      +the official C++ command from https://github.com/meganz/MEGAcmd 
      +and not the go language built command from t3rm1n4l/megacmd 
      +that is no longer maintained. 
      +
      +Follow the instructions for installing MEGAcmd and try accessing 
      +your remote as they recommend. You can establish whether or not 
      +you can log in using MEGAcmd, and obtain diagnostic information 
      +to help you, and search or work with others in the forum. 
      +
      +

+MEGA CMD> login me@example.com
+Password:
+Fetching nodes ...
+Loading transfers from local cache
+Login complete as me@example.com
+me@example.com:/$

      +
      
      +Note that some have found issues with passwords containing special 
      +characters. If you can not log on with rclone, but MEGAcmd logs on 
      +just fine, then consider changing your password temporarily to 
      +pure alphanumeric characters, in case that helps.
      +
      +
      +#### Repeated commands blocks access
      +
      +Mega remotes seem to get blocked (reject logins) under "heavy use".
      +We haven't worked out the exact blocking rules but it seems to be
      +related to fast paced, successive rclone commands.
      +
+For example, executing the command `rclone link remote:file` 90 times
+in a row will cause the remote to become "blocked". This is not an
+abnormal situation, for example if you wish to get the public links of
+a directory with hundreds of files...  After more or less a week, the
+remote will accept rclone logins normally again.
      +
+You can mitigate this issue by mounting the remote with `rclone
+mount`. This will log in when mounting and log out when unmounting
+only. You can also run `rclone rcd` and then use `rclone rc` to run
+the commands over the API to avoid logging in each time.
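+
+For example (a sketch; `operations/publiclink` is the rc equivalent of
+`rclone link`, and the file name is hypothetical):
+
+    rclone rcd &
+    rclone rc operations/publiclink fs=remote: remote=file.txt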
      +
      +Rclone does not currently close mega sessions (you can see them in the
      +web interface), however closing the sessions does not solve the issue.
      +
      +If you space rclone commands by 3 seconds it will avoid blocking the
      +remote. We haven't identified the exact blocking rules, so perhaps one
      +could execute the command 80 times without waiting and avoid blocking
      +by waiting 3 seconds, then continuing...
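+
+A minimal shell sketch of such spacing (the file names are
+hypothetical):
+
+    for f in file1 file2 file3; do
+        rclone link "remote:$f"
+        sleep 3
+    done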
      +
      +Note that this has been observed by trial and error and might not be
      +set in stone.
      +
      +Other tools seem not to produce this blocking effect, as they use a
      +different working approach (state-based, using sessionIDs instead of
      +log-in) which isn't compatible with the current stateless rclone
      +approach.
      +
+Note that once blocked, the use of other tools (such as megacmd) is
+not a sure workaround: the following megacmd login times have been
+observed in succession for a blocked remote: 7 min, 20 min, 30 min,
+30 min, 30 min. Web access looks unaffected though.
      +
      +Investigation is continuing in relation to workarounds based on
      +timeouts, pacers, retrials and tpslimits - if you discover something
      +relevant, please post on the forum.
      +
      +So, if rclone was working nicely and suddenly you are unable to log-in
      +and you are sure the user and the password are correct, likely you
      +have got the remote blocked for a while.
      +
      +
      +### Standard options
      +
      +Here are the Standard options specific to mega (Mega).
      +
      +#### --mega-user
      +
      +User name.
      +
      +Properties:
      +
      +- Config:      user
      +- Env Var:     RCLONE_MEGA_USER
      +- Type:        string
      +- Required:    true
      +
      +#### --mega-pass
      +
       Password.

+
+**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+Properties:
+
+- Config:      pass
+- Env Var:     RCLONE_MEGA_PASS
+- Type:        string
+- Required:    true
+
+### Advanced options
+
+Here are the Advanced options specific to mega (Mega).
+
+#### --mega-debug
+
+Output more debug from Mega.
+
+If this flag is set (along with -vv) it will print further debugging
+information from the mega backend.
+
+Properties:
+
+- Config:      debug
+- Env Var:     RCLONE_MEGA_DEBUG
+- Type:        bool
+- Default:     false
+
+#### --mega-hard-delete
+
+Delete files permanently rather than putting them into the trash.
+
+Normally the mega backend will put all deletions into the trash rather
+than permanently deleting them. If you specify this then rclone will
+permanently delete objects instead.
+
+Properties:
+
+- Config:      hard_delete
+- Env Var:     RCLONE_MEGA_HARD_DELETE
+- Type:        bool
+- Default:     false
+
+#### --mega-use-https
+
+Use HTTPS for transfers.
+
+MEGA uses plain text HTTP connections by default. Some ISPs throttle
+HTTP connections, which causes transfers to become very slow. Enabling
+this will force MEGA to use HTTPS for all transfers. HTTPS is normally
+not necessary since all data is already encrypted anyway. Enabling it
+will increase CPU usage and add network overhead.
+
+Properties:
+
+- Config:      use_https
+- Env Var:     RCLONE_MEGA_USE_HTTPS
+- Type:        bool
+- Default:     false
+
+#### --mega-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config:      encoding
+- Env Var:     RCLONE_MEGA_ENCODING
+- Type:        MultiEncoder
+- Default:     Slash,InvalidUtf8,Dot
+
+### Process `killed`
+
+On accounts with large files or a large number of objects, memory usage
+can significantly increase when executing list/sync instructions. When
+running on cloud providers (like AWS with EC2), check if the instance
+type has sufficient memory/CPU to execute the commands. Use the
+resource monitoring tools to inspect after sending the commands. Look
+[at this issue](https://forum.rclone.org/t/rclone-with-mega-appears-to-work-only-in-some-accounts/40233/4).
+
+## Limitations
+
+This backend uses the [go-mega go library](https://github.com/t3rm1n4l/go-mega) which is an opensource
+go library implementing the Mega API. There doesn't appear to be any
+documentation for the mega protocol beyond the [mega C++ SDK](https://github.com/meganz/sdk) source code
+so there are likely quite a few errors still remaining in this library.
+
+Mega allows duplicate files which may confuse rclone.
+
+# Memory
+
+The memory backend is an in RAM backend. It does not persist its
+data - use the local backend for that.
+
+The memory backend behaves like a bucket-based remote (e.g. like
+s3). Because it has no parameters you can just use it with the
+`:memory:` remote name.
+
+## Configuration
+
+You can configure it as a remote like this with `rclone config` too if
+you want to:
      +

+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / Memory
+   \ "memory"
+[snip]
+Storage> memory
+** See help for memory backend at: https://rclone.org/memory/ **
+
+Remote config
+
+--------------------
+[remote]
+type = memory
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y

      +
      
      +Because the memory backend isn't persistent it is most useful for
      +testing or with an rclone server or rclone mount, e.g.
       
      +    rclone mount :memory: /mnt/tmp
      +    rclone serve webdav :memory:
      +    rclone serve sftp :memory:
       
+### Modified time and hashes
+
+The memory backend supports MD5 hashes and modification times accurate to 1 nS.
+
+### Restricted filename characters
+
+The memory backend replaces the [default restricted characters
+set](https://rclone.org/overview/#restricted-characters).
+
+# Akamai NetStorage
+
+Paths are specified as `remote:`
+You may put subdirectories in too, e.g. `remote:/path/to/dir`.
+If you have a CP code you can use that as the folder after the domain such as \<domain>\/\<cpcode>\/\<internal directories within cpcode>.
+
+For example, this is commonly configured with or without a CP code:
+* **With a CP code**. `[your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/`
+* **Without a CP code**. `[your-domain-prefix]-nsu.akamaihd.net`
+
+See all buckets
+
+    rclone lsd remote:
+
+The initial setup for Netstorage involves getting an account and secret. Use `rclone config` to walk you through the setup process.
+
+## Configuration
+
+Here's an example of how to make a remote called `ns1`.
+
+1. To begin the interactive configuration process, enter this command:
      +

+    rclone config
+
+2. Type `n` to create a new remote.
+
+n) New remote
+d) Delete remote
+q) Quit config
+e/n/d/q> n
+
+3. For this example, enter `ns1` when you reach the name> prompt.
+
+name> ns1
+
+4. Enter `netstorage` as the type of storage to configure.
+
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+XX / NetStorage
+   \ "netstorage"
+Storage> netstorage
+
+5. Select between the HTTP or HTTPS protocol. Most users should choose HTTPS, which is the default. HTTP is provided primarily for debugging purposes.
+
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+ 1 / HTTP protocol
+   \ "http"
+ 2 / HTTPS protocol
+   \ "https"
+protocol> 1
+
+6. Specify your NetStorage host, CP code, and any necessary content paths using this format: `<domain>/<cpcode>/<content>/`
+
+Enter a string value. Press Enter for the default ("").
+host> baseball-nsu.akamaihd.net/123456/content/
+
+7. Set the netstorage account name
+
+Enter a string value. Press Enter for the default ("").
+account> username
+
+8. Set the Netstorage account secret/G2O key which will be used for authentication purposes. Select the `y` option to set your own password then enter your secret.
+Note: The secret is stored in the `rclone.conf` file with hex-encoded encryption.
+
+y) Yes type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
          -
        1. View the summary and confirm your remote configuration.
        2. -
        -
        [ns1]
        -type = netstorage
        -protocol = http
        -host = baseball-nsu.akamaihd.net/123456/content/
        -account = username
        -secret = *** ENCRYPTED ***
        ---------------------
        -y) Yes this is OK (default)
        -e) Edit this remote
        -d) Delete this remote
        -y/e/d> y
        -

        This remote is called ns1 and can now be used.

Example operations

Get started with rclone and NetStorage with these examples. For additional rclone commands, visit https://rclone.org/commands/.

See contents of a directory in your project

    rclone lsd ns1:/974012/testing/

Sync the contents local with remote

    rclone sync . ns1:/974012/testing/

Upload local content to remote

    rclone copy notes.txt ns1:/974012/testing/

Delete content on remote

    rclone delete ns1:/974012/testing/notes.txt

Move or copy content between CP codes.

Your credentials must have access to two CP codes on the same remote. You can't perform operations between different remotes.

    rclone move ns1:/974012/testing/notes.txt ns1:/974450/testing2/

Features

The Netstorage backend changes the rclone --links, -l behavior. When uploading, instead of creating the .rclonelink file, rclone uses the "symlink" API to create the corresponding symlink on the remote. The .rclonelink file will not be created, the upload will be intercepted and only the symlink file that matches the source file name with no suffix will be created on the remote.

This will effectively allow commands like copy/copyto, move/moveto and sync to upload from local to remote and download from remote to local directories with symlinks. Due to internal rclone limitations, it is not possible to upload an individual symlink file to any remote backend. You can always use the "backend symlink" command to create a symlink on the NetStorage server, refer to the "symlink" section below.

Individual symlink files on the remote can be used with commands like "cat" to print the destination name, or "delete" to delete the symlink, or copy/copyto and move/moveto to download from the remote to local. Note: individual symlink files on the remote should be specified including the suffix .rclonelink.

Note: No file with the suffix .rclonelink should ever exist on the server since it is not possible to actually upload/create a file with .rclonelink suffix with rclone, it can only exist if it is manually created through a non-rclone method on the remote.

Implicit vs. Explicit Directories

With NetStorage, directories can exist in one of two forms:

1. Explicit Directory. This is an actual, physical directory that you have created in a storage group.
2. Implicit Directory. This refers to a directory within a path that has not been physically created. For example, during upload of a file, nonexistent subdirectories can be specified in the target path. NetStorage creates these as "implicit." While the directories aren't physically created, they exist implicitly and the noted path is connected with the uploaded file.

Rclone will intercept all file uploads and mkdir commands for the NetStorage remote and will explicitly issue the mkdir command for each directory in the uploading path. This will help with the interoperability with the other Akamai services such as SFTP and the Content Management Shell (CMShell). Rclone will not guarantee correctness of operations with implicit directories which might have been created as a result of using an upload API directly.

--fast-list / ListR support

NetStorage remote supports the ListR feature by using the "list" NetStorage API action to return a lexicographical list of all objects within the specified CP code, recursing into subdirectories as they're encountered.

• Rclone will use the ListR method for some commands by default. Commands such as lsf -R will use ListR by default. To disable this, include the --disable listR option to use the non-recursive method of listing objects.
• Rclone will not use the ListR method for some commands. Commands such as sync don't use ListR by default. To force using the ListR method, include the --fast-list option.

There are pros and cons of using the ListR method, refer to rclone documentation. In general, the sync command over an existing deep tree on the remote will run faster with the "--fast-list" flag but with extra memory usage as a side effect. It might also result in higher CPU utilization but the whole task can be completed faster, as the example below illustrates.

Note: There is a known limitation that "lsf -R" will display number of files in the directory and directory size as -1 when ListR method is used. The workaround is to pass "--disable listR" flag if these numbers are important in the output.
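
For example, a deep-tree sync forced to use the ListR method, and a recursive listing with ListR disabled so that file counts and sizes are reported (paths are hypothetical):

    rclone sync --fast-list /home/source ns1:/974012/testing/
    rclone lsf -R --disable listR ns1:/974012/testing/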

Purge

NetStorage remote supports the purge feature by using the "quick-delete" NetStorage API action. The quick-delete action is disabled by default for security reasons and can be enabled for the account through the Akamai portal. Rclone will first try to use quick-delete action for the purge command and if this functionality is disabled then will fall back to a standard delete method.

Note: Read the NetStorage Usage API for considerations when using "quick-delete". In general, using quick-delete method will not delete the tree immediately and objects targeted for quick-delete may still be accessible.
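
A minimal illustration (the path is hypothetical):

    rclone purge ns1:/974012/testing/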

Standard options

Here are the Standard options specific to netstorage (Akamai NetStorage).

--netstorage-host

Domain+path of NetStorage host to connect to.

Format should be <domain>/<internal folders>

Properties:

• Config: host
• Env Var: RCLONE_NETSTORAGE_HOST
• Type: string
• Required: true

--netstorage-account

Set the NetStorage account name

Properties:

• Config: account
• Env Var: RCLONE_NETSTORAGE_ACCOUNT
• Type: string
• Required: true

--netstorage-secret

Set the NetStorage account secret/G2O key for authentication.

Please choose the 'y' option to set your own password then enter your secret.

NB Input to this must be obscured - see rclone obscure.

Properties:

• Config: secret
• Env Var: RCLONE_NETSTORAGE_SECRET
• Type: string
• Required: true

Advanced options

Here are the Advanced options specific to netstorage (Akamai NetStorage).

--netstorage-protocol

Select between HTTP or HTTPS protocol.

Most users should choose HTTPS, which is the default. HTTP is provided primarily for debugging purposes.

Properties:

• Config: protocol
• Env Var: RCLONE_NETSTORAGE_PROTOCOL
• Type: string
• Default: "https"
• Examples:
    • "http"
        • HTTP protocol
    • "https"
        • HTTPS protocol

Backend commands

Here are the commands specific to the netstorage backend.

Run them with

    rclone backend COMMAND remote:

The help below will explain what arguments each command takes.

See the backend command for more info on how to pass options and arguments.

These can be run on a running backend using the rc command backend/command.

du

Return disk usage information for a specified directory

    rclone backend du remote: [options] [<arguments>+]

The usage information returned includes the targeted directory as well as all files stored in any sub-directories that may exist.

symlink

You can create a symbolic link in ObjectStore with the symlink action.

    rclone backend symlink remote: [options] [<arguments>+]

The desired path location (including applicable sub-directories) ending in the object that will be the target of the symlink (for example, /links/mylink). Include the file extension for the object, if applicable.

    rclone backend symlink <src> <path>
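
For instance, following the <src> <path> form above (paths are hypothetical):

    rclone backend du ns1:/974012/testing/
    rclone backend symlink ns1:/974012/testing/notes.txt /974012/testing/links/notes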

Microsoft Azure Blob Storage

Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:container/path/to/dir.

Configuration

Here is an example of making a Microsoft Azure Blob Storage configuration for a remote called remote. First run:

    rclone config

This will guide you through an interactive setup process:

    No remotes found, make a new one?
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
    name> remote
    Type of storage to configure.
    Choose a number from below, or type in your own value
    [snip]
    XX / Microsoft Azure Blob Storage
       \ "azureblob"
    [snip]
    Storage> azureblob
    Storage Account Name
    account> account_name
    Storage Account Key
    key> base64encodedkey==
    Endpoint for the service - leave blank normally.
    endpoint>
    Remote config
    --------------------
    [remote]
    account = account_name
    key = base64encodedkey==
    endpoint =
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y

See all containers

    rclone lsd remote:

Make a new container

    rclone mkdir remote:container

List the contents of a container

    rclone ls remote:container

Sync /home/local/directory to the remote container, deleting any excess files in the container.

    rclone sync --interactive /home/local/directory remote:container

--fast-list

This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

Modified time

The modified time is stored as metadata on the object with the mtime key. It is stored using RFC3339 Format time with nanosecond precision. The metadata is supplied during directory listings so there is no performance overhead to using it.

If you wish to use the Azure standard LastModified time stored on the object as the modified time, then use the --use-server-modtime flag. Note that rclone can't set LastModified, so using the --update flag when syncing is recommended if using --use-server-modtime.
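
For example, a sketch combining the two flags as recommended above:

    rclone sync --use-server-modtime --update /home/local/directory remote:container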

Performance

When uploading large files, increasing the value of --azureblob-upload-concurrency will increase performance at the cost of using more memory. The default of 16 is set quite conservatively to use less memory. It may be necessary to raise it to 64 or higher to fully utilize a 1 GBit/s link with a single file transfer.
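
For instance, for a single large file over a fast link (the value is only an illustration):

    rclone copy --azureblob-upload-concurrency 64 /home/local/bigfile remote:container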

Restricted filename characters

In addition to the default restricted characters set the following characters are also replaced:

  Character   Value   Replacement
  ---------   -----   -----------
  /           0x2F    ／
  \           0x5C    ＼

File names can also not end with the following characters. These only get replaced if they are the last character in the name:

  Character   Value   Replacement
  ---------   -----   -----------
  .           0x2E    ．

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Hashes

MD5 hashes are stored with blobs. However blobs that were uploaded in chunks only have an MD5 if the source remote was capable of MD5 hashes, e.g. the local disk.

Authentication

There are a number of ways of supplying credentials for Azure Blob Storage. Rclone tries them in the order of the sections below.

Env Auth

If the env_auth config parameter is true then rclone will pull credentials from the environment or runtime.

It tries these authentication methods in this order:

1. Environment Variables
2. Managed Service Identity Credentials
3. Azure CLI credentials (as used by the az tool)

These are described in the following sections.

Env Auth: 1. Environment Variables

If env_auth is set and environment variables are present rclone authenticates a service principal with a secret or certificate, or a user with a password, depending on which environment variables are set. It reads configuration from these variables, in the following order:

1. Service principal with client secret
   • AZURE_TENANT_ID: ID of the service principal's tenant. Also called its "directory" ID.
   • AZURE_CLIENT_ID: the service principal's client ID
   • AZURE_CLIENT_SECRET: one of the service principal's client secrets
2. Service principal with certificate
   • AZURE_TENANT_ID: ID of the service principal's tenant. Also called its "directory" ID.
   • AZURE_CLIENT_ID: the service principal's client ID
   • AZURE_CLIENT_CERTIFICATE_PATH: path to a PEM or PKCS12 certificate file including the private key.
   • AZURE_CLIENT_CERTIFICATE_PASSWORD: (optional) password for the certificate file.
   • AZURE_CLIENT_SEND_CERTIFICATE_CHAIN: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to "true" or "1", authentication requests include the x5c header.
3. User with username and password
   • AZURE_TENANT_ID: (optional) tenant to authenticate in. Defaults to "organizations".
   • AZURE_CLIENT_ID: client ID of the application the user will authenticate to
   • AZURE_USERNAME: a username (usually an email address)
   • AZURE_PASSWORD: the user's password
4. Workload Identity
   • AZURE_TENANT_ID: Tenant to authenticate in.
   • AZURE_CLIENT_ID: Client ID of the application the user will authenticate to.
   • AZURE_FEDERATED_TOKEN_FILE: Path to projected service account token file.
   • AZURE_AUTHORITY_HOST: Authority of an Azure Active Directory endpoint (default: login.microsoftonline.com).
Env Auth: 2. Managed Service Identity Credentials

When using Managed Service Identity if the VM(SS) on which this program is running has a system-assigned identity, it will be used by default. If the resource has no system-assigned but exactly one user-assigned identity, the user-assigned identity will be used by default.

If the resource has multiple user-assigned identities you will need to unset env_auth and set use_msi instead. See the use_msi section.

Env Auth: 3. Azure CLI credentials (as used by the az tool)

Credentials created with the az tool can be picked up using env_auth.

For example if you were to login with a service principal like this:

    az login --service-principal -u XXX -p XXX --tenant XXX

Then you could access rclone resources like this:

    rclone lsf :azureblob,env_auth,account=ACCOUNT:CONTAINER

Or

    rclone lsf --azureblob-env-auth --azureblob-account=ACCOUNT :azureblob:CONTAINER

Which is analogous to using the az tool:

    az storage blob list --container-name CONTAINER --account-name ACCOUNT --auth-mode login

Account and Shared Key

This is the most straightforward and least flexible way. Just fill in the account and key lines and leave the rest blank.

SAS URL

This can be an account level SAS URL or container level SAS URL.

To use it leave account and key blank and fill in sas_url.

An account level SAS URL or container level SAS URL can be obtained from the Azure portal or the Azure Storage Explorer. To get a container level SAS URL right click on a container in the Azure Blob explorer in the Azure portal.

If you use a container level SAS URL, rclone operations are permitted only on a particular container, e.g.

    rclone ls azureblob:container

You can also list the single container from the root. This will only show the container specified by the SAS URL.

    $ rclone lsd azureblob:
    container/

Note that you can't see or access any other containers - this will fail

    rclone ls azureblob:othercontainer

Container level SAS URLs are useful for temporarily allowing third parties access to a single container or putting credentials into an untrusted environment such as a CI build server.
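
A matching config file stanza would look something like this (the URL is a placeholder):

    [azureblob]
    type = azureblob
    sas_url = https://ACCOUNT.blob.core.windows.net/container?sv=...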

Service principal with client secret

If these variables are set, rclone will authenticate with a service principal with a client secret.

• tenant: ID of the service principal's tenant. Also called its "directory" ID.
• client_id: the service principal's client ID
• client_secret: one of the service principal's client secrets

The credentials can also be placed in a file using the service_principal_file configuration option.

Service principal with certificate

If these variables are set, rclone will authenticate with a service principal with certificate.

• tenant: ID of the service principal's tenant. Also called its "directory" ID.
• client_id: the service principal's client ID
• client_certificate_path: path to a PEM or PKCS12 certificate file including the private key.
• client_certificate_password: (optional) password for the certificate file.
• client_send_certificate_chain: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to "true" or "1", authentication requests include the x5c header.

NB client_certificate_password must be obscured - see rclone obscure.
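
For example, a client-secret service principal expressed as config keys (all values are placeholders):

    [azureblob]
    type = azureblob
    account = ACCOUNT
    tenant = TENANT_ID
    client_id = CLIENT_ID
    client_secret = CLIENT_SECRET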

User with username and password

If these variables are set, rclone will authenticate with username and password.

• tenant: (optional) tenant to authenticate in. Defaults to "organizations".
• client_id: client ID of the application the user will authenticate to
• username: a username (usually an email address)
• password: the user's password

Microsoft doesn't recommend this kind of authentication, because it's less secure than other authentication flows. This method is not interactive, so it isn't compatible with any form of multi-factor authentication, and the application must already have user or admin consent. This credential can only authenticate work and school accounts; it can't authenticate Microsoft accounts.

NB password must be obscured - see rclone obscure.

Managed Service Identity Credentials

If use_msi is set then managed service identity credentials are used. This authentication only works when running in an Azure service. env_auth needs to be unset to use this.

However if you have multiple user identities to choose from these must be explicitly specified using exactly one of the msi_object_id, msi_client_id, or msi_mi_res_id parameters.

If none of msi_object_id, msi_client_id, or msi_mi_res_id is set, this is equivalent to using env_auth.

        -

        Standard options

        Here are the Standard options specific to azureblob (Microsoft Azure Blob Storage).

        --azureblob-account

        Azure Storage Account Name.

        Set this to the Azure Storage Account Name in use.

        Leave blank to use SAS URL or Emulator, otherwise it needs to be set.

        If this is blank and if env_auth is set it will be read from the environment variable AZURE_STORAGE_ACCOUNT_NAME if possible.

        Properties:

        • Config: account
        • Env Var: RCLONE_AZUREBLOB_ACCOUNT
        • Type: string
        • Required: false

        --azureblob-env-auth

        Read credentials from runtime (environment variables, CLI or MSI).

        See the authentication docs for full info.

        Properties:

        • Config: env_auth
        • Env Var: RCLONE_AZUREBLOB_ENV_AUTH
        • Type: bool
        • Default: false

        --azureblob-key

        Storage Account Shared Key.

        Leave blank to use SAS URL or Emulator.

        Properties:

        • Config: key
        • Env Var: RCLONE_AZUREBLOB_KEY
        • Type: string
        • Required: false

        --azureblob-sas-url

        SAS URL for container level access only.

        Leave blank if using account/key or Emulator.

        Properties:

        • Config: sas_url
        • Env Var: RCLONE_AZUREBLOB_SAS_URL
        • Type: string
        • Required: false

        --azureblob-tenant

        ID of the service principal's tenant. Also called its directory ID.

        Set this if using
        • Service principal with client secret
        • Service principal with certificate
        • User with username and password

        Properties:

        • Config: tenant
        • Env Var: RCLONE_AZUREBLOB_TENANT
        • Type: string
        • Required: false

        --azureblob-client-id

        The ID of the client in use.

        Set this if using
        • Service principal with client secret
        • Service principal with certificate
        • User with username and password

        Properties:

        • Config: client_id
        • Env Var: RCLONE_AZUREBLOB_CLIENT_ID
        • Type: string
        • Required: false

        --azureblob-client-secret

        One of the service principal's client secrets

        Set this if using
        • Service principal with client secret

        Properties:

        • Config: client_secret
        • Env Var: RCLONE_AZUREBLOB_CLIENT_SECRET
        • Type: string
        • Required: false

        --azureblob-client-certificate-path

        Path to a PEM or PKCS12 certificate file including the private key.

        Set this if using
        • Service principal with certificate

        Properties:

        • Config: client_certificate_path
        • Env Var: RCLONE_AZUREBLOB_CLIENT_CERTIFICATE_PATH
        • Type: string
        • Required: false

        --azureblob-client-certificate-password

        Password for the certificate file (optional).

        Optionally set this if using
        • Service principal with certificate

        And the certificate has a password.

        NB Input to this must be obscured - see rclone obscure.

        Properties:

        • Config: client_certificate_password
        • Env Var: RCLONE_AZUREBLOB_CLIENT_CERTIFICATE_PASSWORD
        • Type: string
        • Required: false

        Advanced options

        Here are the Advanced options specific to azureblob (Microsoft Azure Blob Storage).

        --azureblob-client-send-certificate-chain

        Send the certificate chain when using certificate auth.

        Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to true, authentication requests include the x5c header.

        Optionally set this if using
        • Service principal with certificate

        Properties:

        • Config: client_send_certificate_chain
        • Env Var: RCLONE_AZUREBLOB_CLIENT_SEND_CERTIFICATE_CHAIN
        • Type: bool
        • Default: false

        --azureblob-username

        User name (usually an email address)

        Set this if using
        • User with username and password

        Properties:

        • Config: username
        • Env Var: RCLONE_AZUREBLOB_USERNAME
        • Type: string
        • Required: false

        --azureblob-password

        The user's password

        Set this if using
        • User with username and password

        NB Input to this must be obscured - see rclone obscure.

        Properties:

        • Config: password
        • Env Var: RCLONE_AZUREBLOB_PASSWORD
        • Type: string
        • Required: false

        --azureblob-service-principal-file

        Path to file containing credentials for use with a service principal.

        Leave blank normally. Needed only if you want to use a service principal instead of interactive login.

            $ az ad sp create-for-rbac --name "<name>" \
              --role "Storage Blob Data Owner" \
              --scopes "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default/containers/<container>" \
              > azure-principal.json

        See "Create an Azure service principal" and "Assign an Azure role for access to blob data" pages for more details.

        It may be more convenient to put the credentials directly into the rclone config file under the client_id, tenant and client_secret keys instead of setting service_principal_file.

        Properties:

        • Config: service_principal_file
        • Env Var: RCLONE_AZUREBLOB_SERVICE_PRINCIPAL_FILE
        • Type: string
        • Required: false

        --azureblob-use-msi

        Use a managed service identity to authenticate (only works in Azure).

        When true, use a managed service identity to authenticate to Azure Storage instead of a SAS token or account key.

        If the VM(SS) on which this program is running has a system-assigned identity, it will be used by default. If the resource has no system-assigned but exactly one user-assigned identity, the user-assigned identity will be used by default. If the resource has multiple user-assigned identities, the identity to use must be explicitly specified using exactly one of the msi_object_id, msi_client_id, or msi_mi_res_id parameters.

        Properties:

        • Config: use_msi
        • Env Var: RCLONE_AZUREBLOB_USE_MSI
        • Type: bool
        • Default: false

        --azureblob-msi-object-id

        Object ID of the user-assigned MSI to use, if any.

        Leave blank if msi_client_id or msi_mi_res_id specified.

        Properties:

        • Config: msi_object_id
        • Env Var: RCLONE_AZUREBLOB_MSI_OBJECT_ID
        • Type: string
        • Required: false

        --azureblob-msi-client-id

        Object ID of the user-assigned MSI to use, if any.

        Leave blank if msi_object_id or msi_mi_res_id specified.

        Properties:

        • Config: msi_client_id
        • Env Var: RCLONE_AZUREBLOB_MSI_CLIENT_ID
        • Type: string
        • Required: false

        --azureblob-msi-mi-res-id

        Azure resource ID of the user-assigned MSI to use, if any.

        Leave blank if msi_client_id or msi_object_id specified.

        Properties:

        • Config: msi_mi_res_id
        • Env Var: RCLONE_AZUREBLOB_MSI_MI_RES_ID
        • Type: string
        • Required: false

        --azureblob-use-emulator

        Uses local storage emulator if provided as 'true'.

        Leave blank if using real azure storage endpoint.

        Properties:

        • Config: use_emulator
        • Env Var: RCLONE_AZUREBLOB_USE_EMULATOR
        • Type: bool
        • Default: false

        --azureblob-endpoint

        Endpoint for the service.

        Leave blank normally.

        Properties:

        • Config: endpoint
        • Env Var: RCLONE_AZUREBLOB_ENDPOINT
        • Type: string
        • Required: false

        --azureblob-upload-cutoff

        Cutoff for switching to chunked upload (<= 256 MiB) (deprecated).

        Properties:

        • Config: upload_cutoff
        • Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF
        • Type: string
        • Required: false

        --azureblob-chunk-size

        Upload chunk size.

        Note that this is stored in memory and there may be up to "--transfers" * "--azureblob-upload-concurrency" chunks stored at once in memory.

        Properties:

        • Config: chunk_size
        • Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE
        • Type: SizeSuffix
        • Default: 4Mi

        --azureblob-upload-concurrency

        Concurrency for multipart uploads.

        This is the number of chunks of the same file that are uploaded concurrently.

        If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.

        In tests, upload speed increases almost linearly with upload concurrency. For example, to fill a gigabit pipe it may be necessary to raise this to 64. Note that this will use more memory.

        Note that chunks are stored in memory and there may be up to "--transfers" * "--azureblob-upload-concurrency" chunks stored at once in memory.

        Properties:

        • Config: upload_concurrency
        • Env Var: RCLONE_AZUREBLOB_UPLOAD_CONCURRENCY
        • Type: int
        • Default: 16

        --azureblob-list-chunk

        Size of blob list.

        This sets the number of blobs requested in each listing chunk. Default is the maximum, 5000. "List blobs" requests are permitted 2 minutes per megabyte to complete. If an operation is taking longer than 2 minutes per megabyte on average, it will time out (source). This can be used to limit the number of blob items to return, to avoid the time out.

        Properties:

        • Config: list_chunk
        • Env Var: RCLONE_AZUREBLOB_LIST_CHUNK
        • Type: int
        • Default: 5000

        --azureblob-access-tier

        Access tier of blob: hot, cool or archive.

        Archived blobs can be restored by setting access tier to hot or cool. Leave blank if you intend to use the default access tier, which is set at account level.

        If there is no "access tier" specified, rclone doesn't apply any tier. rclone performs the "Set Tier" operation on blobs while uploading; if objects are not modified, specifying a new "access tier" will have no effect. If blobs are in "archive tier" at remote, trying to perform data transfer operations from remote will not be allowed. User should first restore by tiering blob to "Hot" or "Cool".

        Properties:

        • Config: access_tier
        • Env Var: RCLONE_AZUREBLOB_ACCESS_TIER
        • Type: string
        • Required: false

        --azureblob-archive-tier-delete

        Delete archive tier blobs before overwriting.

        Archive tier blobs cannot be updated. So without this flag, if you attempt to update an archive tier blob, then rclone will produce the error:

            can't update archive tier blob without --azureblob-archive-tier-delete

        With this flag set, before rclone attempts to overwrite an archive tier blob, it will delete the existing blob before uploading its replacement. This has the potential for data loss if the upload fails (unlike updating a normal blob) and also may cost more since deleting archive tier blobs early may be chargeable.

        Properties:

        • Config: archive_tier_delete
        • Env Var: RCLONE_AZUREBLOB_ARCHIVE_TIER_DELETE
        • Type: bool
        • Default: false

        --azureblob-disable-checksum

        Don't store MD5 checksum with object metadata.

        Normally rclone will calculate the MD5 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading.

        Properties:

        • Config: disable_checksum
        • Env Var: RCLONE_AZUREBLOB_DISABLE_CHECKSUM
        • Type: bool
        • Default: false

        --azureblob-memory-pool-flush-time

        How often internal memory buffer pools will be flushed.

        Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations. This option controls how often unused buffers will be removed from the pool.

        Properties:

        • Config: memory_pool_flush_time
        • Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_FLUSH_TIME
        • Type: Duration
        • Default: 1m0s

        --azureblob-memory-pool-use-mmap

        Whether to use mmap buffers in internal memory pool.

        Properties:

        • Config: memory_pool_use_mmap
        • Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_USE_MMAP
        • Type: bool
        • Default: false

        --azureblob-encoding

        The encoding for the backend.

        See the encoding section in the overview for more info.

        Properties:

        • Config: encoding
        • Env Var: RCLONE_AZUREBLOB_ENCODING
        • Type: MultiEncoder
        • Default: Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8

        --azureblob-public-access

        Public access level of a container: blob or container.

        Properties:

        • Config: public_access
        • Env Var: RCLONE_AZUREBLOB_PUBLIC_ACCESS
        • Type: string
        • Required: false
        • Examples:
          • ""
            • The container and its blobs can be accessed only with an authorized request.
            • It's a default value.
          • "blob"
            • Blob data within this container can be read via anonymous request.
          • "container"
            • Allow full public read access for container and blob data.

        --azureblob-directory-markers

        Upload an empty object with a trailing slash when a new directory is created

        Empty folders are unsupported for bucket based remotes; this option creates an empty object ending with "/", to persist the folder.

        This object also has the metadata "hdi_isfolder = true" to conform to the Microsoft standard.

        Properties:

        • Config: directory_markers
        • Env Var: RCLONE_AZUREBLOB_DIRECTORY_MARKERS
        • Type: bool
        • Default: false

        --azureblob-no-check-container

        If set, don't attempt to check the container exists or create it.

        This can be useful when trying to minimise the number of transactions rclone does if you know the container exists already.

        Properties:

        • Config: no_check_container
        • Env Var: RCLONE_AZUREBLOB_NO_CHECK_CONTAINER
        • Type: bool
        • Default: false

        --azureblob-no-head-object

        If set, do not do HEAD before GET when getting objects.

        Properties:

        • Config: no_head_object
        • Env Var: RCLONE_AZUREBLOB_NO_HEAD_OBJECT
        • Type: bool
        • Default: false

        Custom upload headers

        You can set custom upload headers with the --header-upload flag.

        • Cache-Control
        • Content-Disposition
        • Content-Encoding
        • Content-Language
        • Content-Type

        Eg --header-upload "Content-Type: text/potato"

        Limitations

        MD5 sums are only uploaded with chunked files if the source has an MD5 sum. This will always be the case for a local to azure copy.

        rclone about is not supported by the Microsoft Azure Blob storage backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

        See List of backends that do not support rclone about and rclone about

        Azure Storage Emulator Support

        You can run rclone with the storage emulator (usually azurite).

        To do this, just set up a new remote with rclone config following the instructions in the introduction and set use_emulator in the advanced settings as true. You do not need to provide a default account name or an account key. But you can override them in the account and key options. (Prior to v1.61 they were hard coded to azurite's devstoreaccount1.)

        Also, if you want to access a storage emulator instance running on a different machine, you can override the endpoint parameter in the advanced settings, setting it to http(s)://<host>:<port>/devstoreaccount1 (e.g. http://10.254.2.5:10000/devstoreaccount1).
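
        For example, a sketch of an emulator remote pointing at an azurite instance on another host (the remote name is illustrative; the endpoint reuses the example address above):

            [azurite]
            type = azureblob
            use_emulator = true
            endpoint = http://10.254.2.5:10000/devstoreaccount1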

        Microsoft OneDrive

        Paths are specified as remote:path

        Paths may be as deep as required, e.g. remote:directory/subdirectory.

        Configuration

        The initial setup for OneDrive involves getting a token from Microsoft which you need to do in your browser. rclone config walks you through it.

        Here is an example of how to make a remote called remote. First run:

            rclone config

        This will guide you through an interactive setup process:

        e) Edit existing remote
        -n) New remote
        -d) Delete remote
        -r) Rename remote
        -c) Copy remote
        -s) Set configuration password
        -q) Quit config
        -e/n/d/r/c/s/q> n
        -name> remote
        -Type of storage to configure.
        -Enter a string value. Press Enter for the default ("").
        -Choose a number from below, or type in your own value
        -[snip]
        -XX / Microsoft OneDrive
        -   \ "onedrive"
        -[snip]
        -Storage> onedrive
        -Microsoft App Client Id
        +
        
        +9. View the summary and confirm your remote configuration.
        +
        +

        [ns1]
        type = netstorage
        protocol = http
        host = baseball-nsu.akamaihd.net/123456/content/
        account = username
        secret = *** ENCRYPTED ***
        --------------------
        y) Yes this is OK (default)
        e) Edit this remote
        d) Delete this remote
        y/e/d> y

        +
        
        +This remote is called `ns1` and can now be used.
        +
        +## Example operations
        +
        +Get started with rclone and NetStorage with these examples. For additional rclone commands, visit https://rclone.org/commands/.
        +
        +### See contents of a directory in your project
        +
        +    rclone lsd ns1:/974012/testing/
        +
        +### Sync the contents local with remote
        +
        +    rclone sync . ns1:/974012/testing/
        +
        +### Upload local content to remote
        +
        +    rclone copy notes.txt ns1:/974012/testing/
        +
        +### Delete content on remote
        +
        +    rclone delete ns1:/974012/testing/notes.txt
        +
        +### Move or copy content between CP codes
        +
        +Your credentials must have access to two CP codes on the same remote. You can't perform operations between different remotes.
        +
        +    rclone move ns1:/974012/testing/notes.txt ns1:/974450/testing2/
        +
        +## Features
        +
        +### Symlink Support
        +
        +The NetStorage backend changes the rclone `--links, -l` behavior. When uploading, instead of creating the .rclonelink file, rclone uses the "symlink" API to create the corresponding symlink on the remote. The .rclonelink file will not be created; the upload will be intercepted and only the symlink file that matches the source file name with no suffix will be created on the remote.
        +
        +This will effectively allow commands like copy/copyto, move/moveto and sync to upload from local to remote and download from remote to local directories with symlinks. Due to internal rclone limitations, it is not possible to upload an individual symlink file to any remote backend. You can always use the "backend symlink" command to create a symlink on the NetStorage server; refer to the "symlink" section below.
        +
        +Individual symlink files on the remote can be used with commands like "cat" to print the destination name, "delete" to delete the symlink, or copy/copyto and move/moveto to download from the remote to local. Note: individual symlink files on the remote should be specified including the suffix .rclonelink.
        +
        +**Note**: No file with the suffix .rclonelink should ever exist on the server since it is not possible to actually upload/create a file with .rclonelink suffix with rclone, it can only exist if it is manually created through a non-rclone method on the remote.
        +
        +### Implicit vs. Explicit Directories
        +
        +With NetStorage, directories can exist in one of two forms:
        +
        +1. **Explicit Directory**. This is an actual, physical directory that you have created in a storage group.
        +2. **Implicit Directory**. This refers to a directory within a path that has not been physically created. For example, during upload of a file, nonexistent subdirectories can be specified in the target path. NetStorage creates these as "implicit." While the directories aren't physically created, they exist implicitly and the noted path is connected with the uploaded file.
        +
        +Rclone will intercept all file uploads and mkdir commands for the NetStorage remote and will explicitly issue the mkdir command for each directory in the uploading path. This will help with the interoperability with the other Akamai services such as SFTP and the Content Management Shell (CMShell). Rclone will not guarantee correctness of operations with implicit directories which might have been created as a result of using an upload API directly.
        +
        +### `--fast-list` / ListR support
        +
        +NetStorage remote supports the ListR feature by using the "list" NetStorage API action to return a lexicographical list of all objects within the specified CP code, recursing into subdirectories as they're encountered.
        +
        +* **Rclone will use the ListR method for some commands by default**. Commands such as `lsf -R` will use ListR by default. To disable this, include the `--disable listR` option to use the non-recursive method of listing objects.
        +
        +* **Rclone will not use the ListR method for some commands**. Commands such as `sync` don't use ListR by default. To force using the ListR method, include the  `--fast-list` option.
        +
        +There are pros and cons of using the ListR method; refer to the [rclone documentation](https://rclone.org/docs/#fast-list). In general, the sync command over an existing deep tree on the remote will run faster with the "--fast-list" flag but with extra memory usage as a side effect. It might also result in higher CPU utilization but the whole task can be completed faster.
        +
        +**Note**: There is a known limitation that "lsf -R" will display number of files in the directory and directory size as -1 when ListR method is used. The workaround is to pass "--disable listR" flag if these numbers are important in the output.
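        +
        +For example, a couple of illustrative commands following the notes above (the CP code path is reused from the examples earlier):
        +
        +    # force the recursive ListR listing method for a sync
        +    rclone sync --fast-list /local/dir ns1:/974012/testing/
        +
        +    # use the non-recursive listing method so lsf -R reports accurate counts and sizes
        +    rclone lsf -R --disable listR ns1:/974012/testing/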
        +
        +### Purge
        +
        +NetStorage remote supports the purge feature by using the "quick-delete" NetStorage API action. The quick-delete action is disabled by default for security reasons and can be enabled for the account through the Akamai portal. Rclone will first try to use quick-delete action for the purge command and if this functionality is disabled then will fall back to a standard delete method.
        +
        +**Note**: Read the [NetStorage Usage API](https://learn.akamai.com/en-us/webhelp/netstorage/netstorage-http-api-developer-guide/GUID-15836617-9F50-405A-833C-EA2556756A30.html) for considerations when using "quick-delete". In general, using quick-delete method will not delete the tree immediately and objects targeted for quick-delete may still be accessible.
        +
        +
        +### Standard options
        +
        +Here are the Standard options specific to netstorage (Akamai NetStorage).
        +
        +#### --netstorage-host
        +
        +Domain+path of NetStorage host to connect to.
        +
        +Format should be `<domain>/<internal folders>`
        +
        +Properties:
        +
        +- Config:      host
        +- Env Var:     RCLONE_NETSTORAGE_HOST
        +- Type:        string
        +- Required:    true
        +
        +#### --netstorage-account
        +
        +Set the NetStorage account name
        +
        +Properties:
        +
        +- Config:      account
        +- Env Var:     RCLONE_NETSTORAGE_ACCOUNT
        +- Type:        string
        +- Required:    true
        +
        +#### --netstorage-secret
        +
        +Set the NetStorage account secret/G2O key for authentication.
        +
        +Please choose the 'y' option to set your own password then enter your secret.
        +
        +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
        +
        +Properties:
        +
        +- Config:      secret
        +- Env Var:     RCLONE_NETSTORAGE_SECRET
        +- Type:        string
        +- Required:    true
        +
        +### Advanced options
        +
        +Here are the Advanced options specific to netstorage (Akamai NetStorage).
        +
        +#### --netstorage-protocol
        +
        +Select between HTTP or HTTPS protocol.
        +
        +Most users should choose HTTPS, which is the default.
        +HTTP is provided primarily for debugging purposes.
        +
        +Properties:
        +
        +- Config:      protocol
        +- Env Var:     RCLONE_NETSTORAGE_PROTOCOL
        +- Type:        string
        +- Default:     "https"
        +- Examples:
        +    - "http"
        +        - HTTP protocol
        +    - "https"
        +        - HTTPS protocol
        +
        +## Backend commands
        +
        +Here are the commands specific to the netstorage backend.
        +
        +Run them with
        +
        +    rclone backend COMMAND remote:
        +
        +The help below will explain what arguments each command takes.
        +
        +See the [backend](https://rclone.org/commands/rclone_backend/) command for more
        +info on how to pass options and arguments.
        +
        +These can be run on a running backend using the rc command
        +[backend/command](https://rclone.org/rc/#backend-command).
        +
        +### du
        +
        +Return disk usage information for a specified directory
        +
        +    rclone backend du remote: [options] [<arguments>+]
        +
        +The usage information returned includes the targeted directory as well as all
        +files stored in any sub-directories that may exist.
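        +
        +For example (directory path illustrative):
        +
        +    rclone backend du ns1:/974012/testing/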
        +
        +### symlink
        +
        +You can create a symbolic link in ObjectStore with the symlink action.
        +
        +    rclone backend symlink remote: [options] [<arguments>+]
        +
        +Pass the desired path location (including applicable sub-directories)
        +ending in the object that will be the target of the symlink (for
        +example, /links/mylink). Include the file extension for the object,
        +if applicable.
        +`rclone backend symlink <src> <path>`
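        +
        +For example, a sketch assuming the `<src> <path>` form above (the CP code and file names are illustrative):
        +
        +    rclone backend symlink ns1:/974012/testing/notes.txt /974012/links/notes-link.txt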
        +
        +
        +
        +#  Microsoft Azure Blob Storage
        +
        +Paths are specified as `remote:container` (or `remote:` for the `lsd`
        +command.)  You may put subdirectories in too, e.g.
        +`remote:container/path/to/dir`.
        +
        +## Configuration
        +
        +Here is an example of making a Microsoft Azure Blob Storage
        +configuration.  For a remote called `remote`.  First run:
        +
        +     rclone config
        +
        +This will guide you through an interactive setup process:
        +
        +

        No remotes found, make a new one?
        n) New remote
        s) Set configuration password
        q) Quit config
        n/s/q> n
        name> remote
        Type of storage to configure.
        Choose a number from below, or type in your own value
        [snip]
        XX / Microsoft Azure Blob Storage
           \ "azureblob"
        [snip]
        Storage> azureblob
        Storage Account Name
        account> account_name
        Storage Account Key
        key> base64encodedkey==
        Endpoint for the service - leave blank normally.
        endpoint>
        Remote config
        --------------------
        [remote]
        account = account_name
        key = base64encodedkey==
        endpoint =
        --------------------
        y) Yes this is OK
        e) Edit this remote
        d) Delete this remote
        y/e/d> y

        +
        
        +See all containers
        +
        +    rclone lsd remote:
        +
        +Make a new container
        +
        +    rclone mkdir remote:container
        +
        +List the contents of a container
        +
        +    rclone ls remote:container
        +
        +Sync `/home/local/directory` to the remote container, deleting any excess
        +files in the container.
        +
        +    rclone sync --interactive /home/local/directory remote:container
        +
        +### --fast-list
        +
        +This remote supports `--fast-list` which allows you to use fewer
        +transactions in exchange for more memory. See the [rclone
        +docs](https://rclone.org/docs/#fast-list) for more details.
        +
        +### Modified time
        +
        +The modified time is stored as metadata on the object with the `mtime`
        +key.  It is stored using RFC3339 Format time with nanosecond
        +precision.  The metadata is supplied during directory listings so
        +there is no performance overhead to using it.
        +
        +If you wish to use the Azure standard `LastModified` time stored on
        +the object as the modified time, then use the `--use-server-modtime`
        +flag. Note that rclone can't set `LastModified`, so using the
        +`--update` flag when syncing is recommended if using
        +`--use-server-modtime`.
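        +
        +For example, a sketch of the recommended combination from the note above:
        +
        +    rclone sync --update --use-server-modtime /home/local/directory remote:container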
        +
        +### Performance
        +
        +When uploading large files, increasing the value of
        +`--azureblob-upload-concurrency` will increase performance at the cost
        +of using more memory. The default of 16 is set quite conservatively to
        +use less memory. It may be necessary to raise it to 64 or higher to
        +fully utilize a 1 GBit/s link with a single file transfer.
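        +
        +For example, a sketch raising the concurrency for a single large transfer (local path illustrative):
        +
        +    rclone copy --azureblob-upload-concurrency 64 /path/to/bigfile remote:container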
        +
        +### Restricted filename characters
        +
        +In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
        +the following characters are also replaced:
        +
        +| Character | Value | Replacement |
        +| --------- |:-----:|:-----------:|
        +| /         | 0x2F  | ／          |
        +| \         | 0x5C  | ＼          |
        +
        +File names can also not end with the following characters.
        +These only get replaced if they are the last character in the name:
        +
        +| Character | Value | Replacement |
        +| --------- |:-----:|:-----------:|
        +| .         | 0x2E  | ．          |
        +
        +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
        +as they can't be used in JSON strings.
        +
        +### Hashes
        +
        +MD5 hashes are stored with blobs.  However blobs that were uploaded in
        +chunks only have an MD5 if the source remote was capable of MD5
        +hashes, e.g. the local disk.
        +
        +### Authentication {#authentication}
        +
        +There are a number of ways of supplying credentials for Azure Blob
        +Storage. Rclone tries them in the order of the sections below.
        +
        +#### Env Auth
        +
        +If the `env_auth` config parameter is `true` then rclone will pull
        +credentials from the environment or runtime.
        +
        +It tries these authentication methods in this order:
        +
        +1. Environment Variables
        +2. Managed Service Identity Credentials
        +3. Azure CLI credentials (as used by the az tool)
        +
        +These are described in the following sections
        +
        +##### Env Auth: 1. Environment Variables
        +
        +If `env_auth` is set and environment variables are present rclone
        +authenticates a service principal with a secret or certificate, or a
        +user with a password, depending on which environment variables are set.
        +It reads configuration from these variables, in the following order
        +(see the worked example after the list):
        +
        +1. Service principal with client secret
        +    - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its "directory" ID.
        +    - `AZURE_CLIENT_ID`: the service principal's client ID
        +    - `AZURE_CLIENT_SECRET`: one of the service principal's client secrets
        +2. Service principal with certificate
        +    - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its "directory" ID.
        +    - `AZURE_CLIENT_ID`: the service principal's client ID
        +    - `AZURE_CLIENT_CERTIFICATE_PATH`: path to a PEM or PKCS12 certificate file including the private key.
        +    - `AZURE_CLIENT_CERTIFICATE_PASSWORD`: (optional) password for the certificate file.
        +    - `AZURE_CLIENT_SEND_CERTIFICATE_CHAIN`: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to "true" or "1", authentication requests include the x5c header.
        +3. User with username and password
        +    - `AZURE_TENANT_ID`: (optional) tenant to authenticate in. Defaults to "organizations".
        +    - `AZURE_CLIENT_ID`: client ID of the application the user will authenticate to
        +    - `AZURE_USERNAME`: a username (usually an email address)
        +    - `AZURE_PASSWORD`: the user's password
        +4. Workload Identity
        +    - `AZURE_TENANT_ID`: Tenant to authenticate in.
        +    - `AZURE_CLIENT_ID`: Client ID of the application the user will authenticate to.
        +    - `AZURE_FEDERATED_TOKEN_FILE`: Path to projected service account token file.
        +    - `AZURE_AUTHORITY_HOST`: Authority of an Azure Active Directory endpoint (default: login.microsoftonline.com).
        +
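        +For example, a minimal sketch of authenticating as a service principal with a client secret purely from the environment (the IDs, secret and account name are placeholders):
        +
        +    export AZURE_TENANT_ID=00000000-0000-0000-0000-000000000000
        +    export AZURE_CLIENT_ID=11111111-1111-1111-1111-111111111111
        +    export AZURE_CLIENT_SECRET=SECRET
        +    rclone lsd --azureblob-env-auth --azureblob-account=ACCOUNT :azureblob: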
        +
        +##### Env Auth: 2. Managed Service Identity Credentials
        +
        +When using Managed Service Identity if the VM(SS) on which this
        +program is running has a system-assigned identity, it will be used by
        +default. If the resource has no system-assigned but exactly one
        +user-assigned identity, the user-assigned identity will be used by
        +default.
        +
        +If the resource has multiple user-assigned identities you will need to
        +unset `env_auth` and set `use_msi` instead. See the [`use_msi`
        +section](#use_msi).
        +
        +##### Env Auth: 3. Azure CLI credentials (as used by the az tool)
        +
        +Credentials created with the `az` tool can be picked up using `env_auth`.
        +
        +For example, if you were to log in with a service principal like this:
        +
        +    az login --service-principal -u XXX -p XXX --tenant XXX
        +
        +Then you could access rclone resources like this:
        +
        +    rclone lsf :azureblob,env_auth,account=ACCOUNT:CONTAINER
        +
        +Or
        +
        +    rclone lsf --azureblob-env-auth --azureblob-account=ACCOUNT :azureblob:CONTAINER
        +
        +Which is analogous to using the `az` tool:
        +
        +    az storage blob list --container-name CONTAINER --account-name ACCOUNT --auth-mode login
        +
        +#### Account and Shared Key
        +
        +This is the most straightforward and least flexible way.  Just fill
        +in the `account` and `key` lines and leave the rest blank.
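        +
        +A minimal illustrative config section for this method (values reused from the configuration example above):
        +
        +    [remote]
        +    type = azureblob
        +    account = account_name
        +    key = base64encodedkey==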
        +
        +#### SAS URL
        +
        +This can be an account level SAS URL or container level SAS URL.
        +
        +To use it leave `account` and `key` blank and fill in `sas_url`.
        +
        +An account level SAS URL or container level SAS URL can be obtained
        +from the Azure portal or the Azure Storage Explorer.  To get a
        +container level SAS URL right click on a container in the Azure Blob
        +explorer in the Azure portal.
        +
        +If you use a container level SAS URL, rclone operations are permitted
        +only on a particular container, e.g.
        +
        +    rclone ls azureblob:container
        +
        +You can also list the single container from the root. This will only
        +show the container specified by the SAS URL.
        +
        +    $ rclone lsd azureblob:
        +    container/
        +
        +Note that you can't see or access any other containers - this will
        +fail
        +
        +    rclone ls azureblob:othercontainer
        +
        +Container level SAS URLs are useful for temporarily allowing third
        +parties access to a single container or putting credentials into an
        +untrusted environment such as a CI build server.
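        +
        +A minimal illustrative config section using a SAS URL (the remote name and URL shape are placeholders):
        +
        +    [azsas]
        +    type = azureblob
        +    sas_url = https://ACCOUNT.blob.core.windows.net/CONTAINER?<sas-token>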
        +
        +#### Service principal with client secret
        +
        +If these variables are set, rclone will authenticate with a service principal with a client secret.
        +
        +- `tenant`: ID of the service principal's tenant. Also called its "directory" ID.
        +- `client_id`: the service principal's client ID
        +- `client_secret`: one of the service principal's client secrets
        +
        +The credentials can also be placed in a file using the
        +`service_principal_file` configuration option.
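        +
        +A minimal illustrative config section for this method (the remote name, IDs and secret are placeholders):
        +
        +    [azsp]
        +    type = azureblob
        +    account = mystorageacct
        +    tenant = 00000000-0000-0000-0000-000000000000
        +    client_id = 11111111-1111-1111-1111-111111111111
        +    client_secret = CLIENT_SECRET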
        +
        +#### Service principal with certificate
        +
        +If these variables are set, rclone will authenticate with a service principal with certificate.
        +
        +- `tenant`: ID of the service principal's tenant. Also called its "directory" ID.
        +- `client_id`: the service principal's client ID
        +- `client_certificate_path`: path to a PEM or PKCS12 certificate file including the private key.
        +- `client_certificate_password`: (optional) password for the certificate file.
        +- `client_send_certificate_chain`: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to "true" or "1", authentication requests include the x5c header.
        +
        +**NB** `client_certificate_password` must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
        +
        +#### User with username and password
        +
        +If these variables are set, rclone will authenticate with username and password.
        +
        +- `tenant`: (optional) tenant to authenticate in. Defaults to "organizations".
        +- `client_id`: client ID of the application the user will authenticate to
        +- `username`: a username (usually an email address)
        +- `password`: the user's password
        +
        +Microsoft doesn't recommend this kind of authentication, because it's
        +less secure than other authentication flows. This method is not
        +interactive, so it isn't compatible with any form of multi-factor
        +authentication, and the application must already have user or admin
        +consent. This credential can only authenticate work and school
        +accounts; it can't authenticate Microsoft accounts.
        +
        +**NB** `password` must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
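        +
        +A minimal illustrative config section for this method (all values are placeholders; the password is the obscured form):
        +
        +    [azuser]
        +    type = azureblob
        +    account = mystorageacct
        +    client_id = 11111111-1111-1111-1111-111111111111
        +    username = user@example.org
        +    password = <output of rclone obscure>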
        +
        +#### Managed Service Identity Credentials {#use_msi}
        +
        +If `use_msi` is set then managed service identity credentials are
        +used. This authentication only works when running in an Azure service.
        +`env_auth` needs to be unset to use this.
        +
        +However if you have multiple user identities to choose from these must
        +be explicitly specified using exactly one of the `msi_object_id`,
        +`msi_client_id`, or `msi_mi_res_id` parameters.
        +
        +If none of `msi_object_id`, `msi_client_id`, or `msi_mi_res_id` is
        +set, this is equivalent to using `env_auth`.
        +
        +
        +### Standard options
        +
        +Here are the Standard options specific to azureblob (Microsoft Azure Blob Storage).
        +
        +#### --azureblob-account
        +
        +Azure Storage Account Name.
        +
        +Set this to the Azure Storage Account Name in use.
        +
        +Leave blank to use SAS URL or Emulator, otherwise it needs to be set.
        +
        +If this is blank and if env_auth is set it will be read from the
        +environment variable `AZURE_STORAGE_ACCOUNT_NAME` if possible.
        +
        +
        +Properties:
        +
        +- Config:      account
        +- Env Var:     RCLONE_AZUREBLOB_ACCOUNT
        +- Type:        string
        +- Required:    false
        +
        +#### --azureblob-env-auth
        +
        +Read credentials from runtime (environment variables, CLI or MSI).
        +
        +See the [authentication docs](/azureblob#authentication) for full info.
        +
        +Properties:
        +
        +- Config:      env_auth
        +- Env Var:     RCLONE_AZUREBLOB_ENV_AUTH
        +- Type:        bool
        +- Default:     false
        +
        +#### --azureblob-key
        +
        +Storage Account Shared Key.
        +
        +Leave blank to use SAS URL or Emulator.
        +
        +Properties:
        +
        +- Config:      key
        +- Env Var:     RCLONE_AZUREBLOB_KEY
        +- Type:        string
        +- Required:    false
        +
        +#### --azureblob-sas-url
        +
        +SAS URL for container level access only.
        +
        +Leave blank if using account/key or Emulator.
        +
        +Properties:
        +
        +- Config:      sas_url
        +- Env Var:     RCLONE_AZUREBLOB_SAS_URL
        +- Type:        string
        +- Required:    false
        +
        +#### --azureblob-tenant
        +
        +ID of the service principal's tenant. Also called its directory ID.
        +
        +Set this if using
        +- Service principal with client secret
        +- Service principal with certificate
        +- User with username and password
        +
        +
        +Properties:
        +
        +- Config:      tenant
        +- Env Var:     RCLONE_AZUREBLOB_TENANT
        +- Type:        string
        +- Required:    false
        +
        +#### --azureblob-client-id
        +
        +The ID of the client in use.
        +
        +Set this if using
        +- Service principal with client secret
        +- Service principal with certificate
        +- User with username and password
        +
        +
        +Properties:
        +
        +- Config:      client_id
        +- Env Var:     RCLONE_AZUREBLOB_CLIENT_ID
        +- Type:        string
        +- Required:    false
        +
        +#### --azureblob-client-secret
        +
        +One of the service principal's client secrets
        +
        +Set this if using
        +- Service principal with client secret
        +
        +
        +Properties:
        +
        +- Config:      client_secret
        +- Env Var:     RCLONE_AZUREBLOB_CLIENT_SECRET
        +- Type:        string
        +- Required:    false
        +
        +#### --azureblob-client-certificate-path
        +
        +Path to a PEM or PKCS12 certificate file including the private key.
        +
        +Set this if using
        +- Service principal with certificate
        +
        +
        +Properties:
        +
        +- Config:      client_certificate_path
        +- Env Var:     RCLONE_AZUREBLOB_CLIENT_CERTIFICATE_PATH
        +- Type:        string
        +- Required:    false
        +
        +#### --azureblob-client-certificate-password
        +
        +Password for the certificate file (optional).
        +
        +Optionally set this if using
        +- Service principal with certificate
        +
        +And the certificate has a password.
        +
        +
        +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
        +
        +Properties:
        +
        +- Config:      client_certificate_password
        +- Env Var:     RCLONE_AZUREBLOB_CLIENT_CERTIFICATE_PASSWORD
        +- Type:        string
        +- Required:    false
        +
        +### Advanced options
        +
        +Here are the Advanced options specific to azureblob (Microsoft Azure Blob Storage).
        +
        +#### --azureblob-client-send-certificate-chain
        +
        +Send the certificate chain when using certificate auth.
        +
        +Specifies whether an authentication request will include an x5c header
        +to support subject name / issuer based authentication. When set to
        +true, authentication requests include the x5c header.
        +
        +Optionally set this if using
        +- Service principal with certificate
        +
        +
        +Properties:
        +
        +- Config:      client_send_certificate_chain
        +- Env Var:     RCLONE_AZUREBLOB_CLIENT_SEND_CERTIFICATE_CHAIN
        +- Type:        bool
        +- Default:     false
        +
        +#### --azureblob-username
        +
        +User name (usually an email address)
        +
        +Set this if using
        +- User with username and password
        +
        +
        +Properties:
        +
        +- Config:      username
        +- Env Var:     RCLONE_AZUREBLOB_USERNAME
        +- Type:        string
        +- Required:    false
        +
        +#### --azureblob-password
        +
        +The user's password
        +
        +Set this if using
        +- User with username and password
        +
        +
        +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
        +
        +Properties:
        +
        +- Config:      password
        +- Env Var:     RCLONE_AZUREBLOB_PASSWORD
        +- Type:        string
        +- Required:    false
        +
        +#### --azureblob-service-principal-file
        +
        +Path to file containing credentials for use with a service principal.
        +
        +Leave blank normally. Needed only if you want to use a service principal instead of interactive login.
        +
        +    $ az ad sp create-for-rbac --name "<name>" \
        +      --role "Storage Blob Data Owner" \
        +      --scopes "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default/containers/<container>" \
        +      > azure-principal.json
        +
        +See ["Create an Azure service principal"](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli) and ["Assign an Azure role for access to blob data"](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli) pages for more details.
        +
        +It may be more convenient to put the credentials directly into the
        +rclone config file under the `client_id`, `tenant` and `client_secret`
        +keys instead of setting `service_principal_file`.
        +
        +
        +Properties:
        +
        +- Config:      service_principal_file
        +- Env Var:     RCLONE_AZUREBLOB_SERVICE_PRINCIPAL_FILE
        +- Type:        string
        +- Required:    false
        +
        +#### --azureblob-use-msi
        +
        +Use a managed service identity to authenticate (only works in Azure).
        +
        +When true, use a [managed service identity](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/)
        +to authenticate to Azure Storage instead of a SAS token or account key.
        +
        +If the VM(SS) on which this program is running has a system-assigned identity, it will
        +be used by default. If the resource has no system-assigned but exactly one user-assigned identity,
        +the user-assigned identity will be used by default. If the resource has multiple user-assigned
        +identities, the identity to use must be explicitly specified using exactly one of the msi_object_id,
        +msi_client_id, or msi_mi_res_id parameters.
        +
        +Properties:
        +
        +- Config:      use_msi
        +- Env Var:     RCLONE_AZUREBLOB_USE_MSI
        +- Type:        bool
        +- Default:     false
        +
        +#### --azureblob-msi-object-id
        +
        +Object ID of the user-assigned MSI to use, if any.
        +
        +Leave blank if msi_client_id or msi_mi_res_id specified.
        +
        +Properties:
        +
        +- Config:      msi_object_id
        +- Env Var:     RCLONE_AZUREBLOB_MSI_OBJECT_ID
        +- Type:        string
        +- Required:    false
        +
        +#### --azureblob-msi-client-id
        +
        +Object ID of the user-assigned MSI to use, if any.
        +
        +Leave blank if msi_object_id or msi_mi_res_id specified.
        +
        +Properties:
        +
        +- Config:      msi_client_id
        +- Env Var:     RCLONE_AZUREBLOB_MSI_CLIENT_ID
        +- Type:        string
        +- Required:    false
        +
        +#### --azureblob-msi-mi-res-id
        +
        +Azure resource ID of the user-assigned MSI to use, if any.
        +
        +Leave blank if msi_client_id or msi_object_id specified.
        +
        +Properties:
        +
        +- Config:      msi_mi_res_id
        +- Env Var:     RCLONE_AZUREBLOB_MSI_MI_RES_ID
        +- Type:        string
        +- Required:    false
        +
        +#### --azureblob-use-emulator
        +
        +Uses local storage emulator if provided as 'true'.
        +
        +Leave blank if using real azure storage endpoint.
        +
        +Properties:
        +
        +- Config:      use_emulator
        +- Env Var:     RCLONE_AZUREBLOB_USE_EMULATOR
        +- Type:        bool
        +- Default:     false
        +
        +#### --azureblob-endpoint
        +
        +Endpoint for the service.
        +
         Leave blank normally.
        -Enter a string value. Press Enter for the default ("").
        -client_id>
        -Microsoft App Client Secret
        +
        +Properties:
        +
        +- Config:      endpoint
        +- Env Var:     RCLONE_AZUREBLOB_ENDPOINT
        +- Type:        string
        +- Required:    false
        +
        +#### --azureblob-upload-cutoff
        +
        +Cutoff for switching to chunked upload (<= 256 MiB) (deprecated).
        +
        +Properties:
        +
        +- Config:      upload_cutoff
        +- Env Var:     RCLONE_AZUREBLOB_UPLOAD_CUTOFF
        +- Type:        string
        +- Required:    false
        +
        +#### --azureblob-chunk-size
        +
        +Upload chunk size.
        +
        +Note that this is stored in memory and there may be up to
        +"--transfers" * "--azureblob-upload-concurrency" chunks stored at once
        +in memory.
        +
        +Properties:
        +
        +- Config:      chunk_size
        +- Env Var:     RCLONE_AZUREBLOB_CHUNK_SIZE
        +- Type:        SizeSuffix
        +- Default:     4Mi
        +
        +#### --azureblob-upload-concurrency
        +
        +Concurrency for multipart uploads.
        +
        +This is the number of chunks of the same file that are uploaded
        +concurrently.
        +
        +If you are uploading small numbers of large files over high-speed
        +links and these uploads do not fully utilize your bandwidth, then
        +increasing this may help to speed up the transfers.
        +
        +In tests, upload speed increases almost linearly with upload
        +concurrency. For example to fill a gigabit pipe it may be necessary to
        +raise this to 64. Note that this will use more memory.
        +
        +Note that chunks are stored in memory and there may be up to
        +"--transfers" * "--azureblob-upload-concurrency" chunks stored at once
        +in memory.
        +
        +Properties:
        +
        +- Config:      upload_concurrency
        +- Env Var:     RCLONE_AZUREBLOB_UPLOAD_CONCURRENCY
        +- Type:        int
        +- Default:     16
        +
        +#### --azureblob-list-chunk
        +
        +Size of blob list.
        +
        +This sets the number of blobs requested in each listing chunk. Default
        +is the maximum, 5000. "List blobs" requests are permitted 2 minutes
        +per megabyte to complete. If an operation is taking longer than 2
        +minutes per megabyte on average, it will time out (
        +[source](https://docs.microsoft.com/en-us/rest/api/storageservices/setting-timeouts-for-blob-service-operations#exceptions-to-default-timeout-interval)
        +). This can be used to limit the number of blob items to return, to
        +avoid the time out.
        +
        +Properties:
        +
        +- Config:      list_chunk
        +- Env Var:     RCLONE_AZUREBLOB_LIST_CHUNK
        +- Type:        int
        +- Default:     5000
        +
        +#### --azureblob-access-tier
        +
        +Access tier of blob: hot, cool or archive.
        +
        +Archived blobs can be restored by setting access tier to hot or
        +cool. Leave blank if you intend to use default access tier, which is
        +set at account level
        +
        +If there is no "access tier" specified, rclone doesn't apply any tier.
        +rclone performs "Set Tier" operation on blobs while uploading, if objects
        +are not modified, specifying "access tier" to new one will have no effect.
        +If blobs are in "archive tier" at remote, trying to perform data transfer
        +operations from remote will not be allowed. User should first restore by
        +tiering blob to "Hot" or "Cool".
        +
        +Properties:
        +
        +- Config:      access_tier
        +- Env Var:     RCLONE_AZUREBLOB_ACCESS_TIER
        +- Type:        string
        +- Required:    false
        +
        +#### --azureblob-archive-tier-delete
        +
        +Delete archive tier blobs before overwriting.
        +
        +Archive tier blobs cannot be updated. So without this flag, if you
        +attempt to update an archive tier blob, then rclone will produce the
        +error:
        +
        +    can't update archive tier blob without --azureblob-archive-tier-delete
        +
        +With this flag set then before rclone attempts to overwrite an archive
        +tier blob, it will delete the existing blob before uploading its
        +replacement.  This has the potential for data loss if the upload fails
        +(unlike updating a normal blob) and also may cost more since deleting
+archive tier blobs early may be chargeable.
        +
        +
        +Properties:
        +
        +- Config:      archive_tier_delete
        +- Env Var:     RCLONE_AZUREBLOB_ARCHIVE_TIER_DELETE
        +- Type:        bool
        +- Default:     false
        +
        +#### --azureblob-disable-checksum
        +
        +Don't store MD5 checksum with object metadata.
        +
        +Normally rclone will calculate the MD5 checksum of the input before
        +uploading it so it can add it to metadata on the object. This is great
        +for data integrity checking but can cause long delays for large files
        +to start uploading.
        +
        +Properties:
        +
        +- Config:      disable_checksum
        +- Env Var:     RCLONE_AZUREBLOB_DISABLE_CHECKSUM
        +- Type:        bool
        +- Default:     false
        +
        +#### --azureblob-memory-pool-flush-time
        +
        +How often internal memory buffer pools will be flushed. (no longer used)
        +
        +Properties:
        +
        +- Config:      memory_pool_flush_time
        +- Env Var:     RCLONE_AZUREBLOB_MEMORY_POOL_FLUSH_TIME
        +- Type:        Duration
        +- Default:     1m0s
        +
        +#### --azureblob-memory-pool-use-mmap
        +
        +Whether to use mmap buffers in internal memory pool. (no longer used)
        +
        +Properties:
        +
        +- Config:      memory_pool_use_mmap
        +- Env Var:     RCLONE_AZUREBLOB_MEMORY_POOL_USE_MMAP
        +- Type:        bool
        +- Default:     false
        +
        +#### --azureblob-encoding
        +
        +The encoding for the backend.
        +
        +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
        +
        +Properties:
        +
        +- Config:      encoding
        +- Env Var:     RCLONE_AZUREBLOB_ENCODING
        +- Type:        MultiEncoder
        +- Default:     Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8
        +
        +#### --azureblob-public-access
        +
        +Public access level of a container: blob or container.
        +
        +Properties:
        +
        +- Config:      public_access
        +- Env Var:     RCLONE_AZUREBLOB_PUBLIC_ACCESS
        +- Type:        string
        +- Required:    false
        +- Examples:
        +    - ""
        +        - The container and its blobs can be accessed only with an authorized request.
        +        - It's a default value.
        +    - "blob"
        +        - Blob data within this container can be read via anonymous request.
        +    - "container"
        +        - Allow full public read access for container and blob data.
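+
+For example, to create a container whose blobs can be read anonymously
+(a sketch - the container name is illustrative):
+
+    rclone mkdir --azureblob-public-access blob remote:publiccontainer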
        +
        +#### --azureblob-directory-markers
        +
        +Upload an empty object with a trailing slash when a new directory is created
        +
+Empty folders are unsupported for bucket based remotes, so this option
+creates an empty object ending with "/", to persist the folder.
        +
        +This object also has the metadata "hdi_isfolder = true" to conform to
        +the Microsoft standard.
        + 
        +
        +Properties:
        +
        +- Config:      directory_markers
        +- Env Var:     RCLONE_AZUREBLOB_DIRECTORY_MARKERS
        +- Type:        bool
        +- Default:     false
        +
        +#### --azureblob-no-check-container
        +
        +If set, don't attempt to check the container exists or create it.
        +
        +This can be useful when trying to minimise the number of transactions
        +rclone does if you know the container exists already.
        +
        +
        +Properties:
        +
        +- Config:      no_check_container
        +- Env Var:     RCLONE_AZUREBLOB_NO_CHECK_CONTAINER
        +- Type:        bool
        +- Default:     false
        +
        +#### --azureblob-no-head-object
        +
        +If set, do not do HEAD before GET when getting objects.
        +
        +Properties:
        +
        +- Config:      no_head_object
        +- Env Var:     RCLONE_AZUREBLOB_NO_HEAD_OBJECT
        +- Type:        bool
        +- Default:     false
        +
        +
        +
        +### Custom upload headers
        +
        +You can set custom upload headers with the `--header-upload` flag. 
        +
        +- Cache-Control
        +- Content-Disposition
        +- Content-Encoding
        +- Content-Language
        +- Content-Type
        +
        +Eg `--header-upload "Content-Type: text/potato"`
        +
        +## Limitations
        +
        +MD5 sums are only uploaded with chunked files if the source has an MD5
+sum.  This will always be the case for a local to Azure copy.
        +
        +`rclone about` is not supported by the Microsoft Azure Blob storage backend. Backends without
        +this capability cannot determine free space for an rclone mount or
        +use policy `mfs` (most free space) as a member of an rclone union
        +remote.
        +
        +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
        +
        +## Azure Storage Emulator Support
        +
        +You can run rclone with the storage emulator (usually _azurite_).
        +
        +To do this, just set up a new remote with `rclone config` following
        +the instructions in the introduction and set `use_emulator` in the
        +advanced settings as `true`. You do not need to provide a default
        +account name nor an account key. But you can override them in the
        +`account` and `key` options. (Prior to v1.61 they were hard coded to
        +_azurite_'s `devstoreaccount1`.)
        +
        +Also, if you want to access a storage emulator instance running on a
        +different machine, you can override the `endpoint` parameter in the
        +advanced settings, setting it to
        +`http(s)://<host>:<port>/devstoreaccount1`
        +(e.g. `http://10.254.2.5:10000/devstoreaccount1`).
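+
+A minimal config section for a remote azurite instance might look like
+this (a sketch - the remote name and endpoint are illustrative):
+
+    [azemulator]
+    type = azureblob
+    use_emulator = true
+    endpoint = http://10.254.2.5:10000/devstoreaccount1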
        +
        +#  Microsoft OneDrive
        +
        +Paths are specified as `remote:path`
        +
        +Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
        +
        +## Configuration
        +
        +The initial setup for OneDrive involves getting a token from
        +Microsoft which you need to do in your browser.  `rclone config` walks
        +you through it.
        +
        +Here is an example of how to make a remote called `remote`.  First run:
        +
        +     rclone config
        +
        +This will guide you through an interactive setup process:
        +
        +
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> n
+name> remote
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / Microsoft OneDrive
+   \ "onedrive"
+[snip]
+Storage> onedrive
+Microsoft App Client Id
+Leave blank normally.
+Enter a string value. Press Enter for the default ("").
+client_id>
+Microsoft App Client Secret
+Leave blank normally.
+Enter a string value. Press Enter for the default ("").
+client_secret>
+Edit advanced config? (y/n)
+y) Yes
+n) No
+y/n> n
+Remote config
+Use web browser to automatically authenticate rclone with remote?
+ * Say Y if the machine running rclone has a web browser you can use
+ * Say N if running rclone on a (remote) machine without web browser access
+If not sure try Y. If Y failed, try N.
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+Choose a number from below, or type in an existing value
+ 1 / OneDrive Personal or Business
+   \ "onedrive"
+ 2 / Sharepoint site
+   \ "sharepoint"
+ 3 / Type in driveID
+   \ "driveid"
+ 4 / Type in SiteID
+   \ "siteid"
+ 5 / Search a Sharepoint site
+   \ "search"
+Your choice> 1
+Found 1 drives, please select the one you want to use:
+0: OneDrive (business) id=b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
+Chose drive to use:> 0
+Found drive 'root' of type 'business', URL: https://org-my.sharepoint.com/personal/you/Documents
+Is that okay?
+y) Yes
+n) No
+y/n> y
+--------------------
+[remote]
+type = onedrive
+token = {"access_token":"youraccesstoken","token_type":"Bearer","refresh_token":"yourrefreshtoken","expiry":"2018-08-26T22:39:52.486512262+08:00"}
+drive_id = b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
+drive_type = business
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
        
        +See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
        +machine with no Internet browser available.
        +
        +Note that rclone runs a webserver on your local machine to collect the
        +token as returned from Microsoft. This only runs from the moment it
        +opens your browser to the moment you get back the verification
+code.  This is on `http://127.0.0.1:53682/` and it may require
        +you to unblock it temporarily if you are running a host firewall.
        +
        +Once configured you can then use `rclone` like this,
        +
        +List directories in top level of your OneDrive
        +
        +    rclone lsd remote:
        +
        +List all the files in your OneDrive
        +
        +    rclone ls remote:
        +
+To copy a local directory to a OneDrive directory called backup
        +
        +    rclone copy /home/source remote:backup
        +
        +### Getting your own Client ID and Key
        +
        +rclone uses a default Client ID when talking to OneDrive, unless a custom `client_id` is specified in the config.
        +The default Client ID and Key are shared by all rclone users when performing requests.
        +
        +You may choose to create and use your own Client ID, in case the default one does not work well for you. 
        +For example, you might see throttling.
        +
        +#### Creating Client ID for OneDrive Personal
        +
        +To create your own Client ID, please follow these steps:
        +
        +1. Open https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade and then click `New registration`.
        +2. Enter a name for your app, choose account type `Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)`, select `Web` in `Redirect URI`, then type (do not copy and paste) `http://localhost:53682/` and click Register. Copy and keep the `Application (client) ID` under the app name for later use.
        +3. Under `manage` select `Certificates & secrets`, click `New client secret`. Enter a description (can be anything) and set `Expires` to 24 months. Copy and keep that secret _Value_ for later use (you _won't_ be able to see this value afterwards).
        +4. Under `manage` select `API permissions`, click `Add a permission` and select `Microsoft Graph` then select `delegated permissions`.
        +5. Search and select the following permissions: `Files.Read`, `Files.ReadWrite`, `Files.Read.All`, `Files.ReadWrite.All`, `offline_access`, `User.Read` and `Sites.Read.All` (if custom access scopes are configured, select the permissions accordingly). Once selected click `Add permissions` at the bottom.
        +
        +Now the application is complete. Run `rclone config` to create or edit a OneDrive remote.
        +Supply the app ID and password as Client ID and Secret, respectively. rclone will walk you through the remaining steps.
        +
        +The access_scopes option allows you to configure the permissions requested by rclone.
        +See [Microsoft Docs](https://docs.microsoft.com/en-us/graph/permissions-reference#files-permissions) for more information about the different scopes.
        +
        +The `Sites.Read.All` permission is required if you need to [search SharePoint sites when configuring the remote](https://github.com/rclone/rclone/pull/5883). However, if that permission is not assigned, you need to exclude `Sites.Read.All` from your access scopes or set `disable_site_permission` option to true in the advanced options.
        +
        +#### Creating Client ID for OneDrive Business
        +
        +The steps for OneDrive Personal may or may not work for OneDrive Business, depending on the security settings of the organization.
        +A common error is that the publisher of the App is not verified.
        +
+You may try to [verify your account](https://docs.microsoft.com/en-us/azure/active-directory/develop/publisher-verification-overview), or try to limit the App to your organization only, as shown below.
        +
        +1. Make sure to create the App with your business account.
        +2. Follow the steps above to create an App. However, we need a different account type here: `Accounts in this organizational directory only (*** - Single tenant)`. Note that you can also change the account type after creating the App.
        +3. Find the [tenant ID](https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-how-to-find-tenant) of your organization.
        +4. In the rclone config, set `auth_url` to `https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/authorize`.
        +5. In the rclone config, set `token_url` to `https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token`.
        +
+Note: If you have a special region, you may need a different host in steps 4 and 5. Here are [some hints](https://github.com/rclone/rclone/blob/bc23bf11db1c78c6ebbf8ea538fbebf7058b4176/backend/onedrive/onedrive.go#L86).
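+
+A resulting config section might look like this (a sketch - the remote
+name and the client_id/client_secret values are placeholders):
+
+    [business]
+    type = onedrive
+    client_id = YOUR_CLIENT_ID
+    client_secret = YOUR_CLIENT_SECRET
+    auth_url = https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/authorize
+    token_url = https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token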
        +
        +
        +### Modification time and hashes
        +
        +OneDrive allows modification times to be set on objects accurate to 1
        +second.  These will be used to detect whether objects need syncing or
        +not.
        +
        +OneDrive Personal, OneDrive for Business and Sharepoint Server support
        +[QuickXorHash](https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash).
        +
        +Before rclone 1.62 the default hash for Onedrive Personal was `SHA1`.
        +For rclone 1.62 and above the default for all Onedrive backends is
        +`QuickXorHash`.
        +
        +Starting from July 2023 `SHA1` support is being phased out in Onedrive
        +Personal in favour of `QuickXorHash`. If necessary the
        +`--onedrive-hash-type` flag (or `hash_type` config option) can be used
        +to select `SHA1` during the transition period if this is important
+for your workflow.
        +
        +For all types of OneDrive you can use the `--checksum` flag.
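+
+For example, to sync comparing checksums rather than modification
+times (a sketch):
+
+    rclone sync --checksum /home/source remote:backup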
        +
        +### Restricted filename characters
        +
        +In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
        +the following characters are also replaced:
        +
        +| Character | Value | Replacement |
        +| --------- |:-----:|:-----------:|
        +| "         | 0x22  | "          |
        +| *         | 0x2A  | *          |
        +| :         | 0x3A  | :          |
        +| <         | 0x3C  | <          |
        +| >         | 0x3E  | >          |
        +| ?         | 0x3F  | ?          |
        +| \         | 0x5C  | \          |
        +| \|        | 0x7C  | |          |
        +
        +File names can also not end with the following characters.
        +These only get replaced if they are the last character in the name:
        +
        +| Character | Value | Replacement |
        +| --------- |:-----:|:-----------:|
        +| SP        | 0x20  | ␠           |
        +| .         | 0x2E  | .          |
        +
        +File names can also not begin with the following characters.
        +These only get replaced if they are the first character in the name:
        +
        +| Character | Value | Replacement |
        +| --------- |:-----:|:-----------:|
        +| SP        | 0x20  | ␠           |
        +| ~         | 0x7E  | ~          |
        +
        +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
        +as they can't be used in JSON strings.
        +
        +### Deleting files
        +
        +Any files you delete with rclone will end up in the trash.  Microsoft
        +doesn't provide an API to permanently delete files, nor to empty the
        +trash, so you will have to do that with one of Microsoft's apps or via
        +the OneDrive website.
        +
        +
        +### Standard options
        +
        +Here are the Standard options specific to onedrive (Microsoft OneDrive).
        +
        +#### --onedrive-client-id
        +
        +OAuth Client Id.
        +
         Leave blank normally.
        -Enter a string value. Press Enter for the default ("").
        -client_secret>
        -Edit advanced config? (y/n)
        -y) Yes
        -n) No
        -y/n> n
        -Remote config
        -Use web browser to automatically authenticate rclone with remote?
        - * Say Y if the machine running rclone has a web browser you can use
        - * Say N if running rclone on a (remote) machine without web browser access
        -If not sure try Y. If Y failed, try N.
        -y) Yes
        -n) No
        -y/n> y
        -If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
        -Log in and authorize rclone for access
        -Waiting for code...
        -Got code
        -Choose a number from below, or type in an existing value
        - 1 / OneDrive Personal or Business
        -   \ "onedrive"
        - 2 / Sharepoint site
        -   \ "sharepoint"
        - 3 / Type in driveID
        -   \ "driveid"
        - 4 / Type in SiteID
        -   \ "siteid"
        - 5 / Search a Sharepoint site
        -   \ "search"
        -Your choice> 1
        -Found 1 drives, please select the one you want to use:
        -0: OneDrive (business) id=b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
        -Chose drive to use:> 0
        -Found drive 'root' of type 'business', URL: https://org-my.sharepoint.com/personal/you/Documents
        -Is that okay?
        -y) Yes
        -n) No
        -y/n> y
        ---------------------
        -[remote]
        -type = onedrive
        -token = {"access_token":"youraccesstoken","token_type":"Bearer","refresh_token":"yourrefreshtoken","expiry":"2018-08-26T22:39:52.486512262+08:00"}
        -drive_id = b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
        -drive_type = business
        ---------------------
        -y) Yes this is OK
        -e) Edit this remote
        -d) Delete this remote
        -y/e/d> y
        -

Properties:

-   Config: client_id
-   Env Var: RCLONE_ONEDRIVE_CLIENT_ID
-   Type: string
-   Required: false

--onedrive-client-secret

OAuth Client Secret.

Leave blank normally.

Properties:

-   Config: client_secret
-   Env Var: RCLONE_ONEDRIVE_CLIENT_SECRET
-   Type: string
-   Required: false

--onedrive-region

Choose national cloud region for OneDrive.

Properties:

-   Config: region
-   Env Var: RCLONE_ONEDRIVE_REGION
-   Type: string
-   Default: "global"
-   Examples:
    -   "global"
        -   Microsoft Cloud Global
    -   "us"
        -   Microsoft Cloud for US Government
    -   "de"
        -   Microsoft Cloud Germany
    -   "cn"
        -   Azure and Office 365 operated by Vnet Group in China

Advanced options

Here are the Advanced options specific to onedrive (Microsoft OneDrive).

--onedrive-token

OAuth Access Token as a JSON blob.

Properties:

-   Config: token
-   Env Var: RCLONE_ONEDRIVE_TOKEN
-   Type: string
-   Required: false

--onedrive-auth-url

Auth server URL.

Leave blank to use the provider defaults.

Properties:

-   Config: auth_url
-   Env Var: RCLONE_ONEDRIVE_AUTH_URL
-   Type: string
-   Required: false

--onedrive-token-url

Token server url.

Leave blank to use the provider defaults.

Properties:

-   Config: token_url
-   Env Var: RCLONE_ONEDRIVE_TOKEN_URL
-   Type: string
-   Required: false

--onedrive-chunk-size

Chunk size to upload files with - must be multiple of 320k (327,680 bytes).

Above this size files will be chunked - must be multiple of 320k (327,680 bytes) and should not exceed 250M (262,144,000 bytes) else you may encounter "Microsoft.SharePoint.Client.InvalidClientQueryException: The request message is too big." Note that the chunks will be buffered into memory.

Properties:

-   Config: chunk_size
-   Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE
-   Type: SizeSuffix
-   Default: 10Mi
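For example, a larger chunk size can speed up transfers of big files at
the cost of memory. 20M satisfies the multiple-of-320k rule because
20,971,520 bytes = 64 x 327,680 (a sketch):

    rclone copy --onedrive-chunk-size 20M /data/bigfile remote:backup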

--onedrive-drive-id

The ID of the drive to use.

Properties:

-   Config: drive_id
-   Env Var: RCLONE_ONEDRIVE_DRIVE_ID
-   Type: string
-   Required: false

--onedrive-drive-type

The type of the drive (personal | business | documentLibrary).

Properties:

-   Config: drive_type
-   Env Var: RCLONE_ONEDRIVE_DRIVE_TYPE
-   Type: string
-   Required: false

--onedrive-root-folder-id

ID of the root folder.

This isn't normally needed, but in special circumstances you might know the folder ID that you wish to access but not be able to get there through a path traversal.

Properties:

-   Config: root_folder_id
-   Env Var: RCLONE_ONEDRIVE_ROOT_FOLDER_ID
-   Type: string
-   Required: false

--onedrive-access-scopes

Set scopes to be requested by rclone.

Choose or manually enter a custom space separated list with all scopes, that rclone should request.

Properties:

-   Config: access_scopes
-   Env Var: RCLONE_ONEDRIVE_ACCESS_SCOPES
-   Type: SpaceSepList
-   Default: Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access
-   Examples:
    -   "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access"
        -   Read and write access to all resources
    -   "Files.Read Files.Read.All Sites.Read.All offline_access"
        -   Read only access to all resources
    -   "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All offline_access"
        -   Read and write access to all resources, without the ability to browse SharePoint sites.
        -   Same as if disable_site_permission was set to true
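For example, a read-only remote could be set up non-interactively like
this (a sketch - the remote name onedrive-ro is illustrative):

    rclone config create onedrive-ro onedrive access_scopes "Files.Read Files.Read.All Sites.Read.All offline_access"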

--onedrive-disable-site-permission

Disable the request for Sites.Read.All permission.

If set to true, you will no longer be able to search for a SharePoint site when configuring drive ID, because rclone will not request Sites.Read.All permission. Set it to true if your organization didn't assign Sites.Read.All permission to the application, and your organization disallows users to consent app permission request on their own.

Properties:

-   Config: disable_site_permission
-   Env Var: RCLONE_ONEDRIVE_DISABLE_SITE_PERMISSION
-   Type: bool
-   Default: false

--onedrive-expose-onenote-files

Set to make OneNote files show up in directory listings.

By default, rclone will hide OneNote files in directory listings because operations like "Open" and "Update" won't work on them. But this behaviour may also prevent you from deleting them. If you want to delete OneNote files or otherwise want them to show up in directory listing, set this option.

Properties:

-   Config: expose_onenote_files
-   Env Var: RCLONE_ONEDRIVE_EXPOSE_ONENOTE_FILES
-   Type: bool
-   Default: false

--onedrive-server-side-across-configs

Deprecated: use --server-side-across-configs instead.

Allow server-side operations (e.g. copy) to work across different onedrive configs.

This will only work if you are copying between two OneDrive Personal drives AND the files to copy are already shared between them. In other cases, rclone will fall back to normal copy (which will be slightly slower).

Properties:

-   Config: server_side_across_configs
-   Env Var: RCLONE_ONEDRIVE_SERVER_SIDE_ACROSS_CONFIGS
-   Type: bool
-   Default: false

--onedrive-list-chunk

Size of listing chunk.

Properties:

-   Config: list_chunk
-   Env Var: RCLONE_ONEDRIVE_LIST_CHUNK
-   Type: int
-   Default: 1000

--onedrive-no-versions

Remove all versions on modifying operations.

Onedrive for business creates versions when rclone uploads new files overwriting an existing one and when it sets the modification time.

These versions take up space out of the quota.

This flag checks for versions after file upload and setting modification time and removes all but the last version.

NB Onedrive personal can't currently delete versions so don't use this flag there.

Properties:

-   Config: no_versions
-   Env Var: RCLONE_ONEDRIVE_NO_VERSIONS
-   Type: bool
-   Default: false
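For example, to prune the extra versions as they are created during an
upload (a sketch - not for Onedrive personal, see the NB above):

    rclone copy --onedrive-no-versions /home/source remote:backup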
--onedrive-link-scope

Set the scope of the links created by the link command.

Properties:

-   Config: link_scope
-   Env Var: RCLONE_ONEDRIVE_LINK_SCOPE
-   Type: string
-   Default: "anonymous"
-   Examples:
    -   "anonymous"
        -   Anyone with the link has access, without needing to sign in.
        -   This may include people outside of your organization.
        -   Anonymous link support may be disabled by an administrator.
    -   "organization"
        -   Anyone signed into your organization (tenant) can use the link to get access.
        -   Only available in OneDrive for Business and SharePoint.

--onedrive-link-type

Set the type of the links created by the link command.

Properties:

-   Config: link_type
-   Env Var: RCLONE_ONEDRIVE_LINK_TYPE
-   Type: string
-   Default: "view"
-   Examples:
    -   "view"
        -   Creates a read-only link to the item.
    -   "edit"
        -   Creates a read-write link to the item.
    -   "embed"
        -   Creates an embeddable link to the item.

--onedrive-link-password

Set the password for links created by the link command.

At the time of writing this only works with OneDrive personal paid accounts.

Properties:

-   Config: link_password
-   Env Var: RCLONE_ONEDRIVE_LINK_PASSWORD
-   Type: string
-   Required: false

--onedrive-hash-type

Specify the hash in use for the backend.

This specifies the hash type in use. If set to "auto" it will use the default hash which is QuickXorHash.

Before rclone 1.62 an SHA1 hash was used by default for Onedrive Personal. For 1.62 and later the default is to use a QuickXorHash for all onedrive types. If an SHA1 hash is desired then set this option accordingly.

From July 2023 QuickXorHash will be the only available hash for both OneDrive for Business and OneDrive Personal.

This can be set to "none" to not use any hashes.

If the hash requested does not exist on the object, it will be returned as an empty string which is treated as a missing hash by rclone.

Properties:

-   Config: hash_type
-   Env Var: RCLONE_ONEDRIVE_HASH_TYPE
-   Type: string
-   Default: "auto"
-   Examples:
    -   "auto"
        -   Rclone chooses the best hash
    -   "quickxor"
        -   QuickXor
    -   "sha1"
        -   SHA1
    -   "sha256"
        -   SHA256
    -   "crc32"
        -   CRC32
    -   "none"
        -   None - don't use any hashes
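For example, to list the QuickXorHash of every file (a sketch using the
rclone hashsum command):

    rclone hashsum quickxor remote:path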

--onedrive-av-override

Allows download of files the server thinks have a virus.

The onedrive/sharepoint server may check files uploaded with an Anti Virus checker. If it detects any potential viruses or malware it will block download of the file.

In this case you will see a message like this

    server reports this file is infected with a virus - use --onedrive-av-override to download anyway: Infected (name of virus): 403 Forbidden:

If you are 100% sure you want to download this file anyway then use the --onedrive-av-override flag, or av_override = true in the config file.

Properties:

-   Config: av_override
-   Env Var: RCLONE_ONEDRIVE_AV_OVERRIDE
-   Type: bool
-   Default: false
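For example (a sketch - the file path is illustrative):

    rclone copy --onedrive-av-override remote:path/flagged-file.zip /tmp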

--onedrive-encoding

The encoding for the backend.

See the encoding section in the overview for more info.

Properties:

-   Config: encoding
-   Env Var: RCLONE_ONEDRIVE_ENCODING
-   Type: MultiEncoder
-   Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot

Limitations

If you don't use rclone for 90 days the refresh token will expire. This will result in authorization problems. This is easy to fix by running the rclone config reconnect remote: command to get a new token and refresh token.

Naming

Note that OneDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

There are quite a few characters that can't be in OneDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it, it will be mapped to ？ instead.

File sizes

The largest allowed file size is 250 GiB for both OneDrive Personal and OneDrive for Business (Updated 13 Jan 2021).

Path length

The entire path, including the file name, must contain fewer than 400 characters for OneDrive, OneDrive for Business and SharePoint Online. If you are encrypting file and folder names with rclone, you may want to pay attention to this limitation because the encrypted names are typically longer than the original ones.

Number of files

OneDrive seems to be OK with at least 50,000 files in a folder, but at 100,000 rclone will get errors listing the directory like couldn't list files: UnknownError:. See #2707 for more info.

An official document about the limitations for different types of OneDrive can be found here.

Versions

Every change in a file causes OneDrive to create a new version of the file. This counts against a user's quota. For example changing the modification time of a file creates a second version, so the file apparently uses twice the space.

For example the copy command is affected by this as rclone copies the file and then afterwards sets the modification time to match the source file which uses another version.

You can use the rclone cleanup command (see below) to remove all old versions.

Or you can set the no_versions parameter to true and rclone will remove versions after operations which create new versions. This takes extra transactions so only enable it if you need it.

Note At the time of writing Onedrive Personal creates versions (but not for setting the modification time) but the API for removing them returns "API not found" so cleanup and no_versions should not be used on Onedrive Personal.

Disabling versioning

Starting October 2018, users will no longer be able to disable versioning by default. This is because Microsoft has brought an update to the mechanism. To change this new default setting, a PowerShell command is required to be run by a SharePoint admin. If you are an admin, you can run these commands in PowerShell to change that setting:

1.  Install-Module -Name Microsoft.Online.SharePoint.PowerShell (in case you haven't installed this already)
2.  Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking
3.  Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM (replacing YOURSITE, YOU, YOURSITE.COM with the actual values; this will prompt for your credentials)
4.  Set-SPOTenant -EnableMinimumVersionRequirement $False
5.  Disconnect-SPOService (to disconnect from the server)

Below are the steps for normal users to disable versioning. If you don't see the "No Versioning" option, make sure the above requirements are met.

User Weropol has found a method to disable versioning on OneDrive

1.  Open the settings menu by clicking on the gear symbol at the top of the OneDrive Business page.
2.  Click Site settings.
3.  Once on the Site settings page, navigate to Site Administration > Site libraries and lists.
4.  Click Customize "Documents".
5.  Click General Settings > Versioning Settings.
6.  Under Document Version History select the option No versioning. Note: This will disable the creation of new file versions, but will not remove any previous versions. Your documents are safe.
7.  Apply the changes by clicking OK.
8.  Use rclone to upload or modify files. (I also use the --no-update-modtime flag)
9.  Restore the versioning settings after using rclone. (Optional)

Cleanup

OneDrive supports rclone cleanup which causes rclone to look through every file under the path supplied and delete all version but the current version. Because this involves traversing all the files, then querying each file for versions it can be quite slow. Rclone does --checkers tests in parallel. The command also supports --interactive/i or --dry-run which is a great way to see what it would do.

    rclone cleanup --interactive remote:path/subdir # interactively remove all old version for path/subdir
    rclone cleanup remote:path/subdir               # unconditionally remove all old version for path/subdir

NB Onedrive personal can't currently delete versions

Troubleshooting

Excessive throttling or blocked on SharePoint

If you experience excessive throttling or are being blocked on SharePoint then it may help to set the user agent explicitly with a flag like this: --user-agent "ISV|rclone.org|rclone/v1.55.1"

The specific details can be found in the Microsoft document: Avoid getting throttled or blocked in SharePoint Online
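A full sync command with the user agent set might look like this (a
sketch):

    rclone sync --user-agent "ISV|rclone.org|rclone/v1.55.1" /home/source remote:backup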

Unexpected file size/hash differences on Sharepoint

It is a known issue that Sharepoint (not OneDrive or OneDrive for Business) silently modifies uploaded files, mainly Office files (.docx, .xlsx, etc.), causing file size and hash checks to fail. There are also other situations that will cause OneDrive to report inconsistent file sizes. To use rclone with such affected files on Sharepoint, you may disable these checks with the following command line arguments:

    --ignore-checksum --ignore-size

Alternatively, if you have write access to the OneDrive files, it may be possible to fix this problem for certain files, by attempting the steps below. Open the web interface for OneDrive and find the affected files (which will be in the error messages/log for rclone). Simply click on each of these files, causing OneDrive to open them on the web. This will cause each file to be converted in place to a format that is functionally equivalent but which will no longer trigger the size discrepancy. Once all problematic files are converted you will no longer need the ignore options above.

Replacing/deleting existing files on Sharepoint gets "item not found"

It is a known issue that Sharepoint (not OneDrive or OneDrive for Business) may return "item not found" errors when users try to replace or delete uploaded files; this seems to mainly affect Office files (.docx, .xlsx, etc.) and web files (.html, .aspx, etc.). As a workaround, you may use the --backup-dir <BACKUP_DIR> command line argument so rclone moves the files to be replaced/deleted into a given backup directory (instead of directly replacing/deleting them). For example, to instruct rclone to move the files into the directory rclone-backup-dir on backend mysharepoint, you may use:

    --backup-dir mysharepoint:rclone-backup-dir
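A full command line might then look like this (a sketch - the source
and destination paths are illustrative):

    rclone sync --backup-dir mysharepoint:rclone-backup-dir /home/source mysharepoint:documents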

access_denied (AADSTS65005)

    Error: access_denied
    Code: AADSTS65005
    Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned.

This means that rclone can't use the OneDrive for Business API with your account. You can't do much about it, maybe write an email to your admins.

However, there are other ways to interact with your OneDrive account. Have a look at the WebDAV backend: https://rclone.org/webdav/#sharepoint

invalid_grant (AADSTS50076)

    Error: invalid_grant
    Code: AADSTS50076
    Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '...'.

If you see the error above after enabling multi-factor authentication for your account, you can fix it by refreshing your OAuth refresh token. To do that, run rclone config, and choose to edit your OneDrive backend. Then, you don't need to actually make any changes until you reach this question: Already have a token - refresh?. For this question, answer y and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend.

Invalid request when making public links

On Sharepoint and OneDrive for Business, rclone link may return an "Invalid request" error. A possible cause is that the organisation admin didn't allow public links to be made for the organisation/sharepoint library. To fix the permissions as an admin, take a look at the docs: 1, 2.

Can not access "Shared with me" files

"Shared with me" files are not supported by rclone currently, but there is a workaround:

1.  Visit https://onedrive.live.com
2.  Right click an item in Shared, then click Add shortcut to My files in the context menu.
3.  The shortcut will appear in My files. You can access it with rclone; it behaves like a normal folder/file.

Live Photos uploaded from iOS (small video clips in .heic files)

The iOS OneDrive app introduced upload and storage of Live Photos in 2020. The usage and download of these uploaded Live Photos is unfortunately still work-in-progress and this introduces several issues when copying, synchronising and mounting - both in rclone and in the native OneDrive client on Windows.

The root cause can easily be seen if you locate one of your Live Photos in the OneDrive web interface. Then download the photo from the web interface. You will then see that the size of downloaded .heic file is smaller than the size displayed in the web interface. The downloaded file is smaller because it only contains a single frame (still photo) extracted from the Live Photo (movie) stored in OneDrive.

The different sizes will cause rclone copy/sync to repeatedly recopy unmodified photos something like this:

    DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667)
    DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK
    INFO  : 20230203_123826234_iOS.heic: Copied (replaced existing)

These recopies can be worked around by adding --ignore-size. Please note that this workaround only syncs the still-picture not the movie clip, and relies on modification dates being correctly updated on all files in all situations.

The different sizes will also cause rclone check to report size errors something like this:

    ERROR : 20230203_123826234_iOS.heic: sizes differ

These check errors can be suppressed by adding --ignore-size.

The different sizes will also cause rclone mount to fail downloading with an error something like this:

    ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF

or like this when using --cache-mode=full:

    INFO  : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
    ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:

OpenDrive

Paths are specified as remote:path

Paths may be as deep as required, e.g. remote:directory/subdirectory.

Configuration

Here is an example of how to make a remote called remote. First run:

    rclone config

This will guide you through an interactive setup process:

    n) New remote
    d) Delete remote
    q) Quit config
    e/n/d/q> n
    name> remote
    Type of storage to configure.
    Choose a number from below, or type in your own value
    [snip]
    XX / OpenDrive
       \ "opendrive"
    [snip]
    Storage> opendrive
    Username
    username>
    Password
    y) Yes type in my own password
    g) Generate random password
    y/g> y
    Enter the password:
    password:
    Confirm the password:
    password:
    --------------------
    [remote]
    username =
    password = *** ENCRYPTED ***
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y

List directories in top level of your OpenDrive

    rclone lsd remote:

List all the files in your OpenDrive

    rclone ls remote:

To copy a local directory to an OpenDrive directory called backup

    rclone copy /home/source remote:backup

Modified time and MD5SUMs

OpenDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

Restricted filename characters

  Character   Value   Replacement
  ---------   -----   -----------
  NUL         0x00    ␀
  /           0x2F    ／
  "           0x22    ＂
  *           0x2A    ＊
  :           0x3A    ：
  <           0x3C    ＜
  >           0x3E    ＞
  ?           0x3F    ？
  \           0x5C    ＼
  |           0x7C    ｜

File names can also not begin or end with the following characters. These only get replaced if they are the first or last character in the name:

  Character   Value   Replacement
  ---------   -----   -----------
  SP          0x20    ␠
  HT          0x09    ␉
  LF          0x0A    ␊
  VT          0x0B    ␋
  CR          0x0D    ␍

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Standard options

Here are the Standard options specific to opendrive (OpenDrive).

--opendrive-username

Username.

Properties:

-   Config: username
-   Env Var: RCLONE_OPENDRIVE_USERNAME
-   Type: string
-   Required: true

--opendrive-password

Password.

NB Input to this must be obscured - see rclone obscure.

Properties:

-   Config: password
-   Env Var: RCLONE_OPENDRIVE_PASSWORD
-   Type: string
-   Required: true

        Advanced options

        -

        Here are the Advanced options specific to opendrive (OpenDrive).

        -

        --opendrive-encoding

        -

        The encoding for the backend.

        -

        See the encoding section in the overview for more info.

        -

        Properties:

        -
          -
        • Config: encoding
        • -
        • Env Var: RCLONE_OPENDRIVE_ENCODING
        • -
        • Type: MultiEncoder
        • -
        • Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot
        • -
        -

        --opendrive-chunk-size

        -

        Files will be uploaded in chunks this size.

        -

        Note that these chunks are buffered in memory so increasing them will increase memory use.

        -

        Properties:

        -
          -
        • Config: chunk_size
        • -
        • Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE
        • -
        • Type: SizeSuffix
        • -
        • Default: 10Mi
        • -
        -

        Limitations

        -

        Note that OpenDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

        -

        There are quite a few characters that can't be in OpenDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it will be mapped to instead.

        -

        rclone about is not supported by the OpenDrive backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

        -

        See List of backends that do not support rclone about and rclone about

        -

        -Oracle Object Storage
        -
        -Oracle Object Storage Overview
        -
        -Oracle Object Storage FAQ
        -
        -Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.
        -
        -Configuration
        -
        -Here is an example of making an oracle object storage configuration. rclone config walks you through it.
        -
        -Here is an example of how to make a remote called remote. First run:
        -
        - rclone config
        -
        -This will guide you through an interactive setup process:
        -
        -n) New remote
        -d) Delete remote
        -r) Rename remote
        -c) Copy remote
        -s) Set configuration password
        -q) Quit config
        -e/n/d/r/c/s/q> n
         
        -Enter name for new remote.
        -name> remote
        +Properties:
         
        -Option Storage.
        -Type of storage to configure.
        -Choose a number from below, or type in your own value.
        -[snip]
        -XX / Oracle Cloud Infrastructure Object Storage
        -   \ (oracleobjectstorage)
        -Storage> oracleobjectstorage
        +- Config:      client_id
        +- Env Var:     RCLONE_ONEDRIVE_CLIENT_ID
        +- Type:        string
        +- Required:    false
        +
        +#### --onedrive-client-secret
        +
        +OAuth Client Secret.
        +
        +Leave blank normally.
        +
        +Properties:
        +
        +- Config:      client_secret
        +- Env Var:     RCLONE_ONEDRIVE_CLIENT_SECRET
        +- Type:        string
        +- Required:    false
        +
        +#### --onedrive-region
        +
        +Choose national cloud region for OneDrive.
        +
        +Properties:
        +
        +- Config:      region
        +- Env Var:     RCLONE_ONEDRIVE_REGION
        +- Type:        string
        +- Default:     "global"
        +- Examples:
        +    - "global"
        +        - Microsoft Cloud Global
        +    - "us"
        +        - Microsoft Cloud for US Government
        +    - "de"
        +        - Microsoft Cloud Germany
        +    - "cn"
        +        - Azure and Office 365 operated by Vnet Group in China
        +
        +### Advanced options
        +
        +Here are the Advanced options specific to onedrive (Microsoft OneDrive).
        +
        +#### --onedrive-token
        +
        +OAuth Access Token as a JSON blob.
        +
        +Properties:
        +
        +- Config:      token
        +- Env Var:     RCLONE_ONEDRIVE_TOKEN
        +- Type:        string
        +- Required:    false
        +
        +#### --onedrive-auth-url
        +
        +Auth server URL.
        +
        +Leave blank to use the provider defaults.
        +
        +Properties:
        +
        +- Config:      auth_url
        +- Env Var:     RCLONE_ONEDRIVE_AUTH_URL
        +- Type:        string
        +- Required:    false
        +
        +#### --onedrive-token-url
        +
        +Token server url.
        +
        +Leave blank to use the provider defaults.
        +
        +Properties:
        +
        +- Config:      token_url
        +- Env Var:     RCLONE_ONEDRIVE_TOKEN_URL
        +- Type:        string
        +- Required:    false
        +
        +#### --onedrive-chunk-size
        +
        +Chunk size to upload files with - must be multiple of 320k (327,680 bytes).
        +
        +Above this size files will be chunked - must be multiple of 320k (327,680 bytes) and
        +should not exceed 250M (262,144,000 bytes) else you may encounter "Microsoft.SharePoint.Client.InvalidClientQueryException: The request message is too big."
        +Note that the chunks will be buffered into memory.
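        +
        +For example (a sketch; the size must stay a multiple of 320k, and the
        +remote name and paths are illustrative only):
        +
        +    rclone copy --onedrive-chunk-size 100Mi /local/path remote:path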
        +
        +Properties:
        +
        +- Config:      chunk_size
        +- Env Var:     RCLONE_ONEDRIVE_CHUNK_SIZE
        +- Type:        SizeSuffix
        +- Default:     10Mi
        +
        +#### --onedrive-drive-id
        +
        +The ID of the drive to use.
        +
        +Properties:
        +
        +- Config:      drive_id
        +- Env Var:     RCLONE_ONEDRIVE_DRIVE_ID
        +- Type:        string
        +- Required:    false
        +
        +#### --onedrive-drive-type
        +
        +The type of the drive (personal | business | documentLibrary).
        +
        +Properties:
        +
        +- Config:      drive_type
        +- Env Var:     RCLONE_ONEDRIVE_DRIVE_TYPE
        +- Type:        string
        +- Required:    false
        +
        +#### --onedrive-root-folder-id
        +
        +ID of the root folder.
        +
        +This isn't normally needed, but in special circumstances you might
        +know the folder ID that you wish to access but not be able to get
        +there through a path traversal.
        +
        +
        +Properties:
        +
        +- Config:      root_folder_id
        +- Env Var:     RCLONE_ONEDRIVE_ROOT_FOLDER_ID
        +- Type:        string
        +- Required:    false
        +
        +#### --onedrive-access-scopes
        +
        +Set scopes to be requested by rclone.
        +
        +Choose or manually enter a custom space separated list with all scopes, that rclone should request.
        +
        +
        +Properties:
        +
        +- Config:      access_scopes
        +- Env Var:     RCLONE_ONEDRIVE_ACCESS_SCOPES
        +- Type:        SpaceSepList
        +- Default:     Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access
        +- Examples:
        +    - "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access"
        +        - Read and write access to all resources
        +    - "Files.Read Files.Read.All Sites.Read.All offline_access"
        +        - Read only access to all resources
        +    - "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All offline_access"
        +        - Read and write access to all resources, without the ability to browse SharePoint sites. 
        +        - Same as if disable_site_permission was set to true
        +
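        +For example, a read-only scope set can also be supplied through the
        +documented environment variable (a sketch; the remote name is illustrative):
        +
        +    RCLONE_ONEDRIVE_ACCESS_SCOPES="Files.Read Files.Read.All Sites.Read.All offline_access" rclone lsd remote:
        +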
        +#### --onedrive-disable-site-permission
        +
        +Disable the request for Sites.Read.All permission.
        +
        +If set to true, you will no longer be able to search for a SharePoint site when
        +configuring drive ID, because rclone will not request Sites.Read.All permission.
        +Set it to true if your organization didn't assign Sites.Read.All permission to the
        +application, and your organization disallows users to consent app permission
        +request on their own.
        +
        +Properties:
        +
        +- Config:      disable_site_permission
        +- Env Var:     RCLONE_ONEDRIVE_DISABLE_SITE_PERMISSION
        +- Type:        bool
        +- Default:     false
        +
        +#### --onedrive-expose-onenote-files
        +
        +Set to make OneNote files show up in directory listings.
        +
        +By default, rclone will hide OneNote files in directory listings because
        +operations like "Open" and "Update" won't work on them.  But this
        +behaviour may also prevent you from deleting them.  If you want to
        +delete OneNote files or otherwise want them to show up in directory
        +listing, set this option.
        +
        +Properties:
        +
        +- Config:      expose_onenote_files
        +- Env Var:     RCLONE_ONEDRIVE_EXPOSE_ONENOTE_FILES
        +- Type:        bool
        +- Default:     false
        +
        +#### --onedrive-server-side-across-configs
        +
        +Deprecated: use --server-side-across-configs instead.
        +
        +Allow server-side operations (e.g. copy) to work across different onedrive configs.
        +
        +This will only work if you are copying between two OneDrive *Personal* drives AND
        +the files to copy are already shared between them.  In other cases, rclone will
        +fall back to normal copy (which will be slightly slower).
        +
        +Properties:
        +
        +- Config:      server_side_across_configs
        +- Env Var:     RCLONE_ONEDRIVE_SERVER_SIDE_ACROSS_CONFIGS
        +- Type:        bool
        +- Default:     false
        +
        +#### --onedrive-list-chunk
        +
        +Size of listing chunk.
        +
        +Properties:
        +
        +- Config:      list_chunk
        +- Env Var:     RCLONE_ONEDRIVE_LIST_CHUNK
        +- Type:        int
        +- Default:     1000
        +
        +#### --onedrive-no-versions
        +
        +Remove all versions on modifying operations.
        +
        +Onedrive for business creates versions when rclone uploads new files
        +overwriting an existing one and when it sets the modification time.
        +
        +These versions take up space out of the quota.
        +
        +This flag checks for versions after file upload and setting
        +modification time and removes all but the last version.
        +
        +**NB** Onedrive personal can't currently delete versions so don't use
        +this flag there.
        +
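        +For example (a sketch; `business:` is an illustrative remote name for a
        +OneDrive for Business remote):
        +
        +    rclone copy --onedrive-no-versions /local/path business:path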
        +
        +Properties:
        +
        +- Config:      no_versions
        +- Env Var:     RCLONE_ONEDRIVE_NO_VERSIONS
        +- Type:        bool
        +- Default:     false
        +
        +#### --onedrive-link-scope
        +
        +Set the scope of the links created by the link command.
        +
        +Properties:
        +
        +- Config:      link_scope
        +- Env Var:     RCLONE_ONEDRIVE_LINK_SCOPE
        +- Type:        string
        +- Default:     "anonymous"
        +- Examples:
        +    - "anonymous"
        +        - Anyone with the link has access, without needing to sign in.
        +        - This may include people outside of your organization.
        +        - Anonymous link support may be disabled by an administrator.
        +    - "organization"
        +        - Anyone signed into your organization (tenant) can use the link to get access.
        +        - Only available in OneDrive for Business and SharePoint.
        +
        +#### --onedrive-link-type
        +
        +Set the type of the links created by the link command.
        +
        +Properties:
        +
        +- Config:      link_type
        +- Env Var:     RCLONE_ONEDRIVE_LINK_TYPE
        +- Type:        string
        +- Default:     "view"
        +- Examples:
        +    - "view"
        +        - Creates a read-only link to the item.
        +    - "edit"
        +        - Creates a read-write link to the item.
        +    - "embed"
        +        - Creates an embeddable link to the item.
        +
        +#### --onedrive-link-password
        +
        +Set the password for links created by the link command.
        +
        +At the time of writing this only works with OneDrive personal paid accounts.
        +
        +
        +Properties:
        +
        +- Config:      link_password
        +- Env Var:     RCLONE_ONEDRIVE_LINK_PASSWORD
        +- Type:        string
        +- Required:    false
        +
        +#### --onedrive-hash-type
        +
        +Specify the hash in use for the backend.
        +
        +This specifies the hash type in use. If set to "auto" it will use the
        +default hash which is QuickXorHash.
        +
        +Before rclone 1.62 an SHA1 hash was used by default for Onedrive
        +Personal. For 1.62 and later the default is to use a QuickXorHash for
        +all onedrive types. If an SHA1 hash is desired then set this option
        +accordingly.
        +
        +From July 2023 QuickXorHash will be the only available hash for
        +both OneDrive for Business and OneDrive Personal.
        +
        +This can be set to "none" to not use any hashes.
        +
        +If the hash requested does not exist on the object, it will be
        +returned as an empty string which is treated as a missing hash by
        +rclone.
        +
        +
        +Properties:
        +
        +- Config:      hash_type
        +- Env Var:     RCLONE_ONEDRIVE_HASH_TYPE
        +- Type:        string
        +- Default:     "auto"
        +- Examples:
        +    - "auto"
        +        - Rclone chooses the best hash
        +    - "quickxor"
        +        - QuickXor
        +    - "sha1"
        +        - SHA1
        +    - "sha256"
        +        - SHA256
        +    - "crc32"
        +        - CRC32
        +    - "none"
        +        - None - don't use any hashes
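        +
        +For example, a sketch of listing QuickXorHash values with `rclone hashsum`
        +(quickxor is one of its supported hash names; the remote is illustrative):
        +
        +    rclone hashsum quickxor remote:path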
        +
        +#### --onedrive-av-override
        +
        +Allows download of files the server thinks have a virus.
        +
        +The onedrive/sharepoint server may check files uploaded with an Anti
        +Virus checker. If it detects any potential viruses or malware it will
        +block download of the file.
        +
        +In this case you will see a message like this
        +
        +    server reports this file is infected with a virus - use --onedrive-av-override to download anyway: Infected (name of virus): 403 Forbidden: 
        +
        +If you are 100% sure you want to download this file anyway then use
        +the --onedrive-av-override flag, or av_override = true in the config
        +file.
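        +
        +For example (a sketch; the file name is illustrative only):
        +
        +    rclone copy --onedrive-av-override remote:flagged-file.zip /tmp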
        +
        +
        +Properties:
        +
        +- Config:      av_override
        +- Env Var:     RCLONE_ONEDRIVE_AV_OVERRIDE
        +- Type:        bool
        +- Default:     false
        +
        +#### --onedrive-encoding
        +
        +The encoding for the backend.
        +
        +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
        +
        +Properties:
        +
        +- Config:      encoding
        +- Env Var:     RCLONE_ONEDRIVE_ENCODING
        +- Type:        MultiEncoder
        +- Default:     Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot
        +
        +
        +
        +## Limitations
        +
        +If you don't use rclone for 90 days the refresh token will
        +expire. This will result in authorization problems. This is easy to
        +fix by running the `rclone config reconnect remote:` command to get a
        +new token and refresh token.
        +
        +### Naming
        +
        +Note that OneDrive is case insensitive so you can't have a
        +file called "Hello.doc" and one called "hello.doc".
        +
        +There are quite a few characters that can't be in OneDrive file
        +names.  These can't occur on Windows platforms, but on non-Windows
        +platforms they are common.  Rclone will map these names to and from an
        +identical looking unicode equivalent.  For example, if a file has a `?`
        +in it, it will be mapped to `？` instead.
        +
        +### File sizes
        +
        +The largest allowed file size is 250 GiB for both OneDrive Personal and OneDrive for Business [(Updated 13 Jan 2021)](https://support.microsoft.com/en-us/office/invalid-file-names-and-file-types-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa?ui=en-us&rs=en-us&ad=us#individualfilesize).
        +
        +### Path length
        +
        +The entire path, including the file name, must contain fewer than 400 characters for OneDrive, OneDrive for Business and SharePoint Online. If you are encrypting file and folder names with rclone, you may want to pay attention to this limitation because the encrypted names are typically longer than the original ones.
        +
        +### Number of files
        +
        +OneDrive seems to be OK with at least 50,000 files in a folder, but at
        +100,000 rclone will get errors listing the directory like `couldn’t
        +list files: UnknownError:`.  See
        +[#2707](https://github.com/rclone/rclone/issues/2707) for more info.
        +
        +An official document about the limitations for different types of OneDrive can be found [here](https://support.office.com/en-us/article/invalid-file-names-and-file-types-in-onedrive-onedrive-for-business-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa).
        +
        +## Versions
        +
        +Every change in a file on OneDrive causes the service to create a new
        +version of the file.  This counts against a user's quota.  For
        +example changing the modification time of a file creates a second
        +version, so the file apparently uses twice the space.
        +
        +For example the `copy` command is affected by this as rclone copies
        +the file and then afterwards sets the modification time to match the
        +source file which uses another version.
        +
        +You can use the `rclone cleanup` command (see below) to remove all old
        +versions.
        +
        +Or you can set the `no_versions` parameter to `true` and rclone will
        +remove versions after operations which create new versions. This takes
        +extra transactions so only enable it if you need it.
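        +
        +A minimal sketch of the corresponding config file stanza (the remote
        +name `remote` is an example):
        +
        +    [remote]
        +    type = onedrive
        +    no_versions = true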
        +
        +**Note** At the time of writing Onedrive Personal creates versions
        +(but not for setting the modification time) but the API for removing
        +them returns "API not found" so cleanup and `no_versions` should not
        +be used on Onedrive Personal.
        +
        +### Disabling versioning
        +
        +Starting October 2018, users will no longer be able to
        +disable versioning by default. This is because Microsoft has brought
        +an
        +[update](https://techcommunity.microsoft.com/t5/Microsoft-OneDrive-Blog/New-Updates-to-OneDrive-and-SharePoint-Team-Site-Versioning/ba-p/204390)
        +to the mechanism. To change this new default setting, a PowerShell
        +command is required to be run by a SharePoint admin. If you are an
        +admin, you can run these commands in PowerShell to change that
        +setting:
        +
        +1. `Install-Module -Name Microsoft.Online.SharePoint.PowerShell` (in case you haven't installed this already)
        +2. `Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking`
        +3. `Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM` (replacing `YOURSITE`, `YOU`, `YOURSITE.COM` with the actual values; this will prompt for your credentials)
        +4. `Set-SPOTenant -EnableMinimumVersionRequirement $False`
        +5. `Disconnect-SPOService` (to disconnect from the server)
        +
        +*Below are the steps for normal users to disable versioning. If you don't see the "No Versioning" option, make sure the above requirements are met.*
        +
        +User [Weropol](https://github.com/Weropol) has found a method to disable
        +versioning on OneDrive
        +
        +1. Open the settings menu by clicking on the gear symbol at the top of the OneDrive Business page.
        +2. Click Site settings.
        +3. Once on the Site settings page, navigate to Site Administration > Site libraries and lists.
        +4. Click Customize "Documents".
        +5. Click General Settings > Versioning Settings.
        +6. Under Document Version History select the option No versioning.
        +Note: This will disable the creation of new file versions, but will not remove any previous versions. Your documents are safe.
        +7. Apply the changes by clicking OK.
        +8. Use rclone to upload or modify files. (I also use the --no-update-modtime flag)
        +9. Restore the versioning settings after using rclone. (Optional)
        +
        +## Cleanup
        +
        +OneDrive supports `rclone cleanup` which causes rclone to look through
        +every file under the path supplied and delete all versions but the
        +current version. Because this involves traversing all the files, then
        +querying each file for versions it can be quite slow. Rclone does
        +`--checkers` tests in parallel. The command also supports `--interactive`/`-i`
        +or `--dry-run` which is a great way to see what it would do.
        +
        +    rclone cleanup --interactive remote:path/subdir # interactively remove all old versions for path/subdir
        +    rclone cleanup remote:path/subdir               # unconditionally remove all old versions for path/subdir
        +
        +**NB** Onedrive personal can't currently delete versions
        +
        +## Troubleshooting ##
        +
        +### Excessive throttling or blocked on SharePoint
        +
        +If you experience excessive throttling or are being blocked on SharePoint then it may help to set the user agent explicitly with a flag like this: `--user-agent "ISV|rclone.org|rclone/v1.55.1"`
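        +
        +For example (a sketch; source and destination are illustrative):
        +
        +    rclone sync --user-agent "ISV|rclone.org|rclone/v1.55.1" /local/path remote:path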
        +
        +The specific details can be found in the Microsoft document: [Avoid getting throttled or blocked in SharePoint Online](https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online#how-to-decorate-your-http-traffic-to-avoid-throttling)
        +
        +### Unexpected file size/hash differences on Sharepoint ####
        +
        +It is a
        +[known](https://github.com/OneDrive/onedrive-api-docs/issues/935#issuecomment-441741631)
        +issue that Sharepoint (not OneDrive or OneDrive for Business) silently modifies
        +uploaded files, mainly Office files (.docx, .xlsx, etc.), causing file size and
        +hash checks to fail. There are also other situations that will cause OneDrive to
        +report inconsistent file sizes. To use rclone with such
        +affected files on Sharepoint, you
        +may disable these checks with the following command line arguments:
        +
        +
        +    --ignore-checksum --ignore-size
        +
        
        +Alternatively, if you have write access to the OneDrive files, it may be possible
        +to fix this problem for certain files, by attempting the steps below.
        +Open the web interface for [OneDrive](https://onedrive.live.com) and find the
        +affected files (which will be in the error messages/log for rclone). Simply click on
        +each of these files, causing OneDrive to open them on the web. This will cause each
        +file to be converted in place to a format that is functionally equivalent
        +but which will no longer trigger the size discrepancy. Once all problematic files
        +are converted you will no longer need the ignore options above.
        +
        +### Replacing/deleting existing files on Sharepoint gets "item not found" ####
        +
        +It is a [known](https://github.com/OneDrive/onedrive-api-docs/issues/1068) issue
        +that Sharepoint (not OneDrive or OneDrive for Business) may return "item not
        +found" errors when users try to replace or delete uploaded files; this seems to
        +mainly affect Office files (.docx, .xlsx, etc.) and web files (.html, .aspx, etc.). As a workaround, you may use
        +the `--backup-dir <BACKUP_DIR>` command line argument so rclone moves the
        +files to be replaced/deleted into a given backup directory (instead of directly
        +replacing/deleting them). For example, to instruct rclone to move the files into
        +the directory `rclone-backup-dir` on backend `mysharepoint`, you may use:
        +
        +
        +    --backup-dir mysharepoint:rclone-backup-dir
        +
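        +For example, a full command using this workaround might look like this
        +(paths are illustrative only):
        +
        +    rclone sync /home/source mysharepoint:docs --backup-dir mysharepoint:rclone-backup-dir
        +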
        
        +### access\_denied (AADSTS65005) ####
        +
        +
        +    Error: access_denied Code: AADSTS65005 Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned.
        +
        
        +This means that rclone can't use the OneDrive for Business API with your account. You can't do much about it, maybe write an email to your admins.
        +
        +However, there are other ways to interact with your OneDrive account. Have a look at the WebDAV backend: https://rclone.org/webdav/#sharepoint
        +
        +### invalid\_grant (AADSTS50076) ####
        +
        +
        +    Error: invalid_grant Code: AADSTS50076 Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '...'.
        +
        
        +If you see the error above after enabling multi-factor authentication for your account, you can fix it by refreshing your OAuth refresh token. To do that, run `rclone config`, and choose to edit your OneDrive backend. Then, you don't need to actually make any changes until you reach this question: `Already have a token - refresh?`. For this question, answer `y` and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend.
        +
        +### Invalid request when making public links ####
        +
        +On Sharepoint and OneDrive for Business, `rclone link` may return an "Invalid
        +request" error. A possible cause is that the organisation admin didn't allow
        +public links to be made for the organisation/sharepoint library. To fix the
        +permissions as an admin, take a look at the docs:
        +[1](https://docs.microsoft.com/en-us/sharepoint/turn-external-sharing-on-or-off),
        +[2](https://support.microsoft.com/en-us/office/set-up-and-manage-access-requests-94b26e0b-2822-49d4-929a-8455698654b3).
        +
        +### Can not access `Shared with me` files
        +
        +Shared with me files are not supported by rclone [currently](https://github.com/rclone/rclone/issues/4062), but there is a workaround:
        +
        +1. Visit [https://onedrive.live.com](https://onedrive.live.com/)
        +2. Right click an item in `Shared`, then click `Add shortcut to My files` in the context menu
        +    ![make_shortcut](https://user-images.githubusercontent.com/60313789/206118040-7e762b3b-aa61-41a1-8649-cc18889f3572.png "Screenshot (Shared with me)")
        +3. The shortcut will appear in `My files`; you can access it with rclone, and it behaves like a normal folder/file.
        +    ![in_my_files](https://i.imgur.com/0S8H3li.png "Screenshot (My Files)")
        +    ![rclone_mount](https://i.imgur.com/2Iq66sW.png "Screenshot (rclone mount)")
        +
        +### Live Photos uploaded from iOS (small video clips in .heic files)
        +
        +The iOS OneDrive app introduced [upload and storage](https://techcommunity.microsoft.com/t5/microsoft-onedrive-blog/live-photos-come-to-onedrive/ba-p/1953452) 
        +of [Live Photos](https://support.apple.com/en-gb/HT207310) in 2020. 
        +The usage and download of these uploaded Live Photos is unfortunately still work-in-progress 
        +and this introduces several issues when copying, synchronising and mounting – both in rclone and in the native OneDrive client on Windows.
        +
        +The root cause can easily be seen if you locate one of your Live Photos in the OneDrive web interface. 
        +Then download the photo from the web interface. You will then see that the size of downloaded .heic file is smaller than the size displayed in the web interface. 
        +The downloaded file is smaller because it only contains a single frame (still photo) extracted from the Live Photo (movie) stored in OneDrive.
        +
        +The different sizes will cause `rclone copy/sync` to repeatedly recopy unmodified photos something like this:
        +
        +    DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667)
        +    DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK
        +    INFO  : 20230203_123826234_iOS.heic: Copied (replaced existing)
        +
        +These recopies can be worked around by adding `--ignore-size`. Please note that this workaround only syncs the still picture, not the movie clip, 
        +and relies on modification dates being correctly updated on all files in all situations.
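        +
        +For example (a sketch; the remote and paths are illustrative):
        +
        +    rclone copy --ignore-size remote:Pictures /local/Pictures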
        +
        +The different sizes will also cause `rclone check` to report size errors something like this:
        +
        +    ERROR : 20230203_123826234_iOS.heic: sizes differ
        +
        +These check errors can be suppressed by adding `--ignore-size`.
        +
        +The different sizes will also cause `rclone mount` to fail downloading with an error something like this:
        +
        +    ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF
        +
        +or like this when using `--cache-mode=full`:
        +
        +    INFO  : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
        +    ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
        +
        +#  OpenDrive
        +
        +Paths are specified as `remote:path`
        +
        +Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
        +
        +## Configuration
        +
        +Here is an example of how to make a remote called `remote`.  First run:
        +
        +     rclone config
        +
        +This will guide you through an interactive setup process:
        +
        +    n) New remote
        +    d) Delete remote
        +    q) Quit config
        +    e/n/d/q> n
        +    name> remote
        +    Type of storage to configure.
        +    Choose a number from below, or type in your own value
        +    [snip]
        +    XX / OpenDrive
        +       \ "opendrive"
        +    [snip]
        +    Storage> opendrive
        +    Username
        +    username>
        +    Password
        +    y) Yes type in my own password
        +    g) Generate random password
        +    y/g> y
        +    Enter the password:
        +    password:
        +    Confirm the password:
        +    password:
        +    --------------------
        +    [remote]
        +    username =
        +    password = *** ENCRYPTED ***
        +    --------------------
        +    y) Yes this is OK
        +    e) Edit this remote
        +    d) Delete this remote
        +    y/e/d> y
        +
        +List directories in top level of your OpenDrive
        +
        +    rclone lsd remote:
        +
        +List all the files in your OpenDrive
        +
        +    rclone ls remote:
        +
        +To copy a local directory to an OpenDrive directory called backup
        +
        +    rclone copy /home/source remote:backup
        +
        +### Modified time and MD5SUMs
        +
        +OpenDrive allows modification times to be set on objects accurate to 1
        +second. These will be used to detect whether objects need syncing or
        +not.
        +
        +### Restricted filename characters
        +
        +| Character | Value | Replacement |
        +| --------- |:-----:|:-----------:|
        +| NUL       | 0x00  | ␀           |
        +| /         | 0x2F  | ／          |
        +| "         | 0x22  | ＂          |
        +| *         | 0x2A  | ＊          |
        +| :         | 0x3A  | ：          |
        +| <         | 0x3C  | ＜          |
        +| >         | 0x3E  | ＞          |
        +| ?         | 0x3F  | ？          |
        +| \         | 0x5C  | ＼          |
        +| \|        | 0x7C  | ｜          |
        +
        +File names can also not begin or end with the following characters.
        +These only get replaced if they are the first or last character in the name:
        +
        +| Character | Value | Replacement |
        +| --------- |:-----:|:-----------:|
        +| SP        | 0x20  | ␠           |
        +| HT        | 0x09  | ␉           |
        +| LF        | 0x0A  | ␊           |
        +| VT        | 0x0B  | ␋           |
        +| CR        | 0x0D  | ␍           |
        +
        +
        +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
        +as they can't be used in JSON strings.
        +
        +
        +### Standard options
        +
        +Here are the Standard options specific to opendrive (OpenDrive).
        +
        +#### --opendrive-username
        +
        +Username.
        +
        +Properties:
        +
        +- Config:      username
        +- Env Var:     RCLONE_OPENDRIVE_USERNAME
        +- Type:        string
        +- Required:    true
        +
        +#### --opendrive-password
        +
        +Password.
        +
        +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
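        +
        +For example, you can generate the obscured form of a password with:
        +
        +    rclone obscure 'yourpassword'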
        +
        +Properties:
        +
        +- Config:      password
        +- Env Var:     RCLONE_OPENDRIVE_PASSWORD
        +- Type:        string
        +- Required:    true
        +
        +### Advanced options
        +
        +Here are the Advanced options specific to opendrive (OpenDrive).
        +
        +#### --opendrive-encoding
        +
        +The encoding for the backend.
        +
        +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
        +
        +Properties:
        +
        +- Config:      encoding
        +- Env Var:     RCLONE_OPENDRIVE_ENCODING
        +- Type:        MultiEncoder
        +- Default:     Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot
        +
        +#### --opendrive-chunk-size
        +
        +Files will be uploaded in chunks this size.
        +
        +Note that these chunks are buffered in memory so increasing them will
        +increase memory use.
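        +
        +For example (a sketch, trading extra memory for fewer upload requests;
        +the paths are illustrative):
        +
        +    rclone copy --opendrive-chunk-size 64Mi /home/source remote:backup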
        +
        +Properties:
        +
        +- Config:      chunk_size
        +- Env Var:     RCLONE_OPENDRIVE_CHUNK_SIZE
        +- Type:        SizeSuffix
        +- Default:     10Mi
        +
        +
        +
        +## Limitations
        +
        +Note that OpenDrive is case insensitive so you can't have a
        +file called "Hello.doc" and one called "hello.doc".
        +
        +There are quite a few characters that can't be in OpenDrive file
        +names.  These can't occur on Windows platforms, but on non-Windows
        +platforms they are common.  Rclone will map these names to and from an
        +identical looking unicode equivalent.  For example, if a file has a `?`
        +in it, it will be mapped to `？` instead.
        +
        +`rclone about` is not supported by the OpenDrive backend. Backends without
        +this capability cannot determine free space for an rclone mount or
        +use policy `mfs` (most free space) as a member of an rclone union
        +remote.
        +
        +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
        +
        +#  Oracle Object Storage
        +- [Oracle Object Storage Overview](https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/objectstorageoverview.htm)
        +- [Oracle Object Storage FAQ](https://www.oracle.com/cloud/storage/object-storage/faq/)
        +- [Oracle Object Storage Limits](https://docs.oracle.com/en-us/iaas/Content/Resources/Assets/whitepapers/oci-object-storage-best-practices.pdf)
        +
        +Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command.)  You may put subdirectories in 
        +too, e.g. `remote:bucket/path/to/dir`.
        +
        +Sample command to transfer local artifacts to remote:bucket in oracle object storage:
        +
        +`rclone -vvv  --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64  --retries 2  --oos-chunk-size 10Mi --oos-upload-concurrency 10000  --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts  remote:bucket`
        +
        +## Configuration
        +
        +Here is an example of making an oracle object storage configuration. `rclone config` walks you 
        +through it.
        +
        +Here is an example of how to make a remote called `remote`.  First run:
        +
        +     rclone config
        +
        +This will guide you through an interactive setup process:
        +
        +
        +
        +    n) New remote
        +    d) Delete remote
        +    r) Rename remote
        +    c) Copy remote
        +    s) Set configuration password
        +    q) Quit config
        +    e/n/d/r/c/s/q> n
        +
        +    Enter name for new remote.
        +    name> remote
        +
        +    Option Storage.
        +    Type of storage to configure.
        +    Choose a number from below, or type in your own value.
        +    [snip]
        +    XX / Oracle Cloud Infrastructure Object Storage
        +       \ (oracleobjectstorage)
        +    Storage> oracleobjectstorage
        +
        +    Option provider.
        +    Choose your Auth Provider
        +    Choose a number from below, or type in your own string value.
        +    Press Enter for the default (env_auth).
        +     1 / automatically pickup the credentials from runtime(env), first one to provide auth wins
        +       \ (env_auth)
        +       / use an OCI user and an API key for authentication.
        +     2 | you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key.
        +       | https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
        +       \ (user_principal_auth)
        +       / use instance principals to authorize an instance to make API calls.
        +     3 | each instance has its own identity, and authenticates using the certificates that are read from instance metadata.
        +       | https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
        +       \ (instance_principal_auth)
        +     4 / use resource principals to make API calls
        +       \ (resource_principal_auth)
        +     5 / no credentials needed, this is typically for reading public buckets
        +       \ (no_auth)
        +    provider> 2
        +
        +    Option namespace.
        +    Object storage namespace
        +    Enter a value.
        +    namespace> idbamagbg734
        +
        +    Option compartment.
        +    Object storage compartment OCID
        +    Enter a value.
        +    compartment> ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
        +
        +    Option region.
        +    Object storage Region
        +    Enter a value.
        +    region> us-ashburn-1
        +
        +    Option endpoint.
        +    Endpoint for Object storage API.
        +    Leave blank to use the default endpoint for the region.
        +    Enter a value. Press Enter to leave empty.
        +    endpoint>
        +
        +    Option config_file.
        +    Full Path to OCI config file
        +    Choose a number from below, or type in your own string value.
        +    Press Enter for the default (~/.oci/config).
        +     1 / oci configuration file location
        +       \ (~/.oci/config)
        +    config_file> /etc/oci/dev.conf
        +
        +    Option config_profile.
        +    Profile name inside OCI config file
        +    Choose a number from below, or type in your own string value.
        +    Press Enter for the default (Default).
        +     1 / Use the default profile
        +       \ (Default)
        +    config_profile> Test
        +
        +    Edit advanced config?
        +    y) Yes
        +    n) No (default)
        +    y/n> n
        +
        +    Configuration complete.
        +    Options:
        +    - type: oracleobjectstorage
        +    - namespace: idbamagbg734
        +    - compartment: ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
        +    - region: us-ashburn-1
        +    - provider: user_principal_auth
        +    - config_file: /etc/oci/dev.conf
        +    - config_profile: Test
        +    Keep this "remote" remote?
        +    y) Yes this is OK (default)
        +    e) Edit this remote
        +    d) Delete this remote
        +    y/e/d> y
        +
        
        +See all buckets
        +
        +    rclone lsd remote:
        +
        +Create a new bucket
        +
        +    rclone mkdir remote:bucket
        +
        +List the contents of a bucket
        +
        +    rclone ls remote:bucket
        +    rclone ls remote:bucket --max-depth 1
        +
        +## Authentication Providers 
        +
        +OCI has various authentication methods. To learn more about authentication methods please refer to [oci authentication 
        +methods](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm). 
        +These choices can be specified in the rclone config file.
        +
        +Rclone supports the following OCI authentication providers:
        +
        +    User Principal
        +    Instance Principal
        +    Resource Principal
        +    No authentication
        +
        +### User Principal
        +Sample rclone config file for Authentication Provider User Principal:
        +
        +    [oos]
        +    type = oracleobjectstorage
        +    namespace = id<redacted>34
        +    compartment = ocid1.compartment.oc1..aa<redacted>ba
        +    region = us-ashburn-1
        +    provider = user_principal_auth
        +    config_file = /home/opc/.oci/config
        +    config_profile = Default
        +
        +Advantages:
        +- One can use this method from any server within OCI or on-premises or from another cloud provider.
        +
        +Considerations:
        +- You need to configure the user's privileges / policy to allow access to object storage.
        +- Overhead of managing users and keys.
        +- If the user is deleted, the config file will no longer work and may cause automation regressions that use the user's credentials.
        +
        +###  Instance Principal
        +An OCI compute instance can be authorized to use rclone by using its identity and certificates as an instance principal. 
        +With this approach no credentials have to be stored and managed.
        +
        +Sample rclone configuration file for Authentication Provider Instance Principal:
        +
        +    [opc@rclone ~]$ cat ~/.config/rclone/rclone.conf
        +    [oos]
        +    type = oracleobjectstorage
        +    namespace = id<redacted>fn
        +    compartment = ocid1.compartment.oc1..aa<redacted>k7a
        +    region = us-ashburn-1
        +    provider = instance_principal_auth
        +
        +Advantages:
        +
        +- With instance principals, you don't need to configure user credentials and transfer/save them to disk in your compute 
        +  instances or rotate the credentials.
        +- You don’t need to deal with users and keys.
        +- Greatly helps in automation as you don't have to manage access keys, user private keys, storing them in vault, 
        +  using kms etc.
        +
        +Considerations:
        +
        +- You need to configure a dynamic group having this instance as a member and add a policy allowing that 
        +  dynamic group to read object storage.
        +- Everyone who has access to this machine can execute the CLI commands.
        +- It is applicable to OCI compute instances only. It cannot be used on external instances or resources.
        +
        +### Resource Principal
        +Resource principal auth is very similar to instance principal auth but used for resources that are not 
        +compute instances such as [serverless functions](https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm). 
        +To use resource principal, ensure the rclone process is started with these environment variables set in its process.
        +
        +    export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
        +    export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
        +    export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem
        +    export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token
        +
        +Sample rclone configuration file for Authentication Provider Resource Principal:
        +
        +    [oos]
        +    type = oracleobjectstorage
        +    namespace = id<redacted>34
        +    compartment = ocid1.compartment.oc1..aa<redacted>ba
        +    region = us-ashburn-1
        +    provider = resource_principal_auth
        +
        +### No authentication
        +Public buckets do not require any authentication mechanism to read objects.
        +Sample rclone configuration file for No authentication:
        +    
        +    [oos]
        +    type = oracleobjectstorage
        +    namespace = id<redacted>34
        +    compartment = ocid1.compartment.oc1..aa<redacted>ba
        +    region = us-ashburn-1
        +    provider = no_auth
        +
        +## Options
        +### Modified time
        +
        +The modified time is stored as metadata on the object as
        +`opc-meta-mtime` as floating point since the epoch, accurate to 1 ns.
        +
        +If the modification time needs to be updated rclone will attempt to perform a server
        +side copy to update the modification time if the object can be copied in a single part.
        +If the object is larger than 5 GiB, it will be uploaded rather than copied.
        +
        +Note that reading this from the object takes an additional `HEAD` request as the metadata
        +isn't returned in object listings.
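        +
        +For example, a listing that shows modification times, such as `rclone lsl`
        +below, will typically cost one extra HEAD request per object (the remote
        +name `oos:` is an example):
        +
        +    rclone lsl oos:bucket/path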
        +
        +### Multipart uploads
        +
        +rclone supports multipart uploads with OOS which means that it can
        +upload files bigger than 5 GiB.
        +
        +Note that files uploaded *both* with multipart upload *and* through
        +crypt remotes do not have MD5 sums.
        +
        +rclone switches from single part uploads to multipart uploads at the
        +point specified by `--oos-upload-cutoff`.  This can be a maximum of 5 GiB
        +and a minimum of 0 (ie always upload multipart files).
        +
        +The chunk sizes used in the multipart upload are specified by
        +`--oos-chunk-size` and the number of chunks uploaded concurrently is
        +specified by `--oos-upload-concurrency`.
        +
        +Multipart uploads will use `--transfers` * `--oos-upload-concurrency` *
        +`--oos-chunk-size` extra memory.  Single part uploads do not use extra
        +memory.
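        +
        +For example, with the default `--transfers 4`, `--oos-upload-concurrency 10`
        +and `--oos-chunk-size 5Mi`, multipart uploads can buffer up to
        +4 * 10 * 5 MiB = 200 MiB of extra memory.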
        +
        +Single part transfers can be faster than multipart transfers or slower
        +depending on your latency from oos - the more latency, the more likely
        +single part transfers will be faster.
        +
        +Increasing `--oos-upload-concurrency` will increase throughput (8 would
        +be a sensible value) and increasing `--oos-chunk-size` also increases
        +throughput (16M would be sensible).  Increasing either of these will
        +use more memory.  The default values are high enough to gain most of
        +the possible performance without using too much memory.
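        +
        +For example, a sketch of an upload tuned with those suggested values
        +(paths and remote name are illustrative):
        +
        +    rclone copy --oos-upload-concurrency 8 --oos-chunk-size 16Mi /local/path remote:bucket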
        +
        +
        +### Standard options
        +
        +Here are the Standard options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).
        +
        +#### --oos-provider
         
        -Option provider.
         Choose your Auth Provider
        -Choose a number from below, or type in your own string value.
        -Press Enter for the default (env_auth).
        - 1 / automatically pickup the credentials from runtime(env), first one to provide auth wins
        -   \ (env_auth)
        -   / use an OCI user and an API key for authentication.
        - 2 | you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key.
        -   | https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
        -   \ (user_principal_auth)
        -   / use instance principals to authorize an instance to make API calls. 
        - 3 | each instance has its own identity, and authenticates using the certificates that are read from instance metadata. 
        -   | https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
        -   \ (instance_principal_auth)
        - 4 / use resource principals to make API calls
        -   \ (resource_principal_auth)
        - 5 / no credentials needed, this is typically for reading public buckets
        -   \ (no_auth)
        -provider> 2
         
        -Option namespace.
        +Properties:
        +
        +- Config:      provider
        +- Env Var:     RCLONE_OOS_PROVIDER
        +- Type:        string
        +- Default:     "env_auth"
        +- Examples:
        +    - "env_auth"
        +        - automatically pickup the credentials from runtime(env), first one to provide auth wins
        +    - "user_principal_auth"
        +        - use an OCI user and an API key for authentication.
        +        - you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key.
        +        - https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
        +    - "instance_principal_auth"
        +        - use instance principals to authorize an instance to make API calls. 
        +        - each instance has its own identity, and authenticates using the certificates that are read from instance metadata. 
        +        - https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
        +    - "resource_principal_auth"
        +        - use resource principals to make API calls
        +    - "no_auth"
        +        - no credentials needed, this is typically for reading public buckets
        +
        +#### --oos-namespace
        +
         Object storage namespace
        -Enter a value.
        -namespace> idbamagbg734
         
        -Option compartment.
        +Properties:
        +
        +- Config:      namespace
        +- Env Var:     RCLONE_OOS_NAMESPACE
        +- Type:        string
        +- Required:    true
        +
        +#### --oos-compartment
        +
         Object storage compartment OCID
        -Enter a value.
        -compartment> ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
         
        -Option region.
        +Properties:
        +
        +- Config:      compartment
        +- Env Var:     RCLONE_OOS_COMPARTMENT
        +- Provider:    !no_auth
        +- Type:        string
        +- Required:    true
        +
        +#### --oos-region
        +
         Object storage Region
        -Enter a value.
        -region> us-ashburn-1
         
        -Option endpoint.
        +Properties:
        +
        +- Config:      region
        +- Env Var:     RCLONE_OOS_REGION
        +- Type:        string
        +- Required:    true
        +
        +#### --oos-endpoint
        +
         Endpoint for Object storage API.
        +
         Leave blank to use the default endpoint for the region.
        -Enter a value. Press Enter to leave empty.
        -endpoint> 
         
        -Option config_file.
        -Full Path to OCI config file
        -Choose a number from below, or type in your own string value.
        -Press Enter for the default (~/.oci/config).
        - 1 / oci configuration file location
        -   \ (~/.oci/config)
        -config_file> /etc/oci/dev.conf
        +Properties:
         
        -Option config_profile.
        -Profile name inside OCI config file
        -Choose a number from below, or type in your own string value.
        -Press Enter for the default (Default).
        - 1 / Use the default profile
        -   \ (Default)
        -config_profile> Test
        +- Config:      endpoint
        +- Env Var:     RCLONE_OOS_ENDPOINT
        +- Type:        string
        +- Required:    false
        +
        +#### --oos-config-file
        +
        +Path to OCI config file
        +
        +Properties:
        +
        +- Config:      config_file
        +- Env Var:     RCLONE_OOS_CONFIG_FILE
        +- Provider:    user_principal_auth
        +- Type:        string
        +- Default:     "~/.oci/config"
        +- Examples:
        +    - "~/.oci/config"
        +        - oci configuration file location
        +
        +#### --oos-config-profile
        +
        +Profile name inside the oci config file
        +
        +Properties:
        +
        +- Config:      config_profile
        +- Env Var:     RCLONE_OOS_CONFIG_PROFILE
        +- Provider:    user_principal_auth
        +- Type:        string
        +- Default:     "Default"
        +- Examples:
        +    - "Default"
        +        - Use the default profile
        +
        +### Advanced options
        +
        +Here are the Advanced options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).
        +
        +#### --oos-storage-tier
        +
        +The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm
        +
        +Properties:
        +
        +- Config:      storage_tier
        +- Env Var:     RCLONE_OOS_STORAGE_TIER
        +- Type:        string
        +- Default:     "Standard"
        +- Examples:
        +    - "Standard"
        +        - Standard storage tier, this is the default tier
        +    - "InfrequentAccess"
        +        - InfrequentAccess storage tier
        +    - "Archive"
        +        - Archive storage tier
        +
        +#### --oos-upload-cutoff
        +
        +Cutoff for switching to chunked upload.
        +
        +Any files larger than this will be uploaded in chunks of chunk_size.
        +The minimum is 0 and the maximum is 5 GiB.
        +
        +Properties:
        +
        +- Config:      upload_cutoff
        +- Env Var:     RCLONE_OOS_UPLOAD_CUTOFF
        +- Type:        SizeSuffix
        +- Default:     200Mi
        +
        +#### --oos-chunk-size
        +
        +Chunk size to use for uploading.
        +
        +When uploading files larger than upload_cutoff or files with unknown
        +size (e.g. from "rclone rcat" or uploaded with "rclone mount") they will be uploaded 
        +as multipart uploads using this chunk size.
        +
        +Note that "upload_concurrency" chunks of this size are buffered
        +in memory per transfer.
        +
        +If you are transferring large files over high-speed links and you have
        +enough memory, then increasing this will speed up the transfers.
        +
        +Rclone will automatically increase the chunk size when uploading a
        +large file of known size to stay below the 10,000 chunks limit.
        +
        +Files of unknown size are uploaded with the configured
        +chunk_size. Since the default chunk size is 5 MiB and there can be at
        +most 10,000 chunks, this means that by default the maximum size of
        +a file you can stream upload is 48 GiB.  If you wish to stream upload
        +larger files then you will need to increase chunk_size.
        +
        +Increasing the chunk size decreases the accuracy of the progress
        +statistics displayed with "-P" flag.
        +
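+For example, a hedged sketch (the 10Mi value is illustrative): since the
+limit is 10,000 chunks, a 10 MiB chunk size raises the maximum stream
+upload size to about 97 GiB:
+
+    rclone rcat --oos-chunk-size 10Mi remote:bucket/big.bin < big.bin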
        +
        +Properties:
        +
        +- Config:      chunk_size
        +- Env Var:     RCLONE_OOS_CHUNK_SIZE
        +- Type:        SizeSuffix
        +- Default:     5Mi
        +
        +#### --oos-max-upload-parts
        +
        +Maximum number of parts in a multipart upload.
        +
        +This option defines the maximum number of multipart chunks to use
        +when doing a multipart upload.
        +
+OCI has a maximum parts limit of 10,000 chunks per upload.
+
+Rclone will automatically increase the chunk size when uploading a
+large file of a known size to stay below this limit.
        +
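+For example, at the default 5 MiB chunk size a 100 GiB file would need
+about 20,480 parts, so rclone would raise the chunk size to roughly
+10.24 MiB (100 GiB / 10,000 parts) to stay within the limit.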
        +
        +Properties:
        +
        +- Config:      max_upload_parts
        +- Env Var:     RCLONE_OOS_MAX_UPLOAD_PARTS
        +- Type:        int
        +- Default:     10000
        +
        +#### --oos-upload-concurrency
        +
        +Concurrency for multipart uploads.
        +
        +This is the number of chunks of the same file that are uploaded
        +concurrently.
        +
        +If you are uploading small numbers of large files over high-speed links
        +and these uploads do not fully utilize your bandwidth, then increasing
        +this may help to speed up the transfers.
        +
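+For example, a sketch with an illustrative value:
+
+    rclone copy --oos-upload-concurrency 16 /path/to/bigfile remote:bucket
+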
        +Properties:
        +
        +- Config:      upload_concurrency
        +- Env Var:     RCLONE_OOS_UPLOAD_CONCURRENCY
        +- Type:        int
        +- Default:     10
        +
        +#### --oos-copy-cutoff
        +
        +Cutoff for switching to multipart copy.
        +
        +Any files larger than this that need to be server-side copied will be
        +copied in chunks of this size.
        +
        +The minimum is 0 and the maximum is 5 GiB.
        +
        +Properties:
        +
        +- Config:      copy_cutoff
        +- Env Var:     RCLONE_OOS_COPY_CUTOFF
        +- Type:        SizeSuffix
        +- Default:     4.656Gi
        +
        +#### --oos-copy-timeout
        +
        +Timeout for copy.
        +
+Copy is an asynchronous operation; specify a timeout to wait for the copy to succeed.
        +
        +
        +Properties:
        +
        +- Config:      copy_timeout
        +- Env Var:     RCLONE_OOS_COPY_TIMEOUT
        +- Type:        Duration
        +- Default:     1m0s
        +
        +#### --oos-disable-checksum
        +
        +Don't store MD5 checksum with object metadata.
        +
        +Normally rclone will calculate the MD5 checksum of the input before
        +uploading it so it can add it to metadata on the object. This is great
        +for data integrity checking but can cause long delays for large files
        +to start uploading.
        +
        +Properties:
        +
        +- Config:      disable_checksum
        +- Env Var:     RCLONE_OOS_DISABLE_CHECKSUM
        +- Type:        bool
        +- Default:     false
        +
        +#### --oos-encoding
        +
        +The encoding for the backend.
        +
        +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
        +
        +Properties:
        +
        +- Config:      encoding
        +- Env Var:     RCLONE_OOS_ENCODING
        +- Type:        MultiEncoder
        +- Default:     Slash,InvalidUtf8,Dot
        +
        +#### --oos-leave-parts-on-error
        +
+If true, avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery.
        +
        +It should be set to true for resuming uploads across different sessions.
        +
        +WARNING: Storing parts of an incomplete multipart upload counts towards space usage on object storage and will add
        +additional costs if not cleaned up.
        +
        +
        +Properties:
        +
        +- Config:      leave_parts_on_error
        +- Env Var:     RCLONE_OOS_LEAVE_PARTS_ON_ERROR
        +- Type:        bool
        +- Default:     false
        +
        +#### --oos-attempt-resume-upload
        +
+If true, attempt to resume a previously started multipart upload for the object.
+This can speed up multipart transfers by resuming uploads from a past session.
+
+WARNING: If the chunk size differs from that of the past incomplete session, the
+resumed multipart upload is aborted and a new multipart upload is started with
+the new chunk size.
+
+The flag leave_parts_on_error must be true for resuming to work and to skip
+parts that were already uploaded successfully.
        +
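+For example, a hedged sketch of a resumable transfer using the two options
+documented above (paths are illustrative):
+
+    rclone copy --oos-leave-parts-on-error --oos-attempt-resume-upload /path/to/big.bin remote:bucket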
        +
        +Properties:
        +
        +- Config:      attempt_resume_upload
        +- Env Var:     RCLONE_OOS_ATTEMPT_RESUME_UPLOAD
        +- Type:        bool
        +- Default:     false
        +
        +#### --oos-no-check-bucket
        +
        +If set, don't attempt to check the bucket exists or create it.
        +
        +This can be useful when trying to minimise the number of transactions
        +rclone does if you know the bucket exists already.
        +
        +It can also be needed if the user you are using does not have bucket
        +creation permissions.
        +
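+For example (paths are illustrative):
+
+    rclone copy --oos-no-check-bucket /path/to/dir remote:existing-bucket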
        +
        +Properties:
        +
        +- Config:      no_check_bucket
        +- Env Var:     RCLONE_OOS_NO_CHECK_BUCKET
        +- Type:        bool
        +- Default:     false
        +
        +#### --oos-sse-customer-key-file
        +
+To use SSE-C, a file containing the base64-encoded string of the AES-256 encryption key associated
+with the object. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
        +
        +Properties:
        +
        +- Config:      sse_customer_key_file
        +- Env Var:     RCLONE_OOS_SSE_CUSTOMER_KEY_FILE
        +- Type:        string
        +- Required:    false
        +- Examples:
        +    - ""
        +        - None
        +
        +#### --oos-sse-customer-key
        +
        +To use SSE-C, the optional header that specifies the base64-encoded 256-bit encryption key to use to
+encrypt or decrypt the data. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is
        +needed. For more information, see Using Your Own Keys for Server-Side Encryption 
        +(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm)
        +
        +Properties:
        +
        +- Config:      sse_customer_key
        +- Env Var:     RCLONE_OOS_SSE_CUSTOMER_KEY
        +- Type:        string
        +- Required:    false
        +- Examples:
        +    - ""
        +        - None
        +
        +#### --oos-sse-customer-key-sha256
        +
+If using SSE-C, the optional header that specifies the base64-encoded SHA256 hash of the encryption
+key. This value is used to check the integrity of the encryption key. See Using Your Own Keys for
+Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
        +
        +Properties:
        +
        +- Config:      sse_customer_key_sha256
        +- Env Var:     RCLONE_OOS_SSE_CUSTOMER_KEY_SHA256
        +- Type:        string
        +- Required:    false
        +- Examples:
        +    - ""
        +        - None
        +
        +#### --oos-sse-kms-key-id
        +
+If using your own master key in vault, this header specifies the
        +OCID (https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm) of a master encryption key used to call
        +the Key Management service to generate a data encryption key or to encrypt or decrypt a data encryption key.
        +Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
        +
        +Properties:
        +
        +- Config:      sse_kms_key_id
        +- Env Var:     RCLONE_OOS_SSE_KMS_KEY_ID
        +- Type:        string
        +- Required:    false
        +- Examples:
        +    - ""
        +        - None
        +
        +#### --oos-sse-customer-algorithm
        +
        +If using SSE-C, the optional header that specifies "AES256" as the encryption algorithm.
        +Object Storage supports "AES256" as the encryption algorithm. For more information, see
        +Using Your Own Keys for Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
        +
        +Properties:
        +
        +- Config:      sse_customer_algorithm
        +- Env Var:     RCLONE_OOS_SSE_CUSTOMER_ALGORITHM
        +- Type:        string
        +- Required:    false
        +- Examples:
        +    - ""
        +        - None
        +    - "AES256"
        +        - AES256
        +
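+For example, a minimal SSE-C configuration sketch (the key value is a
+placeholder, not a real key; the other values reuse the samples above):
+
+    [oos]
+    type = oracleobjectstorage
+    namespace = id<redacted>34
+    compartment = ocid1.compartment.oc1..aa<redacted>ba
+    region = us-ashburn-1
+    provider = user_principal_auth
+    sse_customer_algorithm = AES256
+    sse_customer_key = <base64-encoded 256-bit key placeholder>
+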
        +## Backend commands
        +
        +Here are the commands specific to the oracleobjectstorage backend.
        +
        +Run them with
        +
        +    rclone backend COMMAND remote:
        +
        +The help below will explain what arguments each command takes.
        +
        +See the [backend](https://rclone.org/commands/rclone_backend/) command for more
        +info on how to pass options and arguments.
        +
        +These can be run on a running backend using the rc command
        +[backend/command](https://rclone.org/rc/#backend-command).
        +
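+For example, the cleanup command described below could be invoked on a
+running rclone like this (a sketch; the bucket name is illustrative):
+
+    rclone rc backend/command command=cleanup fs=oos:bucket -o max-age=24h
+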
        +### rename
        +
        +change the name of an object
        +
        +    rclone backend rename remote: [options] [<arguments>+]
        +
+This command can be used to rename an object.
        +
        +Usage Examples:
        +
        +    rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name
        +
        +
        +### list-multipart-uploads
        +
        +List the unfinished multipart uploads
        +
        +    rclone backend list-multipart-uploads remote: [options] [<arguments>+]
        +
        +This command lists the unfinished multipart uploads in JSON format.
        +
        +    rclone backend list-multipart-uploads oos:bucket/path/to/object
        +
        +It returns a dictionary of buckets with values as lists of unfinished
        +multipart uploads.
        +
+You can call it with no bucket, in which case it lists all buckets, with
+a bucket, or with a bucket and path.
        +
        +    {
        +      "test-bucket": [
        +                {
        +                        "namespace": "test-namespace",
        +                        "bucket": "test-bucket",
        +                        "object": "600m.bin",
        +                        "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
        +                        "timeCreated": "2022-07-29T06:21:16.595Z",
        +                        "storageTier": "Standard"
        +                }
+        ]
+    }
        +
        +
        +### cleanup
        +
        +Remove unfinished multipart uploads.
        +
        +    rclone backend cleanup remote: [options] [<arguments>+]
        +
+This command removes unfinished multipart uploads of age greater than
+max-age, which defaults to 24 hours.
        +
        +Note that you can use --interactive/-i or --dry-run with this command to see what
        +it would do.
        +
        +    rclone backend cleanup oos:bucket/path/to/object
        +    rclone backend cleanup -o max-age=7w oos:bucket/path/to/object
        +
        +Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
         
        -Edit advanced config?
        -y) Yes
        -n) No (default)
        -y/n> n
         
        -Configuration complete.
         Options:
        -- type: oracleobjectstorage
        -- namespace: idbamagbg734
        -- compartment: ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
        -- region: us-ashburn-1
        -- provider: user_principal_auth
        -- config_file: /etc/oci/dev.conf
        -- config_profile: Test
        -Keep this "remote" remote?
        -y) Yes this is OK (default)
        -e) Edit this remote
        -d) Delete this remote
        -y/e/d> y
-
-See all buckets
-
-    rclone lsd remote:
-
-Create a new bucket
-
-    rclone mkdir remote:bucket
-
-List the contents of a bucket
-
-    rclone ls remote:bucket
-    rclone ls remote:bucket --max-depth 1
-
-OCI Authentication Provider
-
-OCI has various authentication methods. To learn more about authentication methods please refer oci authentication methods These choices can be specified in the rclone config file.
-
-Rclone supports the following OCI authentication provider.
-
-• User Principal
-• Instance Principal
-• Resource Principal
-• No authentication
-
-Authentication provider choice: User Principal
-
-Sample rclone config file for Authentication Provider User Principal:
-
-    [oos]
-    type = oracleobjectstorage
-    namespace = id<redacted>34
-    compartment = ocid1.compartment.oc1..aa<redacted>ba
-    region = us-ashburn-1
-    provider = user_principal_auth
-    config_file = /home/opc/.oci/config
-    config_profile = Default
-
-Advantages:
-
-• One can use this method from any server within OCI or on-premises or from other cloud provider.
-
-Considerations:
-
-• you need to configure user’s privileges / policy to allow access to object storage
-• Overhead of managing users and keys.
-• If the user is deleted, the config file will no longer work and may cause automation regressions that use the user's credentials.
-
-Authentication provider choice: Instance Principal
-
-An OCI compute instance can be authorized to use rclone by using it's identity and certificates as an instance principal. With this approach no credentials have to be stored and managed.
-
-Sample rclone configuration file for Authentication Provider Instance Principal:
-
-    [opc@rclone ~]$ cat ~/.config/rclone/rclone.conf
-    [oos]
-    type = oracleobjectstorage
-    namespace = id<redacted>fn
-    compartment = ocid1.compartment.oc1..aa<redacted>k7a
-    region = us-ashburn-1
-    provider = instance_principal_auth
-
-Advantages:
-
-• With instance principals, you don't need to configure user credentials and transfer/ save it to disk in your compute instances or rotate the credentials.
-• You don’t need to deal with users and keys.
-• Greatly helps in automation as you don't have to manage access keys, user private keys, storing them in vault, using kms etc.
-
-Considerations:
-
-• You need to configure a dynamic group having this instance as member and add policy to read object storage to that dynamic group.
-• Everyone who has access to this machine can execute the CLI commands.
-• It is applicable for oci compute instances only. It cannot be used on external instance or resources.
-
-Authentication provider choice: Resource Principal
-
-Resource principal auth is very similar to instance principal auth but used for resources that are not compute instances such as serverless functions. To use resource principal ensure Rclone process is started with these environment variables set in its process.
-
-    export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
-    export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
-    export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem
-    export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token
-
-Sample rclone configuration file for Authentication Provider Resource Principal:
-
-    [oos]
-    type = oracleobjectstorage
-    namespace = id<redacted>34
-    compartment = ocid1.compartment.oc1..aa<redacted>ba
-    region = us-ashburn-1
-    provider = resource_principal_auth
-
-Authentication provider choice: No authentication
-
-Public buckets do not require any authentication mechanism to read objects. Sample rclone configuration file for No authentication:
-
-    [oos]
-    type = oracleobjectstorage
-    namespace = id<redacted>34
-    compartment = ocid1.compartment.oc1..aa<redacted>ba
-    region = us-ashburn-1
-    provider = no_auth
-
-Options
-
-Modified time
-
-The modified time is stored as metadata on the object as opc-meta-mtime as floating point since the epoch, accurate to 1 ns.
-
-If the modification time needs to be updated rclone will attempt to perform a server side copy to update the modification if the object can be copied in a single part. In the case the object is larger than 5Gb, the object will be uploaded rather than copied.
-
-Note that reading this from the object takes an additional HEAD request as the metadata isn't returned in object listings.
-
-Multipart uploads
-
-rclone supports multipart uploads with OOS which means that it can upload files bigger than 5 GiB.
-
-Note that files uploaded both with multipart upload and through crypt remotes do not have MD5 sums.
-
-rclone switches from single part uploads to multipart uploads at the point specified by --oos-upload-cutoff. This can be a maximum of 5 GiB and a minimum of 0 (ie always upload multipart files).
-
-The chunk sizes used in the multipart upload are specified by --oos-chunk-size and the number of chunks uploaded concurrently is specified by --oos-upload-concurrency.
-
-Multipart uploads will use --transfers * --oos-upload-concurrency * --oos-chunk-size extra memory. Single part uploads to not use extra memory.
-
-Single part transfers can be faster than multipart transfers or slower depending on your latency from oos - the more latency, the more likely single part transfers will be faster.
-
-Increasing --oos-upload-concurrency will increase throughput (8 would be a sensible value) and increasing --oos-chunk-size also increases throughput (16M would be sensible). Increasing either of these will use more memory. The default values are high enough to gain most of the possible performance without using too much memory.
-
-Standard options
-
-Here are the Standard options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).
-
---oos-provider
-
-Choose your Auth Provider
-
-Properties:
-
-• Config: provider
-• Env Var: RCLONE_OOS_PROVIDER
-• Type: string
-• Default: "env_auth"
-• Examples:
-  • "env_auth"
-    • automatically pickup the credentials from runtime(env), first one to provide auth wins
-  • "user_principal_auth"
-    • use an OCI user and an API key for authentication.
-    • you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key.
-    • https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
-  • "instance_principal_auth"
-    • use instance principals to authorize an instance to make API calls.
-    • each instance has its own identity, and authenticates using the certificates that are read from instance metadata.
-    • https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
-  • "resource_principal_auth"
-    • use resource principals to make API calls
-  • "no_auth"
-    • no credentials needed, this is typically for reading public buckets
-
---oos-namespace
-
-Object storage namespace
-
-Properties:
-
-• Config: namespace
-• Env Var: RCLONE_OOS_NAMESPACE
-• Type: string
-• Required: true
-
---oos-compartment
-
-Object storage compartment OCID
-
-Properties:
-
-• Config: compartment
-• Env Var: RCLONE_OOS_COMPARTMENT
-• Provider: !no_auth
-• Type: string
-• Required: true
-
---oos-region
-
-Object storage Region
-
-Properties:
-
-• Config: region
-• Env Var: RCLONE_OOS_REGION
-• Type: string
-• Required: true
-
---oos-endpoint
-
-Endpoint for Object storage API.
-
-Leave blank to use the default endpoint for the region.
-
-Properties:
-
-• Config: endpoint
-• Env Var: RCLONE_OOS_ENDPOINT
-• Type: string
-• Required: false
-
---oos-config-file
-
-Path to OCI config file
-
-Properties:
-
-• Config: config_file
-• Env Var: RCLONE_OOS_CONFIG_FILE
-• Provider: user_principal_auth
-• Type: string
-• Default: "~/.oci/config"
-• Examples:
-  • "~/.oci/config"
-    • oci configuration file location
-
---oos-config-profile
-
-Profile name inside the oci config file
-
-Properties:
-
-• Config: config_profile
-• Env Var: RCLONE_OOS_CONFIG_PROFILE
-• Provider: user_principal_auth
-• Type: string
-• Default: "Default"
-• Examples:
-  • "Default"
-    • Use the default profile
-
-Advanced options
-
-Here are the Advanced options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).
-
---oos-storage-tier
-
-The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm
-
-Properties:
-
-• Config: storage_tier
-• Env Var: RCLONE_OOS_STORAGE_TIER
-• Type: string
-• Default: "Standard"
-• Examples:
-  • "Standard"
-    • Standard storage tier, this is the default tier
-  • "InfrequentAccess"
-    • InfrequentAccess storage tier
-  • "Archive"
-    • Archive storage tier
-
---oos-upload-cutoff
-
-Cutoff for switching to chunked upload.
-
-Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5 GiB.
-
-Properties:
-
-• Config: upload_cutoff
-• Env Var: RCLONE_OOS_UPLOAD_CUTOFF
-• Type: SizeSuffix
-• Default: 200Mi
-
---oos-chunk-size
-
-Chunk size to use for uploading.
-
-When uploading files larger than upload_cutoff or files with unknown size (e.g. from "rclone rcat" or uploaded with "rclone mount" or google photos or google docs) they will be uploaded as multipart uploads using this chunk size.
-
-Note that "upload_concurrency" chunks of this size are buffered in memory per transfer.
-
-If you are transferring large files over high-speed links and you have enough memory, then increasing this will speed up the transfers.
-
-Rclone will automatically increase the chunk size when uploading a large file of known size to stay below the 10,000 chunks limit.
-
-Files of unknown size are uploaded with the configured chunk_size. Since the default chunk size is 5 MiB and there can be at most 10,000 chunks, this means that by default the maximum size of a file you can stream upload is 48 GiB. If you wish to stream upload larger files then you will need to increase chunk_size.
-
-Increasing the chunk size decreases the accuracy of the progress statistics displayed with "-P" flag.
-
-Properties:
-
-• Config: chunk_size
-• Env Var: RCLONE_OOS_CHUNK_SIZE
-• Type: SizeSuffix
-• Default: 5Mi
-
---oos-upload-concurrency
-
-Concurrency for multipart uploads.
-
-This is the number of chunks of the same file that are uploaded concurrently.
-
-If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.
-
-Properties:
-
-• Config: upload_concurrency
-• Env Var: RCLONE_OOS_UPLOAD_CONCURRENCY
-• Type: int
-• Default: 10
-
---oos-copy-cutoff
-
-Cutoff for switching to multipart copy.
-
-Any files larger than this that need to be server-side copied will be copied in chunks of this size.
-
-The minimum is 0 and the maximum is 5 GiB.
-
-Properties:
-
-• Config: copy_cutoff
-• Env Var: RCLONE_OOS_COPY_CUTOFF
-• Type: SizeSuffix
-• Default: 4.656Gi
-
---oos-copy-timeout
-
-Timeout for copy.
-
-Copy is an asynchronous operation, specify timeout to wait for copy to succeed
-
-Properties:
-
-• Config: copy_timeout
-• Env Var: RCLONE_OOS_COPY_TIMEOUT
-• Type: Duration
-• Default: 1m0s
-
---oos-disable-checksum
-
-Don't store MD5 checksum with object metadata.
-
-Normally rclone will calculate the MD5 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading.
-
-Properties:
-
-• Config: disable_checksum
-• Env Var: RCLONE_OOS_DISABLE_CHECKSUM
-• Type: bool
-• Default: false
-
---oos-encoding
-
-The encoding for the backend.
-
-See the encoding section in the overview for more info.
-
-Properties:
-
-• Config: encoding
-• Env Var: RCLONE_OOS_ENCODING
-• Type: MultiEncoder
-• Default: Slash,InvalidUtf8,Dot
-
---oos-leave-parts-on-error
-
-If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery.
-
-It should be set to true for resuming uploads across different sessions.
-
-WARNING: Storing parts of an incomplete multipart upload counts towards space usage on object storage and will add additional costs if not cleaned up.
-
-Properties:
-
-• Config: leave_parts_on_error
-• Env Var: RCLONE_OOS_LEAVE_PARTS_ON_ERROR
-• Type: bool
-• Default: false
-
---oos-no-check-bucket
-
-If set, don't attempt to check the bucket exists or create it.
-
-This can be useful when trying to minimise the number of transactions rclone does if you know the bucket exists already.
-
-It can also be needed if the user you are using does not have bucket creation permissions.
-
-Properties:
-
-• Config: no_check_bucket
-• Env Var: RCLONE_OOS_NO_CHECK_BUCKET
-• Type: bool
-• Default: false
-
---oos-sse-customer-key-file
-
-To use SSE-C, a file containing the base64-encoded string of the AES-256 encryption key associated with the object. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
-
-Properties:
-
-• Config: sse_customer_key_file
-• Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_FILE
-• Type: string
-• Required: false
-• Examples:
-  • ""
-    • None
-
---oos-sse-customer-key
-
-To use SSE-C, the optional header that specifies the base64-encoded 256-bit encryption key to use to encrypt or decrypt the data. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed. For more information, see Using Your Own Keys for Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm)
-
-Properties:
-
-• Config: sse_customer_key
-• Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY
-• Type: string
-• Required: false
-• Examples:
-  • ""
-    • None
-
---oos-sse-customer-key-sha256
-
-If using SSE-C, the optional header that specifies the base64-encoded SHA256 hash of the encryption key. This value is used to check the integrity of the encryption key. See Using Your Own Keys for Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
-
-Properties:
-
-• Config: sse_customer_key_sha256
-• Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_SHA256
-• Type: string
-• Required: false
-• Examples:
-  • ""
-    • None
-
---oos-sse-kms-key-id
-
-If using your own master key in vault, this header specifies the OCID (https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm) of a master encryption key used to call the Key Management service to generate a data encryption key or to encrypt or decrypt a data encryption key. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
-
-Properties:
-
-• Config: sse_kms_key_id
-• Env Var: RCLONE_OOS_SSE_KMS_KEY_ID
-• Type: string
-• Required: false
-• Examples:
-  • ""
-    • None
-
---oos-sse-customer-algorithm
-
-If using SSE-C, the optional header that specifies "AES256" as the encryption algorithm. Object Storage supports "AES256" as the encryption algorithm. For more information, see Using Your Own Keys for Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
-
-Properties:
-
-• Config: sse_customer_algorithm
-• Env Var: RCLONE_OOS_SSE_CUSTOMER_ALGORITHM
-• Type: string
-• Required: false
-• Examples:
-  • ""
-    • None
-  • "AES256"
-    • AES256
-
-Backend commands
-
-Here are the commands specific to the oracleobjectstorage backend.
-
-Run them with
-
-    rclone backend COMMAND remote:
-
-The help below will explain what arguments each command takes.
-
-See the backend command for more info on how to pass options and arguments.
-
-These can be run on a running backend using the rc command backend/command.
-
-rename
-
-change the name of an object
-
-    rclone backend rename remote: [options] [<arguments>+]
-
-This command can be used to rename a object.
-
-Usage Examples:
-
-    rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name
-
-list-multipart-uploads
-
-List the unfinished multipart uploads
-
-    rclone backend list-multipart-uploads remote: [options] [<arguments>+]
-
-This command lists the unfinished multipart uploads in JSON format.
-
-    rclone backend list-multipart-uploads oos:bucket/path/to/object
-
-It returns a dictionary of buckets with values as lists of unfinished multipart uploads.
-
-You can call it with no bucket in which case it lists all bucket, with a bucket or with a bucket and path.
-
-    {
-      "test-bucket": [
-                {
-                        "namespace": "test-namespace",
-                        "bucket": "test-bucket",
-                        "object": "600m.bin",
-                        "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
-                        "timeCreated": "2022-07-29T06:21:16.595Z",
-                        "storageTier": "Standard"
-                }
-        ]
-
-cleanup
-
-Remove unfinished multipart uploads.
-
-    rclone backend cleanup remote: [options] [<arguments>+]
-
-This command removes unfinished multipart uploads of age greater than max-age which defaults to 24 hours.
-
-Note that you can use --interactive/-i or --dry-run with this command to see what it would do.
-
-    rclone backend cleanup oos:bucket/path/to/object
-    rclone backend cleanup -o max-age=7w oos:bucket/path/to/object
-
-Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
-
-Options:
-
-• "max-age": Max age of upload to delete
-
-QingStor
-
-Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.
-
-Configuration
-
-Here is an example of making an QingStor configuration. First run
-
-    rclone config
-
-This will guide you through an interactive setup process.
-
-No remotes found, make a new one?
        -n) New remote
        -r) Rename remote
        -c) Copy remote
        -s) Set configuration password
        -q) Quit config
        -n/r/c/s/q> n
        -name> remote
        -Type of storage to configure.
        -Choose a number from below, or type in your own value
        -[snip]
        -XX / QingStor Object Storage
        -   \ "qingstor"
        -[snip]
        -Storage> qingstor
        -Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
        -Choose a number from below, or type in your own value
        - 1 / Enter QingStor credentials in the next step
        -   \ "false"
        - 2 / Get QingStor credentials from the environment (env vars or IAM)
        -   \ "true"
        -env_auth> 1
        -QingStor Access Key ID - leave blank for anonymous access or runtime credentials.
        -access_key_id> access_key
        -QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
        -secret_access_key> secret_key
        +
        +- "max-age": Max age of upload to delete
        +
        +
        +
        +## Tutorials
        +### [Mounting Buckets](https://rclone.org/oracleobjectstorage/tutorial_mount/)
        +
        +#  QingStor
        +
        +Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
        +command.)  You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
        +
        +## Configuration
        +
+Here is an example of making a QingStor configuration.  First run
        +
        +    rclone config
        +
        +This will guide you through an interactive setup process.
        +
        +

 No remotes found, make a new one?
 n) New remote
 r) Rename remote
 c) Copy remote
 s) Set configuration password
 q) Quit config
 n/r/c/s/q> n
 name> remote
 Type of storage to configure.
 Choose a number from below, or type in your own value
 [snip]
 XX / QingStor Object Storage
    \ "qingstor"
 [snip]
 Storage> qingstor
 Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
 Choose a number from below, or type in your own value
  1 / Enter QingStor credentials in the next step
    \ "false"
  2 / Get QingStor credentials from the environment (env vars or IAM)
    \ "true"
 env_auth> 1
 QingStor Access Key ID - leave blank for anonymous access or runtime credentials.
 access_key_id> access_key
 QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
 secret_access_key> secret_key
 Enter an endpoint URL to connection QingStor API.
 Leave blank will use the default value "https://qingstor.com:443"
 endpoint>
 Zone connect to. Default is "pek3a".
 Choose a number from below, or type in your own value
    / The Beijing (China) Three Zone
  1 | Needs location constraint pek3a.
    \ "pek3a"
    / The Shanghai (China) First Zone
  2 | Needs location constraint sh1a.
    \ "sh1a"
 zone> 1
 Number of connection retry.
 Leave blank will use the default value "3".
 connection_retries>
 Remote config
 --------------------
 [remote]
 env_auth = false
 access_key_id = access_key
 secret_access_key = secret_key
 endpoint =
 zone = pek3a
 connection_retries =
 --------------------
 y) Yes this is OK
 e) Edit this remote
 d) Delete this remote
 y/e/d> y

        +
        
        +This remote is called `remote` and can now be used like this
        +
        +See all buckets
        +
        +    rclone lsd remote:
        +
        +Make a new bucket
        +
        +    rclone mkdir remote:bucket
        +
        +List the contents of a bucket
        +
        +    rclone ls remote:bucket
        +
        +Sync `/home/local/directory` to the remote bucket, deleting any excess
        +files in the bucket.
        +
        +    rclone sync --interactive /home/local/directory remote:bucket
        +
        +### --fast-list
        +
        +This remote supports `--fast-list` which allows you to use fewer
        +transactions in exchange for more memory. See the [rclone
        +docs](https://rclone.org/docs/#fast-list) for more details.
        +
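+For example, to list a bucket using fewer transactions:
+
+    rclone ls --fast-list remote:bucket
+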
        +### Multipart uploads
        +
        +rclone supports multipart uploads with QingStor which means that it can
        +upload files bigger than 5 GiB. Note that files uploaded with multipart
        +upload don't have an MD5SUM.
        +
+Note that incomplete multipart uploads older than 24 hours can be
+removed with `rclone cleanup remote:bucket` for just one bucket, or
+`rclone cleanup remote:` for all buckets. QingStor does not ever
        +remove incomplete multipart uploads so it may be necessary to run this
        +from time to time.
        +
        +### Buckets and Zone
        +
        +With QingStor you can list buckets (`rclone lsd`) using any zone,
        +but you can only access the content of a bucket from the zone it was
        +created in.  If you attempt to access a bucket from the wrong zone,
        +you will get an error, `incorrect zone, the bucket is not in 'XXX'
        +zone`.
        +
        +### Authentication
        +
        +There are two ways to supply `rclone` with a set of QingStor
        +credentials. In order of precedence:
        +
        + - Directly in the rclone configuration file (as configured by `rclone config`)
        +   - set `access_key_id` and `secret_access_key`
        + - Runtime configuration:
        +   - set `env_auth` to `true` in the config file
        +   - Exporting the following environment variables before running `rclone`
        +     - Access Key ID: `QS_ACCESS_KEY_ID` or `QS_ACCESS_KEY`
        +     - Secret Access Key: `QS_SECRET_ACCESS_KEY` or `QS_SECRET_KEY`
        +
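+For example, a sketch of the runtime-credentials route (placeholder values,
+assuming a remote configured with `env_auth = true`):
+
+    export QS_ACCESS_KEY_ID=AKIDEXAMPLE
+    export QS_SECRET_ACCESS_KEY=secretEXAMPLE
+    rclone lsd remote:
+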
        +### Restricted filename characters
        +
        +The control characters 0x00-0x1F and / are replaced as in the [default
        +restricted characters set](https://rclone.org/overview/#restricted-characters).  Note
        +that 0x7F is not replaced.
        +
        +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
        +as they can't be used in JSON strings.
        +
        +
        +### Standard options
        +
        +Here are the Standard options specific to qingstor (QingCloud Object Storage).
        +
        +#### --qingstor-env-auth
        +
        +Get QingStor credentials from runtime.
        +
+Only applies if access_key_id and secret_access_key are blank.
        +
        +Properties:
        +
        +- Config:      env_auth
        +- Env Var:     RCLONE_QINGSTOR_ENV_AUTH
        +- Type:        bool
        +- Default:     false
        +- Examples:
        +    - "false"
        +        - Enter QingStor credentials in the next step.
        +    - "true"
        +        - Get QingStor credentials from the environment (env vars or IAM).
        +
        +#### --qingstor-access-key-id
        +
        +QingStor Access Key ID.
        +
        +Leave blank for anonymous access or runtime credentials.
        +
        +Properties:
        +
        +- Config:      access_key_id
        +- Env Var:     RCLONE_QINGSTOR_ACCESS_KEY_ID
        +- Type:        string
        +- Required:    false
        +
        +#### --qingstor-secret-access-key
        +
        +QingStor Secret Access Key (password).
        +
        +Leave blank for anonymous access or runtime credentials.
        +
        +Properties:
        +
        +- Config:      secret_access_key
        +- Env Var:     RCLONE_QINGSTOR_SECRET_ACCESS_KEY
        +- Type:        string
        +- Required:    false
        +
        +#### --qingstor-endpoint
        +
 Enter an endpoint URL to connect to the QingStor API.
        -Leave blank will use the default value "https://qingstor.com:443"
        -endpoint>
        -Zone connect to. Default is "pek3a".
        -Choose a number from below, or type in your own value
        -   / The Beijing (China) Three Zone
        - 1 | Needs location constraint pek3a.
        -   \ "pek3a"
        -   / The Shanghai (China) First Zone
        - 2 | Needs location constraint sh1a.
        -   \ "sh1a"
        -zone> 1
        -Number of connection retry.
        -Leave blank will use the default value "3".
        -connection_retries>
        -Remote config
        ---------------------
        -[remote]
        -env_auth = false
        -access_key_id = access_key
        -secret_access_key = secret_key
        -endpoint =
        -zone = pek3a
        -connection_retries =
        ---------------------
        -y) Yes this is OK
        -e) Edit this remote
        -d) Delete this remote
        -y/e/d> y
-
-This remote is called remote and can now be used like this
-
-See all buckets
-
-    rclone lsd remote:
-
-Make a new bucket
-
-    rclone mkdir remote:bucket
-
-List the contents of a bucket
-
-    rclone ls remote:bucket
-
-Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.
-
-    rclone sync --interactive /home/local/directory remote:bucket
-
---fast-list
-
-This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
-
-Multipart uploads
-
-rclone supports multipart uploads with QingStor which means that it can upload files bigger than 5 GiB. Note that files uploaded with multipart upload don't have an MD5SUM.
-
-Note that incomplete multipart uploads older than 24 hours can be removed with rclone cleanup remote:bucket just for one bucket rclone cleanup remote: for all buckets. QingStor does not ever remove incomplete multipart uploads so it may be necessary to run this from time to time.
-
-Buckets and Zone
-
-With QingStor you can list buckets (rclone lsd) using any zone, but you can only access the content of a bucket from the zone it was created in. If you attempt to access a bucket from the wrong zone, you will get an error, incorrect zone, the bucket is not in 'XXX' zone.
-
-Authentication
-
-There are two ways to supply rclone with a set of QingStor credentials. In order of precedence:
-
-• Directly in the rclone configuration file (as configured by rclone config)
-  • set access_key_id and secret_access_key
-• Runtime configuration:
-  • set env_auth to true in the config file
-  • Exporting the following environment variables before running rclone
-    • Access Key ID: QS_ACCESS_KEY_ID or QS_ACCESS_KEY
-    • Secret Access Key: QS_SECRET_ACCESS_KEY or QS_SECRET_KEY
-
-Restricted filename characters
-
-The control characters 0x00-0x1F and / are replaced as in the default restricted characters set. Note that 0x7F is not replaced.
-
-Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-
-Standard options
-
-Here are the Standard options specific to qingstor (QingCloud Object Storage).
-
---qingstor-env-auth
-
-Get QingStor credentials from runtime.
-
-Only applies if access_key_id and secret_access_key is blank.
-
-Properties:
-
-• Config: env_auth
-• Env Var: RCLONE_QINGSTOR_ENV_AUTH
-• Type: bool
-• Default: false
-• Examples:
-  • "false"
-    • Enter QingStor credentials in the next step.
-  • "true"
-    • Get QingStor credentials from the environment (env vars or IAM).
-
---qingstor-access-key-id
-
-QingStor Access Key ID.
-
-Leave blank for anonymous access or runtime credentials.
-
-Properties:
-
-• Config: access_key_id
-• Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID
-• Type: string
-• Required: false
-
---qingstor-secret-access-key
-
-QingStor Secret Access Key (password).
-
-Leave blank for anonymous access or runtime credentials.
-
-Properties:
-
-• Config: secret_access_key
-• Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY
-• Type: string
-• Required: false
-
---qingstor-endpoint
-
-Enter an endpoint URL to connection QingStor API.
-
-Leave blank will use the default value "https://qingstor.com:443".
-
-Properties:
-
-• Config: endpoint
-• Env Var: RCLONE_QINGSTOR_ENDPOINT
-• Type: string
-• Required: false
-
---qingstor-zone
-
-Zone to connect to.
-
-Default is "pek3a".
-
-Properties:
-
-• Config: zone
-• Env Var: RCLONE_QINGSTOR_ZONE
-• Type: string
-• Required: false
-• Examples:
-  • "pek3a"
-    • The Beijing (China) Three Zone.
-    • Needs location constraint pek3a.
-  • "sh1a"
-    • The Shanghai (China) First Zone.
-    • Needs location constraint sh1a.
-  • "gd2a"
-    • The Guangdong (China) Second Zone.
-    • Needs location constraint gd2a.
-
-Advanced options
-
-Here are the Advanced options specific to qingstor (QingCloud Object Storage).
-
---qingstor-connection-retries
-
-Number of connection retries.
-
-Properties:
-
-• Config: connection_retries
-• Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES
-• Type: int
-• Default: 3
-
---qingstor-upload-cutoff
-
-Cutoff for switching to chunked upload.
-
-Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5 GiB.
-
-Properties:
-
-• Config: upload_cutoff
-• Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF
-• Type: SizeSuffix
-• Default: 200Mi
-
---qingstor-chunk-size
-
-Chunk size to use for uploading.
-
-When uploading files larger than upload_cutoff they will be uploaded as multipart uploads using this chunk size.
-
-Note that "--qingstor-upload-concurrency" chunks of this size are buffered in memory per transfer.
-
-If you are transferring large files over high-speed links and you have enough memory, then increasing this will speed up the transfers.
-
-Properties:
-
-• Config: chunk_size
-• Env Var: RCLONE_QINGSTOR_CHUNK_SIZE
-• Type: SizeSuffix
-• Default: 4Mi
-
---qingstor-upload-concurrency
-
-Concurrency for multipart uploads.
-
-This is the number of chunks of the same file that are uploaded concurrently.
-
-NB if you set this to > 1 then the checksums of multipart uploads become corrupted (the uploads themselves are not corrupted though).
-
-If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.
-
-Properties:
-
-• Config: upload_concurrency
-• Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY
-• Type: int
-• Default: 1
-
---qingstor-encoding
-
-The encoding for the backend.
-
-See the encoding section in the overview for more info.
-
-Properties:
-
-• Config: encoding
-• Env Var: RCLONE_QINGSTOR_ENCODING
-• Type: MultiEncoder
-• Default: Slash,Ctl,InvalidUtf8
-
-Limitations
-
-rclone about is not supported by the qingstor backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.
-
-See List of backends that do not support rclone about and rclone about
-
-Sia
-
-Sia (sia.tech) is a decentralized cloud storage platform based on the blockchain technology. With rclone you can use it like any other remote filesystem or mount Sia folders locally. The technology behind it involves a number of new concepts such as Siacoins and Wallet, Blockchain and Consensus, Renting and Hosting, and so on. If you are new to it, you'd better first familiarize yourself using their excellent support documentation.
-
-Introduction
-
-Before you can use rclone with Sia, you will need to have a running copy of Sia-UI or siad (the Sia daemon) locally on your computer or on local network (e.g. a NAS). Please follow the Get started guide and install one.
-
-rclone interacts with Sia network by talking to the Sia daemon via HTTP API which is usually available on port 9980. By default you will run the daemon locally on the same computer so it's safe to leave the API password blank (the API URL will be http://127.0.0.1:9980 making external access impossible).
-
-However, if you want to access Sia daemon running on another node, for example due to memory constraints or because you want to share single daemon between several rclone and Sia-UI instances, you'll need to make a few more provisions:
-
-• Ensure you have Sia daemon installed directly or in a docker container because Sia-UI does not support this mode natively.
-• Run it on externally accessible port, for example provide --api-addr :9980 and --disable-api-security arguments on the daemon command line.
-• Enforce API password for the siad daemon via environment variable SIA_API_PASSWORD or text file named apipassword in the daemon directory.
-• Set rclone backend option api_password taking it from above locations.
-
-Notes:
-
-1. If your wallet is locked, rclone cannot unlock it automatically. You should either unlock it in advance by using Sia-UI or via command line siac wallet unlock. Alternatively you can make siad unlock your wallet automatically upon startup by running it with environment variable SIA_WALLET_PASSWORD.
-2. If siad cannot find the SIA_API_PASSWORD variable or the apipassword file in the SIA_DIR directory, it will generate a random password and store in the text file named apipassword under YOUR_HOME/.sia/ directory on Unix or C:\Users\YOUR_HOME\AppData\Local\Sia\apipassword on Windows. Remember this when you configure password in rclone.
-3. The only way to use siad without API password is to run it on localhost with command line argument --authorize-api=false, but this is insecure and strongly discouraged.
-
-Configuration
-
-Here is an example of how to make a sia remote called mySia. First, run:
-
-    rclone config
-
-This will guide you through an interactive setup process:
-
        No remotes found, make a new one?
        -n) New remote
        -s) Set configuration password
        -q) Quit config
        -n/s/q> n
        -name> mySia
        -Type of storage to configure.
        -Enter a string value. Press Enter for the default ("").
        -Choose a number from below, or type in your own value
        -...
        -29 / Sia Decentralized Cloud
        -   \ "sia"
        -...
        -Storage> sia
        +
        +Leave blank will use the default value "https://qingstor.com:443".
        +
        +Properties:
        +
        +- Config:      endpoint
        +- Env Var:     RCLONE_QINGSTOR_ENDPOINT
        +- Type:        string
        +- Required:    false
        +
        +#### --qingstor-zone
        +
        +Zone to connect to.
        +
        +Default is "pek3a".
        +
        +Properties:
        +
        +- Config:      zone
        +- Env Var:     RCLONE_QINGSTOR_ZONE
        +- Type:        string
        +- Required:    false
        +- Examples:
        +    - "pek3a"
        +        - The Beijing (China) Three Zone.
        +        - Needs location constraint pek3a.
        +    - "sh1a"
        +        - The Shanghai (China) First Zone.
        +        - Needs location constraint sh1a.
        +    - "gd2a"
        +        - The Guangdong (China) Second Zone.
        +        - Needs location constraint gd2a.
        +
        +### Advanced options
        +
        +Here are the Advanced options specific to qingstor (QingCloud Object Storage).
        +
        +#### --qingstor-connection-retries
        +
        +Number of connection retries.
        +
        +Properties:
        +
        +- Config:      connection_retries
        +- Env Var:     RCLONE_QINGSTOR_CONNECTION_RETRIES
        +- Type:        int
        +- Default:     3
        +
        +#### --qingstor-upload-cutoff
        +
        +Cutoff for switching to chunked upload.
        +
        +Any files larger than this will be uploaded in chunks of chunk_size.
        +The minimum is 0 and the maximum is 5 GiB.
        +
        +Properties:
        +
        +- Config:      upload_cutoff
        +- Env Var:     RCLONE_QINGSTOR_UPLOAD_CUTOFF
        +- Type:        SizeSuffix
        +- Default:     200Mi
        +
        +#### --qingstor-chunk-size
        +
        +Chunk size to use for uploading.
        +
        +When uploading files larger than upload_cutoff they will be uploaded
        +as multipart uploads using this chunk size.
        +
        +Note that "--qingstor-upload-concurrency" chunks of this size are buffered
        +in memory per transfer.
        +
        +If you are transferring large files over high-speed links and you have
        +enough memory, then increasing this will speed up the transfers.
        +
        +Properties:
        +
        +- Config:      chunk_size
        +- Env Var:     RCLONE_QINGSTOR_CHUNK_SIZE
        +- Type:        SizeSuffix
        +- Default:     4Mi
        +
        +#### --qingstor-upload-concurrency
        +
        +Concurrency for multipart uploads.
        +
        +This is the number of chunks of the same file that are uploaded
        +concurrently.
        +
        +NB if you set this to > 1 then the checksums of multipart uploads
        +become corrupted (the uploads themselves are not corrupted though).
        +
        +If you are uploading small numbers of large files over high-speed links
        +and these uploads do not fully utilize your bandwidth, then increasing
        +this may help to speed up the transfers.
        +
        +Properties:
        +
        +- Config:      upload_concurrency
        +- Env Var:     RCLONE_QINGSTOR_UPLOAD_CONCURRENCY
        +- Type:        int
        +- Default:     1
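+These two options interact: each transfer buffers upload_concurrency
+chunks of chunk_size in memory. As a minimal sketch of tuning for large
+files over a fast link via the environment variables above (the bucket
+and path are placeholders; concurrency stays at the default of 1
+because of the multipart checksum caveat):
+
+```
+export RCLONE_QINGSTOR_CHUNK_SIZE=16Mi
+export RCLONE_QINGSTOR_UPLOAD_CUTOFF=200Mi
+rclone copy /path/to/bigfiles remote:bucket
+```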
        +
        +#### --qingstor-encoding
        +
        +The encoding for the backend.
        +
        +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
        +
        +Properties:
        +
        +- Config:      encoding
        +- Env Var:     RCLONE_QINGSTOR_ENCODING
        +- Type:        MultiEncoder
        +- Default:     Slash,Ctl,InvalidUtf8
        +
        +
        +
        +## Limitations
        +
        +`rclone about` is not supported by the qingstor backend. Backends without
        +this capability cannot determine free space for an rclone mount or
        +use policy `mfs` (most free space) as a member of an rclone union
        +remote.
        +
        +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
        +
        +#  Quatrix
        +
        +Quatrix by Maytech is [Quatrix Secure Compliant File Sharing | Maytech](https://www.maytech.net/products/quatrix-business).
        +
        +Paths are specified as `remote:path`
        +
        +Paths may be as deep as required, e.g., `remote:directory/subdirectory`.
        +
        +The initial setup for Quatrix involves getting an API Key from Quatrix. You can get the API key in the user's profile at `https://<account>/profile/api-keys`
        +or with the help of the API - https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create.
        +
        +See complete Swagger documentation for Quatrix - https://docs.maytech.net/quatrix/quatrix-api/api-explorer
        +
        +## Configuration
        +
        +Here is an example of how to make a remote called `remote`.  First run:
        +
        +     rclone config
        +
        +This will guide you through an interactive setup process:
        +
        +

```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Quatrix by Maytech
   \ "quatrix"
[snip]
Storage> quatrix
API key for accessing Quatrix account.
api_key> your_api_key
Host name of Quatrix account.
host> example.quatrix.it
--------------------
[remote]
api_key = your_api_key
host = example.quatrix.it
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```
Once configured you can then use rclone like this,

List directories in top level of your Quatrix

    rclone lsd remote:

List all the files in your Quatrix

    rclone ls remote:

To copy a local directory to a Quatrix directory called backup

    rclone copy /home/source remote:backup

### API key validity

API Keys are created with no expiration date. A key will be valid until you delete or deactivate it in your account. A disabled API Key can be re-enabled. If the API Key was deleted and a new key was created, you can update it in the rclone config. The same applies if the hostname was changed.
```
$ rclone config
Current remotes:

Name                 Type
====                 ====
remote               quatrix

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> e
Choose a number from below, or type in an existing value
 1 > remote
remote> remote
--------------------
[remote]
type = quatrix
host = some_host.quatrix.it
api_key = your_api_key
--------------------
Edit remote
Option api_key.
API key for accessing Quatrix account
Enter a string value. Press Enter for the default (your_api_key)
api_key>
Option host.
Host name of Quatrix account
Enter a string value. Press Enter for the default (some_host.quatrix.it).
host>
--------------------
[remote]
type = quatrix
host = some_host.quatrix.it
api_key = your_api_key
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```
### Modified time and hashes

Quatrix allows modification times to be set on objects accurate to 1 microsecond. These will be used to detect whether objects need syncing or not.

Quatrix does not support hashes, so you cannot use the --checksum flag.

### Restricted filename characters

File names in Quatrix are case sensitive and have limitations: the maximum length of a filename is 255 characters and the minimum is 1. A file name cannot be equal to . or .., nor contain / , \ or non-printable ascii.

### Transfers

For files above 50 MiB rclone will use a chunked transfer. Rclone will upload up to --transfers chunks at the same time (shared among all multipart uploads). Chunks are buffered in memory, and the minimal chunk size is 10_000_000 bytes by default, so increasing --transfers will increase memory use. The chunk size has a maximum limit, which is set to 100_000_000 bytes by default; both limits can be changed in the advanced configuration. The size of the uploaded chunk changes dynamically depending on the upload speed. The total memory use equals the number of transfers multiplied by the minimal chunk size. If there is free memory allocated for the upload (which equals the difference between maximal_summary_chunk_size and minimal_chunk_size * transfers), the chunk size may grow while the upload speed is high and shrink when there are upload speed problems. If no free memory is available, all chunks will equal minimal_chunk_size.
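As a rough worked example of the memory model above, take the default limits (minimal chunk size 10_000_000 bytes, maximal summary 100_000_000 bytes) and an assumed --transfers 4:

    base memory = transfers * minimal_chunk_size
                = 4 * 10_000_000 bytes   (~40 MB)
    headroom    = maximal_summary_chunk_size - transfers * minimal_chunk_size
                = 100_000_000 - 40_000_000 bytes   (~60 MB)

Each of the four chunks starts at ~10 MB and may grow while the shared ~60 MB of headroom lasts; with no headroom left, every chunk stays at minimal_chunk_size.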
### Deleting files

Files you delete with rclone will end up in Trash and be stored there for 30 days. Quatrix also provides an API to permanently delete files and an API to empty the Trash so that you can remove files permanently from your account.

### Standard options

Here are the Standard options specific to quatrix (Quatrix by Maytech).

#### --quatrix-api-key

API key for accessing Quatrix account

Properties:

- Config:      api_key
- Env Var:     RCLONE_QUATRIX_API_KEY
- Type:        string
- Required:    true

#### --quatrix-host

Host name of Quatrix account

Properties:

- Config:      host
- Env Var:     RCLONE_QUATRIX_HOST
- Type:        string
- Required:    true

### Advanced options

Here are the Advanced options specific to quatrix (Quatrix by Maytech).

#### --quatrix-encoding

The encoding for the backend.

See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.

Properties:

- Config:      encoding
- Env Var:     RCLONE_QUATRIX_ENCODING
- Type:        MultiEncoder
- Default:     Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot

#### --quatrix-effective-upload-time

Wanted upload time for one chunk

Properties:

- Config:      effective_upload_time
- Env Var:     RCLONE_QUATRIX_EFFECTIVE_UPLOAD_TIME
- Type:        string
- Default:     "4s"

#### --quatrix-minimal-chunk-size

The minimal size for one chunk

Properties:

- Config:      minimal_chunk_size
- Env Var:     RCLONE_QUATRIX_MINIMAL_CHUNK_SIZE
- Type:        SizeSuffix
- Default:     9.537Mi

#### --quatrix-maximal-summary-chunk-size

The maximal summary for all chunks. It should not be less than 'transfers'*'minimal_chunk_size'

Properties:

- Config:      maximal_summary_chunk_size
- Env Var:     RCLONE_QUATRIX_MAXIMAL_SUMMARY_CHUNK_SIZE
- Type:        SizeSuffix
- Default:     95.367Mi

#### --quatrix-hard-delete

Delete files permanently rather than putting them into the trash.

Properties:

- Config:      hard_delete
- Env Var:     RCLONE_QUATRIX_HARD_DELETE
- Type:        bool
- Default:     false

## Storage usage

The storage usage in Quatrix is restricted to the account during the purchase. You can restrict any user with a smaller storage limit. The account limit is applied if the user has no custom storage limit. Once you've reached the limit, the upload of files will fail. This can be fixed by freeing up space or increasing the quota.

## Server-side operations

Quatrix supports server-side operations (copy and move). In case of conflict, files are overwritten during a server-side operation.
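For example, a copy whose source and destination are both on the same Quatrix remote will be carried out server-side (a sketch; the paths are placeholders, and note the overwrite-on-conflict behaviour above):

    rclone copy remote:reports remote:reports-archive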
# Sia

Sia (sia.tech) is a decentralized cloud storage platform based on the blockchain technology. With rclone you can use it like any other remote filesystem or mount Sia folders locally. The technology behind it involves a number of new concepts such as Siacoins and Wallet, Blockchain and Consensus, Renting and Hosting, and so on. If you are new to it, it's best to first familiarize yourself using their excellent support documentation.

## Introduction

Before you can use rclone with Sia, you will need to have a running copy of Sia-UI or siad (the Sia daemon) locally on your computer or on local network (e.g. a NAS). Please follow the Get started guide and install one.

rclone interacts with the Sia network by talking to the Sia daemon via HTTP API, which is usually available on port 9980. By default you will run the daemon locally on the same computer, so it's safe to leave the API password blank (the API URL will be http://127.0.0.1:9980, making external access impossible).

However, if you want to access a Sia daemon running on another node, for example due to memory constraints or because you want to share a single daemon between several rclone and Sia-UI instances, you'll need to make a few more provisions:

- Ensure you have the Sia daemon installed directly or in a docker container, because Sia-UI does not support this mode natively.
- Run it on an externally accessible port, for example by providing the --api-addr :9980 and --disable-api-security arguments on the daemon command line.
- Enforce an API password for the siad daemon via the environment variable SIA_API_PASSWORD or a text file named apipassword in the daemon directory.
- Set the rclone backend option api_password, taking it from the above locations.

Notes:

1. If your wallet is locked, rclone cannot unlock it automatically. You should either unlock it in advance by using Sia-UI or via the command line siac wallet unlock. Alternatively you can make siad unlock your wallet automatically upon startup by running it with the environment variable SIA_WALLET_PASSWORD.
2. If siad cannot find the SIA_API_PASSWORD variable or the apipassword file in the SIA_DIR directory, it will generate a random password and store it in the text file named apipassword under the YOUR_HOME/.sia/ directory on Unix or C:\Users\YOUR_HOME\AppData\Local\Sia\apipassword on Windows. Remember this when you configure the password in rclone.
3. The only way to use siad without an API password is to run it on localhost with the command line argument --authorize-api=false, but this is insecure and strongly discouraged.

## Configuration

Here is an example of how to make a sia remote called mySia. First, run:

    rclone config

This will guide you through an interactive setup process:

```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> mySia
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
...
29 / Sia Decentralized Cloud
   \ "sia"
...
Storage> sia
Sia daemon API URL, like http://sia.daemon.host:9980.
Note that siad must run with --disable-api-security to open API port for other hosts (not recommended).
Keep default if Sia daemon runs on localhost.
Enter a string value. Press Enter for the default ("http://127.0.0.1:9980").
api_url> http://127.0.0.1:9980
Sia Daemon API Password.
Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g/n> y
Enter the password:
password:
Confirm the password:
password:
Edit advanced config?
y) Yes
n) No (default)
y/n> n
--------------------
[mySia]
type = sia
api_url = http://127.0.0.1:9980
api_password = *** ENCRYPTED ***
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
        +Once configured, you can then use `rclone` like this:
        +
        +- List directories in top level of your Sia storage
+
+```
+rclone lsd mySia:
+```
+
+- List all the files in your Sia storage
+
+```
+rclone ls mySia:
+```
+
+- Upload a local directory to the Sia directory called _backup_
+
+```
+rclone copy /home/source mySia:backup
+```
+
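+If your Sia daemon runs on another host as described in the
+introduction, one way to try it out without editing the config file is
+the backend's environment variables plus an on-the-fly `:sia:` remote
+(a sketch; the host name and password are placeholders):
+
+```
+export RCLONE_SIA_API_URL=http://sia.example.net:9980
+export RCLONE_SIA_API_PASSWORD=$(rclone obscure YOUR_API_PASSWORD)
+rclone lsd :sia:
+```
+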
        +### Standard options
        +
        +Here are the Standard options specific to sia (Sia Decentralized Cloud).
        +
        +#### --sia-api-url
        +
         Sia daemon API URL, like http://sia.daemon.host:9980.
        +
         Note that siad must run with --disable-api-security to open API port for other hosts (not recommended).
         Keep default if Sia daemon runs on localhost.
        +
        +Properties:
        +
        +- Config:      api_url
        +- Env Var:     RCLONE_SIA_API_URL
        +- Type:        string
        +- Default:     "http://127.0.0.1:9980"
        +
        +#### --sia-api-password
        +
         Sia Daemon API Password.
        +
         Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory.
+
+**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+Properties:
+
+- Config:      api_password
+- Env Var:     RCLONE_SIA_API_PASSWORD
+- Type:        string
+- Required:    false
+
+### Advanced options
+
+Here are the Advanced options specific to sia (Sia Decentralized Cloud).
+
+#### --sia-user-agent
+
+Siad User Agent
+
+Sia daemon requires the 'Sia-Agent' user agent by default for security
+
+Properties:
+
+- Config:      user_agent
+- Env Var:     RCLONE_SIA_USER_AGENT
+- Type:        string
+- Default:     "Sia-Agent"
+
+#### --sia-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config:      encoding
+- Env Var:     RCLONE_SIA_ENCODING
+- Type:        MultiEncoder
+- Default:     Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot
+
+## Limitations
+
+- Modification times not supported
+- Checksums not supported
+- `rclone about` not supported
+- rclone can work only with Siad or Sia-UI at the moment, the SkyNet daemon is not supported yet.
+- Sia does not allow control characters or symbols like question and pound signs in file names. rclone will transparently encode them for you, but you'd better be aware.

# Swift

Swift refers to OpenStack Object Storage. Commercial implementations of that include Rackspace Cloud Files, Memset Memstore, Blomp Cloud Storage and OVH Object Storage.

Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:container/path/to/dir.

## Configuration

Here is an example of making a swift configuration. First run

    rclone config

This will guide you through an interactive setup process.
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)
   \ "swift"
[snip]
Storage> swift
Get swift credentials from environment variables in standard OpenStack form.
Choose a number from below, or type in your own value
 1 / Enter swift credentials in the next step
   \ "false"
 2 / Get swift credentials from environment vars. Leave other fields blank if using this.
   \ "true"
env_auth> true
User name to log in (OS_USERNAME).
user>
API key or password (OS_PASSWORD).
key>
Authentication URL for server (OS_AUTH_URL).
Choose a number from below, or type in your own value
 1 / Rackspace US
   \ "https://auth.api.rackspacecloud.com/v1.0"
 2 / Rackspace UK
   \ "https://lon.auth.api.rackspacecloud.com/v1.0"
 3 / Rackspace v2
   \ "https://identity.api.rackspacecloud.com/v2.0"
 4 / Memset Memstore UK
   \ "https://auth.storage.memset.com/v1.0"
 5 / Memset Memstore UK v2
   \ "https://auth.storage.memset.com/v2.0"
 6 / OVH
   \ "https://auth.cloud.ovh.net/v3"
 7 / Blomp Cloud Storage
   \ "https://authenticate.ain.net"
auth>
User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
user_id>
User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
domain>
Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
tenant>
Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
tenant_id>
Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
tenant_domain>
Region name - optional (OS_REGION_NAME)
region>
Storage URL - optional (OS_STORAGE_URL)
storage_url>
Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
auth_token>
AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
auth_version>
Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
Choose a number from below, or type in your own value
 1 / Public (default, choose this if not sure)
   \ "public"
 2 / Internal (use internal service net)
   \ "internal"
 3 / Admin
   \ "admin"
endpoint_type>
Remote config
--------------------
[test]
env_auth = true
user =
key =
auth =
user_id =
domain =
tenant =
tenant_id =
tenant_domain =
region =
storage_url =
auth_token =
auth_version =
endpoint_type =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

This remote is called remote and can now be used like this

See all containers

    rclone lsd remote:

Make a new container

    rclone mkdir remote:container

List the contents of a container

    rclone ls remote:container

Sync /home/local/directory to the remote container, deleting any excess files in the container.

    rclone sync --interactive /home/local/directory remote:container

### Configuration from an OpenStack credentials file

An OpenStack credentials file typically looks something like this (without the comments)

    export OS_AUTH_URL=https://a.provider.net/v2.0
    export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
    export OS_TENANT_NAME="1234567890123456"
    export OS_USERNAME="123abc567xy"
    echo "Please enter your OpenStack Password: "
    read -sr OS_PASSWORD_INPUT
    export OS_PASSWORD=$OS_PASSWORD_INPUT
    export OS_REGION_NAME="SBG1"
    if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi

The config file needs to look something like this where $OS_USERNAME represents the value of the OS_USERNAME variable - 123abc567xy in the example above.

    [remote]
    type = swift
    user = $OS_USERNAME
    key = $OS_PASSWORD
    auth = $OS_AUTH_URL
    tenant = $OS_TENANT_NAME

Note that you may (or may not) need to set region too - try without first.

### Configuration from the environment

If you prefer you can configure rclone to use swift using a standard set of OpenStack environment variables.

When you run through the config, make sure you choose true for env_auth and leave everything else blank.

rclone will then set any empty config parameters from the environment using standard OpenStack environment variables. There is a list of the variables in the docs for the swift library.

### Using an alternate authentication method

If your OpenStack installation uses a non-standard authentication method that might not be yet supported by rclone or the underlying swift library, you can authenticate externally (e.g. calling manually the openstack commands to get a token). Then, you just need to pass the two configuration variables auth_token and storage_url. If they are both provided, the other variables are ignored. rclone will not try to authenticate but instead assume it is already authenticated and use these two variables to access the OpenStack installation.

### Using rclone without a config file

You can use rclone with swift without a config file, if desired, like this:

    source openstack-credentials-file
    export RCLONE_CONFIG_MYREMOTE_TYPE=swift
    export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
    rclone lsd myremote:

### --fast-list

This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

### --update and --use-server-modtime

As noted below, the modified time is stored on metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.

For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using --update along with --use-server-modtime, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.
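For example, a sync that relies on the upload time rather than the stored modification time (avoiding the extra metadata call per object; the paths are placeholders) could be:

    rclone sync --update --use-server-modtime /home/local/directory remote:container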

### Modified time

The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.

This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.

### Restricted filename characters

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| NUL       | 0x00  | ␀           |
| /         | 0x2F  | ／          |

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

### Standard options

Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).

#### --swift-env-auth

Get swift credentials from environment variables in standard OpenStack form.

Properties:

- Config:      env_auth
- Env Var:     RCLONE_SWIFT_ENV_AUTH
- Type:        bool
- Default:     false
- Examples:
    - "false"
        - Enter swift credentials in the next step.
    - "true"
        - Get swift credentials from environment vars.
        - Leave other fields blank if using this.

#### --swift-user

User name to log in (OS_USERNAME).

Properties:

- Config:      user
- Env Var:     RCLONE_SWIFT_USER
- Type:        string
- Required:    false

#### --swift-key

API key or password (OS_PASSWORD).

Properties:

- Config:      key
- Env Var:     RCLONE_SWIFT_KEY
- Type:        string
- Required:    false

#### --swift-auth

Authentication URL for server (OS_AUTH_URL).

Properties:

- Config:      auth
- Env Var:     RCLONE_SWIFT_AUTH
- Type:        string
- Required:    false
- Examples:
    - "https://auth.api.rackspacecloud.com/v1.0"
        - Rackspace US
    - "https://lon.auth.api.rackspacecloud.com/v1.0"
        - Rackspace UK
    - "https://identity.api.rackspacecloud.com/v2.0"
        - Rackspace v2
    - "https://auth.storage.memset.com/v1.0"
        - Memset Memstore UK
    - "https://auth.storage.memset.com/v2.0"
        - Memset Memstore UK v2
    - "https://auth.cloud.ovh.net/v3"
        - OVH
    - "https://authenticate.ain.net"
        - Blomp Cloud Storage

#### --swift-user-id

User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).

Properties:

- Config:      user_id
- Env Var:     RCLONE_SWIFT_USER_ID
- Type:        string
- Required:    false

#### --swift-domain

User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)

Properties:

- Config:      domain
- Env Var:     RCLONE_SWIFT_DOMAIN
- Type:        string
- Required:    false

#### --swift-tenant

Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME).

Properties:

- Config:      tenant
- Env Var:     RCLONE_SWIFT_TENANT
- Type:        string
- Required:    false

#### --swift-tenant-id

Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID).

Properties:

- Config:      tenant_id
- Env Var:     RCLONE_SWIFT_TENANT_ID
- Type:        string
- Required:    false

#### --swift-tenant-domain

Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME).

Properties:

- Config:      tenant_domain
- Env Var:     RCLONE_SWIFT_TENANT_DOMAIN
- Type:        string
- Required:    false

#### --swift-region

Region name - optional (OS_REGION_NAME).

Properties:

- Config:      region
- Env Var:     RCLONE_SWIFT_REGION
- Type:        string
- Required:    false

#### --swift-storage-url

Storage URL - optional (OS_STORAGE_URL).

Properties:

- Config:      storage_url
- Env Var:     RCLONE_SWIFT_STORAGE_URL
- Type:        string
- Required:    false

#### --swift-auth-token

Auth Token from alternate authentication - optional (OS_AUTH_TOKEN).

Properties:

- Config:      auth_token
- Env Var:     RCLONE_SWIFT_AUTH_TOKEN
- Type:        string
- Required:    false

#### --swift-application-credential-id

Application Credential ID (OS_APPLICATION_CREDENTIAL_ID).

Properties:

- Config:      application_credential_id
- Env Var:     RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID
- Type:        string
- Required:    false

#### --swift-application-credential-name

Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME).

Properties:

- Config:      application_credential_name
- Env Var:     RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME
- Type:        string
- Required:    false

#### --swift-application-credential-secret

Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET).

Properties:

- Config:      application_credential_secret
- Env Var:     RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET
- Type:        string
- Required:    false

#### --swift-auth-version

AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION).

Properties:

- Config:      auth_version
- Env Var:     RCLONE_SWIFT_AUTH_VERSION
- Type:        int
- Default:     0

#### --swift-endpoint-type

Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE).

Properties:

- Config:      endpoint_type
- Env Var:     RCLONE_SWIFT_ENDPOINT_TYPE
- Type:        string
- Default:     "public"
- Examples:
    - "public"
        - Public (default, choose this if not sure)
    - "internal"
        - Internal (use internal service net)
    - "admin"
        - Admin

#### --swift-storage-policy

The storage policy to use when creating a new container.

This applies the specified storage policy when creating a new container. The policy cannot be changed afterwards. The allowed configuration values and their meaning depend on your Swift storage provider.

Properties:

- Config:      storage_policy
- Env Var:     RCLONE_SWIFT_STORAGE_POLICY
- Type:        string
- Required:    false
- Examples:
    - ""
        - Default
    - "pcs"
        - OVH Public Cloud Storage
    - "pca"
        - OVH Public Cloud Archive

### Advanced options

Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).

#### --swift-leave-parts-on-error

If true avoid calling abort upload on a failure.

It should be set to true for resuming uploads across different sessions.

Properties:

- Config:      leave_parts_on_error
- Env Var:     RCLONE_SWIFT_LEAVE_PARTS_ON_ERROR
- Type:        bool
- Default:     false

#### --swift-chunk-size

Above this size files will be chunked into a _segments container.

Above this size files will be chunked into a _segments container. The default for this is 5 GiB which is its maximum value.

Properties:

- Config:      chunk_size
- Env Var:     RCLONE_SWIFT_CHUNK_SIZE
- Type:        SizeSuffix
- Default:     5Gi

#### --swift-no-chunk

Don't chunk files during streaming upload.

When doing streaming uploads (e.g. using rcat or mount) setting this flag will cause the swift backend to not upload chunked files.

This will limit the maximum upload size to 5 GiB. However non chunked files are easier to deal with and have an MD5SUM.

Rclone will still chunk files bigger than chunk_size when doing normal copy operations.

Properties:

- Config:      no_chunk
- Env Var:     RCLONE_SWIFT_NO_CHUNK
- Type:        bool
- Default:     false

#### --swift-no-large-objects

Disable support for static and dynamic large objects

Swift cannot transparently store files bigger than 5 GiB. There are two schemes for doing that, static or dynamic large objects, and the API does not allow rclone to determine whether a file is a static or dynamic large object without doing a HEAD on the object. Since these need to be treated differently, this means rclone has to issue HEAD requests for objects for example when reading checksums.

When no_large_objects is set, rclone will assume that there are no static or dynamic large objects stored. This means it can stop doing the extra HEAD calls which in turn increases performance greatly especially when doing a swift to swift transfer with --checksum set.

Setting this option implies no_chunk and also that no files will be uploaded in chunks, so files bigger than 5 GiB will just fail on upload.

If you set this option and there are static or dynamic large objects, then this will give incorrect hashes for them. Downloads will succeed, but other operations such as Remove and Copy will fail.

Properties:

- Config:      no_large_objects
- Env Var:     RCLONE_SWIFT_NO_LARGE_OBJECTS
- Type:        bool
- Default:     false

#### --swift-encoding

The encoding for the backend.

See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.

Properties:

- Config:      encoding
- Env Var:     RCLONE_SWIFT_ENCODING
- Type:        MultiEncoder
- Default:     Slash,InvalidUtf8
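For instance, if you are certain neither side holds static or dynamic large objects, a swift to swift transfer can skip the per-object HEAD requests described above (a sketch; the remote and container names are placeholders):

    rclone sync --checksum --swift-no-large-objects swift-src:container swift-dst:container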

## Limitations

The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.

## Troubleshooting

### Rclone gives Failed to create file system for "remote:": Bad Request

Due to an oddity of the underlying swift library, it gives a "Bad Request" error rather than a more sensible error when the authentication fails for Swift.

So this most likely means your username / password is wrong. You can investigate further with the --dump-bodies flag.

This may also be caused by specifying the region when you shouldn't have (e.g. OVH).

### Rclone gives Failed to create file system: Response didn't have storage url and auth token

This is most likely caused by forgetting to specify your tenant when setting up a swift remote.

## OVH Cloud Archive

To use rclone with OVH cloud archive, first use rclone config to set up a swift backend with OVH, choosing pca as the storage_policy.

### Uploading Objects

Uploading objects to OVH cloud archive is no different to object storage, you just simply run the command you like (move, copy or sync) to upload the objects. Once uploaded the objects will show in a "Frozen" state within the OVH control panel.

### Retrieving Objects

To retrieve objects use rclone copy as normal. If the objects are in a frozen state then rclone will ask for them all to be unfrozen and it will wait at the end of the output with a message like the following:

    2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)

Rclone will wait for the time specified then retry the copy.
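As a non-interactive sketch of that setup (the remote name is a placeholder and the OpenStack credentials are taken from the environment):

    rclone config create ovh-archive swift env_auth true storage_policy pca
    rclone copy /path/to/archive ovh-archive:container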

# pCloud

Paths are specified as remote:path

Paths may be as deep as required, e.g. remote:directory/subdirectory.

## Configuration

The initial setup for pCloud involves getting a token from pCloud which you need to do in your browser. rclone config walks you through it.

Here is an example of how to make a remote called remote. First run:

    rclone config

This will guide you through an interactive setup process:
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Pcloud
   \ "pcloud"
[snip]
Storage> pcloud
Pcloud App Client Id - leave blank normally.
client_id>
Pcloud App Client Secret - leave blank normally.
client_secret>
Remote config
Use web browser to automatically authenticate rclone with remote?
 * Say Y if the machine running rclone has a web browser you can use
 * Say N if running rclone on a (remote) machine without web browser access
If not sure try Y. If Y failed, try N.
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
client_id =
client_secret =
token = {"access_token":"XXX","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

See the remote setup docs for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from pCloud. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

Once configured you can then use rclone like this,

List directories in top level of your pCloud

    rclone lsd remote:

List all the files in your pCloud

    rclone ls remote:

To copy a local directory to a pCloud directory called backup

    rclone copy /home/source remote:backup

### Modified time and hashes

pCloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. In order to set a Modification time pCloud requires the object be re-uploaded.

pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and SHA256 hashes in the EU region, so you can use the --checksum flag.

### Restricted filename characters

In addition to the default restricted characters set the following characters are also replaced:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| \         | 0x5C  | ＼          |

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

### Deleting files

Deleted files will be moved to the trash. Your subscription level will determine how long items stay in the trash. rclone cleanup can be used to empty the trash.

### Emptying the trash

Due to an API limitation, the rclone cleanup command will only work if you set your username and password in the advanced options for this backend. Since we generally want to avoid storing user passwords in the rclone config file, we advise you to only set this up if you need the rclone cleanup command to work.

### Root folder ID

You can set the root_folder_id for rclone. This is the directory (identified by its Folder ID) that rclone considers to be the root of your pCloud drive.

Normally you will leave this blank and rclone will determine the correct root to use itself.

However you can set this to restrict rclone to a specific folder hierarchy.

In order to do this you will have to find the Folder ID of the directory you wish rclone to display. This will be the folder field of the URL when you open the relevant folder in the pCloud web interface.

So if the folder you want rclone to use has a URL which looks like https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid in the browser, then you use 5xxxxxxxx8 as the root_folder_id in the config.
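For example, to point an existing remote at that folder without re-running the interactive setup (a sketch reusing the placeholder folder ID from above):

    rclone config update remote root_folder_id 5xxxxxxxx8
    rclone lsd remote: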

### Standard options

Here are the Standard options specific to pcloud (Pcloud).

#### --pcloud-client-id

OAuth Client Id.

Leave blank normally.

Properties:

- Config:      client_id
- Env Var:     RCLONE_PCLOUD_CLIENT_ID
- Type:        string
- Required:    false

#### --pcloud-client-secret

OAuth Client Secret.

Leave blank normally.

Properties:

- Config:      client_secret
- Env Var:     RCLONE_PCLOUD_CLIENT_SECRET
- Type:        string
- Required:    false

### Advanced options

Here are the Advanced options specific to pcloud (Pcloud).

#### --pcloud-token

OAuth Access Token as a JSON blob.

Properties:

- Config:      token
- Env Var:     RCLONE_PCLOUD_TOKEN
- Type:        string
- Required:    false

#### --pcloud-auth-url

Auth server URL.

Leave blank to use the provider defaults.

Properties:

- Config:      auth_url
- Env Var:     RCLONE_PCLOUD_AUTH_URL
- Type:        string
- Required:    false

#### --pcloud-token-url

Token server url.

Leave blank to use the provider defaults.

Properties:

- Config:      token_url
- Env Var:     RCLONE_PCLOUD_TOKEN_URL
- Type:        string
- Required:    false

#### --pcloud-encoding

The encoding for the backend.

See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.

Properties:

- Config:      encoding
- Env Var:     RCLONE_PCLOUD_ENCODING
- Type:        MultiEncoder
- Default:     Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot

#### --pcloud-root-folder-id

Fill in for rclone to use a non root folder as its starting point.

Properties:

- Config:      root_folder_id
- Env Var:     RCLONE_PCLOUD_ROOT_FOLDER_ID
- Type:        string
- Default:     "d0"

#### --pcloud-hostname

Hostname to connect to.

This is normally set when rclone initially does the oauth connection, however you will need to set it by hand if you are using remote config with rclone authorize.

Properties:

- Config:      hostname
- Env Var:     RCLONE_PCLOUD_HOSTNAME
- Type:        string
- Default:     "api.pcloud.com"
- Examples:
    - "api.pcloud.com"
        - Original/US region
    - "eapi.pcloud.com"
        - EU region

#### --pcloud-username

Your pcloud username.

This is only required when you want to use the cleanup command. Due to a bug in the pcloud API the required API does not support OAuth authentication so we have to rely on user password authentication for it.

Properties:

- Config:      username
- Env Var:     RCLONE_PCLOUD_USERNAME
- Type:        string
- Required:    false

#### --pcloud-password

Your pcloud password.

**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).

Properties:

- Config:      password
- Env Var:     RCLONE_PCLOUD_PASSWORD
- Type:        string
- Required:    false
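Putting that together, a working rclone cleanup setup might look like this (a sketch; the username, password and remote name are placeholders, and --obscure stores the password obscured as required above):

    rclone config update remote username you@example.com
    rclone config update remote password YOUR_PASSWORD --obscure
    rclone cleanup remote: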

# PikPak

PikPak is a private cloud drive.

Paths are specified as remote:path, and may be as deep as required, e.g. remote:directory/subdirectory.

## Configuration

Here is an example of making a remote for PikPak.

First run:

    rclone config

This will guide you through an interactive setup process:
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n

Enter name for new remote.
name> remote

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
XX / PikPak
   \ (pikpak)
Storage> XX

Option user.
Pikpak username.
Enter a value.
user> USERNAME

Option pass.
Pikpak password.
Choose an alternative below.
y) Yes, type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:

Edit advanced config?
y) Yes
n) No (default)
y/n>

Configuration complete.
Options:
- type: pikpak
- user: USERNAME
- pass: *** ENCRYPTED ***
- token: {"access_token":"eyJ...","token_type":"Bearer","refresh_token":"os...","expiry":"2023-01-26T18:54:32.170582647+09:00"}
Keep this "remote" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```

        Standard options

        -

        Here are the Standard options specific to pikpak (PikPak).

        -

        --pikpak-user

        -

        Pikpak username.

        -

        Properties:

        • Config: user
        • Env Var: RCLONE_PIKPAK_USER
        • Type: string
        • Required: true

        --pikpak-pass

        -

        Pikpak password.

        -

        NB Input to this must be obscured - see rclone obscure.

        -

        Properties:

        • Config: pass
        • Env Var: RCLONE_PIKPAK_PASS
        • Type: string
        • Required: true

        Advanced options

        -

        Here are the Advanced options specific to pikpak (PikPak).

        -

        --pikpak-client-id

        -

        OAuth Client Id.

        -

        Leave blank normally.

        -

        Properties:

        • Config: client_id
        • Env Var: RCLONE_PIKPAK_CLIENT_ID
        • Type: string
        • Required: false

        --pikpak-client-secret

        -

        OAuth Client Secret.

        -

        Leave blank normally.

        -

        Properties:

        • Config: client_secret
        • Env Var: RCLONE_PIKPAK_CLIENT_SECRET
        • Type: string
        • Required: false

        --pikpak-token

        -

        OAuth Access Token as a JSON blob.

        -

        Properties:

        • Config: token
        • Env Var: RCLONE_PIKPAK_TOKEN
        • Type: string
        • Required: false

        --pikpak-auth-url

        -

        Auth server URL.

        -

        Leave blank to use the provider defaults.

        -

        Properties:

        • Config: auth_url
        • Env Var: RCLONE_PIKPAK_AUTH_URL
        • Type: string
        • Required: false

        --pikpak-token-url

        -

        Token server url.

        -

        Leave blank to use the provider defaults.

        -

        Properties:

        • Config: token_url
        • Env Var: RCLONE_PIKPAK_TOKEN_URL
        • Type: string
        • Required: false

        --pikpak-root-folder-id

        -

        ID of the root folder. Leave blank normally.

        -

        Fill in for rclone to use a non root folder as its starting point.

        -

        Properties:

        • Config: root_folder_id
        • Env Var: RCLONE_PIKPAK_ROOT_FOLDER_ID
        • Type: string
        • Required: false

        --pikpak-use-trash

        -

        Send files to the trash instead of deleting permanently.

        -

        Defaults to true, namely sending files to the trash. Use --pikpak-use-trash=false to delete files permanently instead.

        -

        Properties:

        • Config: use_trash
        • Env Var: RCLONE_PIKPAK_USE_TRASH
        • Type: bool
        • Default: true

        --pikpak-trashed-only

        -

        Only show files that are in the trash.

        -

        This will show trashed files in their original directory structure.

        -

        Properties:

        • Config: trashed_only
        • Env Var: RCLONE_PIKPAK_TRASHED_ONLY
        • Type: bool
        • Default: false

        --pikpak-hash-memory-limit

        -

        Files bigger than this will be cached on disk to calculate hash if required.

        -

        Properties:

        • Config: hash_memory_limit
        • Env Var: RCLONE_PIKPAK_HASH_MEMORY_LIMIT
        • Type: SizeSuffix
        • Default: 10Mi

        --pikpak-encoding

        -

        The encoding for the backend.

        -

        See the encoding section in the overview for more info.

        -

        Properties:

        • Config: encoding
        • Env Var: RCLONE_PIKPAK_ENCODING
        • Type: MultiEncoder
        • Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot

        Backend commands

        -

        Here are the commands specific to the pikpak backend.

        -

        Run them with

        -
        rclone backend COMMAND remote:
        -

        The help below will explain what arguments each command takes.

        -

        See the backend command for more info on how to pass options and arguments.

        -

        These can be run on a running backend using the rc command backend/command.

        -

        addurl

        -

        Add offline download task for url

        -
        rclone backend addurl remote: [options] [<arguments>+]
        -

        This command adds offline download task for url.

        -

        Usage:

        -
        rclone backend addurl pikpak:dirpath url
        -

        Downloads will be stored in 'dirpath'. If 'dirpath' is invalid, the download will fall back to the default 'My Pack' folder.

        -
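
        For example (the directory and URL here are hypothetical):

            rclone backend addurl pikpak:downloads "https://example.com/archive.zip"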

        decompress

        -

        Request decompress of a file/files in a folder

        -
        rclone backend decompress remote: [options] [<arguments>+]
        -

        This command requests decompress of file/files in a folder.

        -

        Usage:

        -
        rclone backend decompress pikpak:dirpath {filename} -o password=password
        -rclone backend decompress pikpak:dirpath {filename} -o delete-src-file
        -

        An optional argument 'filename' can be specified for a file located in 'pikpak:dirpath'. You may want to pass '-o password=password' for password-protected files. Also, pass '-o delete-src-file' to delete the source files once decompression has finished.

        -

        Result:

        -
        {
        -    "Decompressed": 17,
        -    "SourceDeleted": 0,
        -    "Errors": 0
        -}
        -

        Limitations

        -

        Hashes

        -

        PikPak supports MD5 hashes, but they are sometimes empty, especially for user-uploaded files.

        -

        Deleted files

        -

        Deleted files will still be visible with --pikpak-trashed-only even after the trash has been emptied. This goes away after a few days.

        -

        premiumize.me

        -

        Paths are specified as remote:path

        -

        Paths may be as deep as required, e.g. remote:directory/subdirectory.

        -

        Configuration

        -

        The initial setup for premiumize.me involves getting a token from premiumize.me which you need to do in your browser. rclone config walks you through it.

        -

        Here is an example of how to make a remote called remote. First run:

        -
         rclone config
        -

        This will guide you through an interactive setup process:

        -
        No remotes found, make a new one?
        -n) New remote
        -s) Set configuration password
        -q) Quit config
        -n/s/q> n
        -name> remote
        -Type of storage to configure.
        -Enter a string value. Press Enter for the default ("").
        -Choose a number from below, or type in your own value
        -[snip]
        -XX / premiumize.me
        -   \ "premiumizeme"
        -[snip]
        -Storage> premiumizeme
        -** See help for premiumizeme backend at: https://rclone.org/premiumizeme/ **
        +#### --sia-user-agent
         
        -Remote config
        -Use web browser to automatically authenticate rclone with remote?
        - * Say Y if the machine running rclone has a web browser you can use
        - * Say N if running rclone on a (remote) machine without web browser access
        -If not sure try Y. If Y failed, try N.
        -y) Yes
        -n) No
        -y/n> y
        -If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
        -Log in and authorize rclone for access
        -Waiting for code...
        -Got code
        ---------------------
        -[remote]
        -type = premiumizeme
        -token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2029-08-07T18:44:15.548915378+01:00"}
        ---------------------
        -y) Yes this is OK
        -e) Edit this remote
        -d) Delete this remote
        -y/e/d> 
        -

        See the remote setup docs for how to set it up on a machine with no Internet browser available.

        -

        Note that rclone runs a webserver on your local machine to collect the token as returned from premiumize.me. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

        -

        Once configured you can then use rclone like this,

        -

        List directories in top level of your premiumize.me

        -
        rclone lsd remote:
        -

        List all the files in your premiumize.me

        -
        rclone ls remote:
        -

        To copy a local directory to a premiumize.me directory called backup

        -
        rclone copy /home/source remote:backup
        -

        Modified time and hashes

        -

        premiumize.me does not support modification times or hashes, therefore syncing will default to --size-only checking. Note that using --update will work.

        -
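
        For example, to sync while transferring only files whose local modification time is newer:

            rclone sync --update /home/source remote:backup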

        Restricted filename characters

        -

        In addition to the default restricted characters set the following characters are also replaced:

        Character   Value   Replacement
        \           0x5C    ＼
        "           0x22    ＂
        -

        Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

        -

        Standard options

        -

        Here are the Standard options specific to premiumizeme (premiumize.me).

        -

        --premiumizeme-api-key

        -

        API Key.

        -

        This is not normally used - use oauth instead.

        -

        Properties:

        • Config: api_key
        • Env Var: RCLONE_PREMIUMIZEME_API_KEY
        • Type: string
        • Required: false

        Advanced options

        -

        Here are the Advanced options specific to premiumizeme (premiumize.me).

        -

        --premiumizeme-encoding

        -

        The encoding for the backend.

        -

        See the encoding section in the overview for more info.

        -

        Properties:

        • Config: encoding
        • Env Var: RCLONE_PREMIUMIZEME_ENCODING
        • Type: MultiEncoder
        • Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot

        Limitations

        -

        Note that premiumize.me is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

        -

        premiumize.me file names can't have the \ or " characters in. rclone maps these to and from the identical-looking unicode equivalents ＼ and ＂.

        -

        premiumize.me only supports filenames up to 255 characters in length.

        -

        put.io

        -

        Paths are specified as remote:path

        -

        put.io paths may be as deep as required, e.g. remote:directory/subdirectory.

        -

        Configuration

        -

        The initial setup for put.io involves getting a token from put.io which you need to do in your browser. rclone config walks you through it.

        -

        Here is an example of how to make a remote called remote. First run:

        -
         rclone config
        -

        This will guide you through an interactive setup process:

        -
        No remotes found, make a new one?
        -n) New remote
        -s) Set configuration password
        -q) Quit config
        -n/s/q> n
        -name> putio
        -Type of storage to configure.
        -Enter a string value. Press Enter for the default ("").
        -Choose a number from below, or type in your own value
        -[snip]
        -XX / Put.io
        -   \ "putio"
        -[snip]
        -Storage> putio
        -** See help for putio backend at: https://rclone.org/putio/ **
        +Siad User Agent
         
        -Remote config
        -Use web browser to automatically authenticate rclone with remote?
        - * Say Y if the machine running rclone has a web browser you can use
        - * Say N if running rclone on a (remote) machine without web browser access
        -If not sure try Y. If Y failed, try N.
        -y) Yes
        -n) No
        -y/n> y
        -If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
        -Log in and authorize rclone for access
        -Waiting for code...
        -Got code
        ---------------------
        -[putio]
        -type = putio
        -token = {"access_token":"XXXXXXXX","expiry":"0001-01-01T00:00:00Z"}
        ---------------------
        -y) Yes this is OK
        -e) Edit this remote
        -d) Delete this remote
        -y/e/d> y
        -Current remotes:
        +Sia daemon requires the 'Sia-Agent' user agent by default for security
         
        -Name                 Type
        -====                 ====
        -putio                putio
        +Properties:
         
        -e) Edit existing remote
        -n) New remote
        -d) Delete remote
        -r) Rename remote
        -c) Copy remote
        -s) Set configuration password
        -q) Quit config
        -e/n/d/r/c/s/q> q
        -

        See the remote setup docs for how to set it up on a machine with no Internet browser available.

        -

        Note that rclone runs a webserver on your local machine to collect the token as returned from put.io if using the web browser to automatically authenticate. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.

        -

        You can then use it like this,

        -

        List directories in top level of your put.io

        -
        rclone lsd remote:
        -

        List all the files in your put.io

        -
        rclone ls remote:
        -

        To copy a local directory to a put.io directory called backup

        -
        rclone copy /home/source remote:backup
        -

        Restricted filename characters

        -

        In addition to the default restricted characters set the following characters are also replaced:

        Character   Value   Replacement
        \           0x5C    ＼
        -

        Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

        -

        Advanced options

        -

        Here are the Advanced options specific to putio (Put.io).

        -

        --putio-encoding

        -

        The encoding for the backend.

        -

        See the encoding section in the overview for more info.

        -

        Properties:

        • Config: encoding
        • Env Var: RCLONE_PUTIO_ENCODING
        • Type: MultiEncoder
        • Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot

        Limitations

        -

        put.io has rate limiting. When you hit a limit, rclone automatically retries after waiting the amount of time requested by the server.

        -

        If you want to avoid ever hitting these limits, you may use the --tpslimit flag with a low number. Note that the imposed limits may be different for different operations, and may change over time.

        -
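
        For example (the limit of 2 transactions per second is purely illustrative):

            rclone copy --tpslimit 2 /home/source remote:backup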

        Seafile

        -

        This is a backend for the Seafile storage service:

        • It works with both the free community edition and the professional edition.
        • Seafile versions 6.x, 7.x, 8.x and 9.x are all supported.
        • Encrypted libraries are also supported.
        • It supports 2FA-enabled users.
        • Using a Library API Token is not supported.

        -

        Configuration

        -

        There are two distinct modes you can set up your remote:

        • You point your remote to the root of the server, meaning you don't specify a library during the configuration. Paths are specified as remote:library. You may put subdirectories in too, e.g. remote:library/path/to/dir.
        • You point your remote to a specific library during the configuration. Paths are specified as remote:path/to/dir. This is the recommended mode when using encrypted libraries. (This mode is possibly slightly faster than the root mode.)

        -

        Configuration in root mode

        -

        Here is an example of making a seafile configuration for a user with no two-factor authentication. First run

        -
        rclone config
        -

        This will guide you through an interactive setup process. To authenticate you will need the URL of your server, your email (or username) and your password.

        -
        No remotes found, make a new one?
        -n) New remote
        -s) Set configuration password
        -q) Quit config
        -n/s/q> n
        -name> seafile
        -Type of storage to configure.
        -Enter a string value. Press Enter for the default ("").
        -Choose a number from below, or type in your own value
        -[snip]
        -XX / Seafile
        -   \ "seafile"
        -[snip]
        -Storage> seafile
        -** See help for seafile backend at: https://rclone.org/seafile/ **
        +- Config:      user_agent
        +- Env Var:     RCLONE_SIA_USER_AGENT
        +- Type:        string
        +- Default:     "Sia-Agent"
         
        -URL of seafile host to connect to
        -Enter a string value. Press Enter for the default ("").
        -Choose a number from below, or type in your own value
        - 1 / Connect to cloud.seafile.com
        -   \ "https://cloud.seafile.com/"
        -url> http://my.seafile.server/
        -User name (usually email address)
        -Enter a string value. Press Enter for the default ("").
        -user> me@example.com
        -Password
        -y) Yes type in my own password
        -g) Generate random password
        -n) No leave this optional password blank (default)
        -y/g> y
        -Enter the password:
        -password:
        -Confirm the password:
        -password:
        -Two-factor authentication ('true' if the account has 2FA enabled)
        -Enter a boolean value (true or false). Press Enter for the default ("false").
        -2fa> false
        -Name of the library. Leave blank to access all non-encrypted libraries.
        -Enter a string value. Press Enter for the default ("").
        -library>
        -Library password (for encrypted libraries only). Leave blank if you pass it through the command line.
        -y) Yes type in my own password
        -g) Generate random password
        -n) No leave this optional password blank (default)
        -y/g/n> n
        -Edit advanced config? (y/n)
        -y) Yes
        -n) No (default)
        -y/n> n
        -Remote config
        -Two-factor authentication is not enabled on this account.
        ---------------------
        -[seafile]
        -type = seafile
        -url = http://my.seafile.server/
        -user = me@example.com
        -pass = *** ENCRYPTED ***
        -2fa = false
        ---------------------
        -y) Yes this is OK (default)
        -e) Edit this remote
        -d) Delete this remote
        -y/e/d> y
        -

        This remote is called seafile. It's pointing to the root of your seafile server and can now be used like this:

        -

        See all libraries

        -
        rclone lsd seafile:
        -

        Create a new library

        -
        rclone mkdir seafile:library
        -

        List the contents of a library

        -
        rclone ls seafile:library
        -

        Sync /home/local/directory to the remote library, deleting any excess files in the library.

        -
        rclone sync --interactive /home/local/directory seafile:library
        -

        Configuration in library mode

        -

        Here's an example of a configuration in library mode with a user that has two-factor authentication enabled. You will be asked for your 2FA code at the end of the configuration, and rclone will attempt to authenticate you:

        -
        No remotes found, make a new one?
        -n) New remote
        -s) Set configuration password
        -q) Quit config
        -n/s/q> n
        -name> seafile
        -Type of storage to configure.
        -Enter a string value. Press Enter for the default ("").
        -Choose a number from below, or type in your own value
        -[snip]
        -XX / Seafile
        -   \ "seafile"
        -[snip]
        -Storage> seafile
        -** See help for seafile backend at: https://rclone.org/seafile/ **
        +#### --sia-encoding
         
        -URL of seafile host to connect to
        -Enter a string value. Press Enter for the default ("").
        -Choose a number from below, or type in your own value
        - 1 / Connect to cloud.seafile.com
        -   \ "https://cloud.seafile.com/"
        -url> http://my.seafile.server/
        -User name (usually email address)
        -Enter a string value. Press Enter for the default ("").
        -user> me@example.com
        -Password
        -y) Yes type in my own password
        -g) Generate random password
        -n) No leave this optional password blank (default)
        -y/g> y
        -Enter the password:
        -password:
        -Confirm the password:
        -password:
        -Two-factor authentication ('true' if the account has 2FA enabled)
        -Enter a boolean value (true or false). Press Enter for the default ("false").
        -2fa> true
        -Name of the library. Leave blank to access all non-encrypted libraries.
        -Enter a string value. Press Enter for the default ("").
        -library> My Library
        -Library password (for encrypted libraries only). Leave blank if you pass it through the command line.
        -y) Yes type in my own password
        -g) Generate random password
        -n) No leave this optional password blank (default)
        -y/g/n> n
        -Edit advanced config? (y/n)
        -y) Yes
        -n) No (default)
        -y/n> n
        -Remote config
        -Two-factor authentication: please enter your 2FA code
        -2fa code> 123456
        -Authenticating...
        -Success!
        ---------------------
        -[seafile]
        -type = seafile
        -url = http://my.seafile.server/
        -user = me@example.com
        -pass = 
        -2fa = true
        -library = My Library
        ---------------------
        -y) Yes this is OK (default)
        -e) Edit this remote
        -d) Delete this remote
        -y/e/d> y
        -

        You'll notice your password is blank in the configuration. It's because we only need the password to authenticate you once.

        -

        You specified My Library during the configuration. The root of the remote is pointing at the root of the library My Library:

        -

        See all files in the library:

        -
        rclone lsd seafile:
        -

        Create a new directory inside the library

        -
        rclone mkdir seafile:directory
        -

        List the contents of a directory

        -
        rclone ls seafile:directory
        -

        Sync /home/local/directory to the remote library, deleting any excess files in the library.

        -
        rclone sync --interactive /home/local/directory seafile:
        -

        --fast-list

        -

        Seafile version 7+ supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details. Please note this is not supported on seafile server version 6.x

        -
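
        For example:

            rclone ls --fast-list seafile:library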

        Restricted filename characters

        -

        In addition to the default restricted characters set the following characters are also replaced:

        Character   Value   Replacement
        /           0x2F    ／
        "           0x22    ＂
        \           0x5C    ＼
        -

        Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

        Seafile and rclone link

        Rclone supports generating share links for non-encrypted libraries only. They can either be for a file or a directory:

        -
        rclone link seafile:seafile-tutorial.doc
        -http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/
        +The encoding for the backend.
        +
        +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
        +
        +Properties:
        +
        +- Config:      encoding
        +- Env Var:     RCLONE_SIA_ENCODING
        +- Type:        MultiEncoder
        +- Default:     Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot
        +
        +
        +
        +## Limitations
        +
        +- Modification times not supported
        +- Checksums not supported
        +- `rclone about` not supported
+- rclone can work only with _Siad_ or _Sia-UI_ at the moment;
+  the **SkyNet daemon is not supported yet.**
+- Sia does not allow control characters or symbols like question and pound
+  signs in file names. rclone will transparently [encode](https://rclone.org/overview/#encoding)
+  them for you, but you should be aware of it
        +
        +#  Swift
        +
        +Swift refers to [OpenStack Object Storage](https://docs.openstack.org/swift/latest/).
+Commercial implementations of that include:
        +
        +  * [Rackspace Cloud Files](https://www.rackspace.com/cloud/files/)
        +  * [Memset Memstore](https://www.memset.com/cloud/storage/)
        +  * [OVH Object Storage](https://www.ovh.co.uk/public-cloud/storage/object-storage/)
        +  * [Oracle Cloud Storage](https://docs.oracle.com/en-us/iaas/integration/doc/configure-object-storage.html)
        +  * [Blomp Cloud Storage](https://www.blomp.com/cloud-storage/)
        +  * [IBM Bluemix Cloud ObjectStorage Swift](https://console.bluemix.net/docs/infrastructure/objectstorage-swift/index.html)
        +
+Paths are specified as `remote:container` (or `remote:` for the `lsd`
+command). You may put subdirectories in too, e.g. `remote:container/path/to/dir`.
        +
        +## Configuration
        +
        +Here is an example of making a swift configuration.  First run
        +
        +    rclone config
        +
        +This will guide you through an interactive setup process.
         
        -

        or if run on a directory you will get:

        -
        rclone link seafile:dir
        -http://my.seafile.server/d/9ea2455f6f55478bbb0d/
        -

        Please note a share link is unique for each file or directory. If you run a link command on a file/dir that has already been shared, you will get the exact same link.

        -

        Compatibility

        -

        It has been actively developed using the seafile docker image of these versions:

        • 6.3.4 community edition
        • 7.0.5 community edition
        • 7.1.3 community edition
        • 9.0.10 community edition

        -

        Versions below 6.0 are not supported. Versions between 6.0 and 6.3 haven't been tested and might not work properly.

        -

        Each new version of rclone is automatically tested against the latest docker image of the seafile community server.

        -

        Standard options

        -

        Here are the Standard options specific to seafile (seafile).

        -

        --seafile-url

        -

        URL of seafile host to connect to.

        -

        Properties:

        • Config: url
        • Env Var: RCLONE_SEAFILE_URL
        • Type: string
        • Required: true
        • Examples:
          • "https://cloud.seafile.com/"
            • Connect to cloud.seafile.com.

        --seafile-user

        -

        User name (usually email address).

        -

        Properties:

        • Config: user
        • Env Var: RCLONE_SEAFILE_USER
        • Type: string
        • Required: true

        --seafile-pass

        -

        Password.

        -

        NB Input to this must be obscured - see rclone obscure.

        -

        Properties:

        • Config: pass
        • Env Var: RCLONE_SEAFILE_PASS
        • Type: string
        • Required: false

        --seafile-2fa

        -

        Two-factor authentication ('true' if the account has 2FA enabled).

        -

        Properties:

        • Config: 2fa
        • Env Var: RCLONE_SEAFILE_2FA
        • Type: bool
        • Default: false

        --seafile-library

        -

        Name of the library.

        -

        Leave blank to access all non-encrypted libraries.

        -

        Properties:

        • Config: library
        • Env Var: RCLONE_SEAFILE_LIBRARY
        • Type: string
        • Required: false

        --seafile-library-key

        -

        Library password (for encrypted libraries only).

        -

        Leave blank if you pass it through the command line.

        -

        NB Input to this must be obscured - see rclone obscure.

        -

        Properties:

        • Config: library_key
        • Env Var: RCLONE_SEAFILE_LIBRARY_KEY
        • Type: string
        • Required: false

        --seafile-auth-token

        -

        Authentication token.

        -

        Properties:

        • Config: auth_token
        • Env Var: RCLONE_SEAFILE_AUTH_TOKEN
        • Type: string
        • Required: false

        Advanced options

        -

        Here are the Advanced options specific to seafile (seafile).

        -

        --seafile-create-library

        -

        Should rclone create a library if it doesn't exist.

        -

        Properties:

        • Config: create_library
        • Env Var: RCLONE_SEAFILE_CREATE_LIBRARY
        • Type: bool
        • Default: false

        --seafile-encoding

        -

        The encoding for the backend.

        -

        See the encoding section in the overview for more info.

        -

        Properties:

        • Config: encoding
        • Env Var: RCLONE_SEAFILE_ENCODING
        • Type: MultiEncoder
        • Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8

        SFTP

        -

        SFTP is the Secure (or SSH) File Transfer Protocol.

        -

        The SFTP backend can be used with a number of different providers:

        • Hetzner Storage Box
        • rsync.net

        SFTP runs over SSH v2 and is installed as standard with most modern SSH installations.

        -

        Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory. For example, rclone lsd remote: would list the home directory of the user configured in the rclone remote config (i.e. /home/sftpuser). However, rclone lsd remote:/ would list the root directory of the remote machine (i.e. /).

        -

        Note that some SFTP servers will need the leading / - Synology is a good example of this. rsync.net and Hetzner, on the other hand, require users to OMIT the leading /.

        -
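
        For example:

            rclone lsd remote:    # lists the home directory of the configured user
            rclone lsd remote:/   # lists the root directory of the remote machine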

        Note that by default rclone will try to execute shell commands on the server, see shell access considerations.

        -

        Configuration

        -

        Here is an example of making an SFTP configuration. First run

        -
        rclone config
        -

        This will guide you through an interactive setup process.

        -
        No remotes found, make a new one?
        -n) New remote
        -s) Set configuration password
        -q) Quit config
        -n/s/q> n
        -name> remote
        -Type of storage to configure.
        -Choose a number from below, or type in your own value
        -[snip]
        -XX / SSH/SFTP
        -   \ "sftp"
        -[snip]
        -Storage> sftp
        -SSH host to connect to
        -Choose a number from below, or type in your own value
        - 1 / Connect to example.com
        -   \ "example.com"
        -host> example.com
        -SSH username
        -Enter a string value. Press Enter for the default ("$USER").
        -user> sftpuser
        -SSH port number
        -Enter a signed integer. Press Enter for the default (22).
        -port>
        -SSH password, leave blank to use ssh-agent.
        -y) Yes type in my own password
        -g) Generate random password
        -n) No leave this optional password blank
        -y/g/n> n
        -Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
        -key_file>
        -Remote config
        ---------------------
        +

        No remotes found, make a new one?
        n) New remote
        s) Set configuration password
        q) Quit config
        n/s/q> n
        name> remote
        Type of storage to configure.
        Choose a number from below, or type in your own value
        [snip]
        XX / OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)
           \ "swift"
        [snip]
        Storage> swift
        Get swift credentials from environment variables in standard OpenStack form.
        Choose a number from below, or type in your own value
         1 / Enter swift credentials in the next step
           \ "false"
         2 / Get swift credentials from environment vars. Leave other fields blank if using this.
           \ "true"
        env_auth> true
        User name to log in (OS_USERNAME).
        user>
        API key or password (OS_PASSWORD).
        key>
        Authentication URL for server (OS_AUTH_URL).
        Choose a number from below, or type in your own value
         1 / Rackspace US
           \ "https://auth.api.rackspacecloud.com/v1.0"
         2 / Rackspace UK
           \ "https://lon.auth.api.rackspacecloud.com/v1.0"
         3 / Rackspace v2
           \ "https://identity.api.rackspacecloud.com/v2.0"
         4 / Memset Memstore UK
           \ "https://auth.storage.memset.com/v1.0"
         5 / Memset Memstore UK v2
           \ "https://auth.storage.memset.com/v2.0"
         6 / OVH
           \ "https://auth.cloud.ovh.net/v3"
         7 / Blomp Cloud Storage
           \ "https://authenticate.ain.net"
        auth>
        User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
        user_id>
        User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
        domain>
        Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
        tenant>
        Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
        tenant_id>
        Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
        tenant_domain>
        Region name - optional (OS_REGION_NAME)
        region>
        Storage URL - optional (OS_STORAGE_URL)
        storage_url>
        Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
        auth_token>
        AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
        auth_version>
        Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
        Choose a number from below, or type in your own value
         1 / Public (default, choose this if not sure)
           \ "public"
         2 / Internal (use internal service net)
           \ "internal"
         3 / Admin
           \ "admin"
        endpoint_type>
        Remote config
        --------------------
        [test]
        env_auth = true
        user =
        key =
        auth =
        user_id =
        domain =
        tenant =
        tenant_id =
        tenant_domain =
        region =
        storage_url =
        auth_token =
        auth_version =
        endpoint_type =
        --------------------
        y) Yes this is OK
        e) Edit this remote
        d) Delete this remote
        y/e/d> y

        +
        
        +This remote is called `remote` and can now be used like this
        +
        +See all containers
        +
        +    rclone lsd remote:
        +
        +Make a new container
        +
        +    rclone mkdir remote:container
        +
        +List the contents of a container
        +
        +    rclone ls remote:container
        +
        +Sync `/home/local/directory` to the remote container, deleting any
        +excess files in the container.
        +
        +    rclone sync --interactive /home/local/directory remote:container
        +
        +### Configuration from an OpenStack credentials file
        +
+An OpenStack credentials file typically looks something
+like this (without the comments)
        +
        +

        export OS_AUTH_URL=https://a.provider.net/v2.0
        export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
        export OS_TENANT_NAME="1234567890123456"
        export OS_USERNAME="123abc567xy"
        echo "Please enter your OpenStack Password: "
        read -sr OS_PASSWORD_INPUT
        export OS_PASSWORD=$OS_PASSWORD_INPUT
        export OS_REGION_NAME="SBG1"
        if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi

        +
        
        +The config file needs to look something like this where `$OS_USERNAME`
        +represents the value of the `OS_USERNAME` variable - `123abc567xy` in
        +the example above.
        +
        +

        [remote]
        type = swift
        user = $OS_USERNAME
        key = $OS_PASSWORD
        auth = $OS_AUTH_URL
        tenant = $OS_TENANT_NAME

        +
        
        +Note that you may (or may not) need to set `region` too - try without first.
        +
        +### Configuration from the environment
        +
        +If you prefer you can configure rclone to use swift using a standard
        +set of OpenStack environment variables.
        +
        +When you run through the config, make sure you choose `true` for
        +`env_auth` and leave everything else blank.
        +
        +rclone will then set any empty config parameters from the environment
        +using standard OpenStack environment variables.  There is [a list of
        +the
        +variables](https://godoc.org/github.com/ncw/swift#Connection.ApplyEnvironment)
        +in the docs for the swift library.
        +
        +### Using an alternate authentication method
        +
        +If your OpenStack installation uses a non-standard authentication method
        +that might not be yet supported by rclone or the underlying swift library, 
        +you can authenticate externally (e.g. calling manually the `openstack` 
        +commands to get a token). Then, you just need to pass the two 
        +configuration variables ``auth_token`` and ``storage_url``. 
        +If they are both provided, the other variables are ignored. rclone will 
        +not try to authenticate but instead assume it is already authenticated 
        +and use these two variables to access the OpenStack installation.
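+
+As a hedged sketch (the two values below are placeholders for whatever
+your `openstack` commands return), such an externally authenticated
+remote might look like:
+
+    [remote]
+    type = swift
+    auth_token = <token from the openstack commands>
+    storage_url = <storage URL from the openstack commands>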
        +
        +#### Using rclone without a config file
        +
        +You can use rclone with swift without a config file, if desired, like
        +this:
        +
        +

        source openstack-credentials-file
        export RCLONE_CONFIG_MYREMOTE_TYPE=swift
        export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
        rclone lsd myremote:

        +
        
        +### --fast-list
        +
        +This remote supports `--fast-list` which allows you to use fewer
        +transactions in exchange for more memory. See the [rclone
        +docs](https://rclone.org/docs/#fast-list) for more details.
        +
        +### --update and --use-server-modtime
        +
        +As noted below, the modified time is stored on metadata on the object. It is
        +used by default for all operations that require checking the time a file was
        +last updated. It allows rclone to treat the remote more like a true filesystem,
        +but it is inefficient because it requires an extra API call to retrieve the
        +metadata.
        +
        +For many operations, the time the object was last uploaded to the remote is
        +sufficient to determine if it is "dirty". By using `--update` along with
        +`--use-server-modtime`, you can avoid the extra API call and simply upload
        +files whose local modtime is newer than the time it was last uploaded.
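+
+For example, reusing the container from the examples above:
+
+    rclone sync --update --use-server-modtime /home/local/directory remote:container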
        +
        +### Modified time
        +
        +The modified time is stored as metadata on the object as
        +`X-Object-Meta-Mtime` as floating point since the epoch accurate to 1
        +ns.
        +
        +This is a de facto standard (used in the official python-swiftclient
        +amongst others) for storing the modification time for an object.
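+
+As an illustrative sketch (the timestamp value is made up), the stored
+metadata looks like:
+
+    X-Object-Meta-Mtime: 1694440245.123456789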
        +
        +### Restricted filename characters
        +
        +| Character | Value | Replacement |
        +| --------- |:-----:|:-----------:|
        +| NUL       | 0x00  | ␀           |
+| /         | 0x2F  | ／          |
        +
        +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
        +as they can't be used in JSON strings.
        +
        +
        +### Standard options
        +
        +Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).
        +
        +#### --swift-env-auth
        +
        +Get swift credentials from environment variables in standard OpenStack form.
        +
        +Properties:
        +
        +- Config:      env_auth
        +- Env Var:     RCLONE_SWIFT_ENV_AUTH
        +- Type:        bool
        +- Default:     false
        +- Examples:
        +    - "false"
        +        - Enter swift credentials in the next step.
        +    - "true"
        +        - Get swift credentials from environment vars.
        +        - Leave other fields blank if using this.
        +
        +#### --swift-user
        +
        +User name to log in (OS_USERNAME).
        +
        +Properties:
        +
        +- Config:      user
        +- Env Var:     RCLONE_SWIFT_USER
        +- Type:        string
        +- Required:    false
        +
        +#### --swift-key
        +
        +API key or password (OS_PASSWORD).
        +
        +Properties:
        +
        +- Config:      key
        +- Env Var:     RCLONE_SWIFT_KEY
        +- Type:        string
        +- Required:    false
        +
        +#### --swift-auth
        +
        +Authentication URL for server (OS_AUTH_URL).
        +
        +Properties:
        +
        +- Config:      auth
        +- Env Var:     RCLONE_SWIFT_AUTH
        +- Type:        string
        +- Required:    false
        +- Examples:
        +    - "https://auth.api.rackspacecloud.com/v1.0"
        +        - Rackspace US
        +    - "https://lon.auth.api.rackspacecloud.com/v1.0"
        +        - Rackspace UK
        +    - "https://identity.api.rackspacecloud.com/v2.0"
        +        - Rackspace v2
        +    - "https://auth.storage.memset.com/v1.0"
        +        - Memset Memstore UK
        +    - "https://auth.storage.memset.com/v2.0"
        +        - Memset Memstore UK v2
        +    - "https://auth.cloud.ovh.net/v3"
        +        - OVH
        +    - "https://authenticate.ain.net"
        +        - Blomp Cloud Storage
        +
        +#### --swift-user-id
        +
        +User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
        +
        +Properties:
        +
        +- Config:      user_id
        +- Env Var:     RCLONE_SWIFT_USER_ID
        +- Type:        string
        +- Required:    false
        +
        +#### --swift-domain
        +
        +User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
        +
        +Properties:
        +
        +- Config:      domain
        +- Env Var:     RCLONE_SWIFT_DOMAIN
        +- Type:        string
        +- Required:    false
        +
        +#### --swift-tenant
        +
        +Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME).
        +
        +Properties:
        +
        +- Config:      tenant
        +- Env Var:     RCLONE_SWIFT_TENANT
        +- Type:        string
        +- Required:    false
        +
        +#### --swift-tenant-id
        +
        +Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID).
        +
        +Properties:
        +
        +- Config:      tenant_id
        +- Env Var:     RCLONE_SWIFT_TENANT_ID
        +- Type:        string
        +- Required:    false
        +
        +#### --swift-tenant-domain
        +
        +Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME).
        +
        +Properties:
        +
        +- Config:      tenant_domain
        +- Env Var:     RCLONE_SWIFT_TENANT_DOMAIN
        +- Type:        string
        +- Required:    false
        +
        +#### --swift-region
        +
        +Region name - optional (OS_REGION_NAME).
        +
        +Properties:
        +
        +- Config:      region
        +- Env Var:     RCLONE_SWIFT_REGION
        +- Type:        string
        +- Required:    false
        +
        +#### --swift-storage-url
        +
        +Storage URL - optional (OS_STORAGE_URL).
        +
        +Properties:
        +
        +- Config:      storage_url
        +- Env Var:     RCLONE_SWIFT_STORAGE_URL
        +- Type:        string
        +- Required:    false
        +
        +#### --swift-auth-token
        +
        +Auth Token from alternate authentication - optional (OS_AUTH_TOKEN).
        +
        +Properties:
        +
        +- Config:      auth_token
        +- Env Var:     RCLONE_SWIFT_AUTH_TOKEN
        +- Type:        string
        +- Required:    false
        +
        +#### --swift-application-credential-id
        +
        +Application Credential ID (OS_APPLICATION_CREDENTIAL_ID).
        +
        +Properties:
        +
        +- Config:      application_credential_id
        +- Env Var:     RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID
        +- Type:        string
        +- Required:    false
        +
        +#### --swift-application-credential-name
        +
        +Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME).
        +
        +Properties:
        +
        +- Config:      application_credential_name
        +- Env Var:     RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME
        +- Type:        string
        +- Required:    false
        +
        +#### --swift-application-credential-secret
        +
        +Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET).
        +
        +Properties:
        +
        +- Config:      application_credential_secret
        +- Env Var:     RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET
        +- Type:        string
        +- Required:    false
        +
        +#### --swift-auth-version
        +
        +AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION).
        +
        +Properties:
        +
        +- Config:      auth_version
        +- Env Var:     RCLONE_SWIFT_AUTH_VERSION
        +- Type:        int
        +- Default:     0
        +
        +#### --swift-endpoint-type
        +
        +Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE).
        +
        +Properties:
        +
        +- Config:      endpoint_type
        +- Env Var:     RCLONE_SWIFT_ENDPOINT_TYPE
        +- Type:        string
        +- Default:     "public"
        +- Examples:
        +    - "public"
        +        - Public (default, choose this if not sure)
        +    - "internal"
        +        - Internal (use internal service net)
        +    - "admin"
        +        - Admin
        +
        +#### --swift-storage-policy
        +
        +The storage policy to use when creating a new container.
        +
        +This applies the specified storage policy when creating a new
        +container. The policy cannot be changed afterwards. The allowed
        +configuration values and their meaning depend on your Swift storage
        +provider.
        +
        +Properties:
        +
        +- Config:      storage_policy
        +- Env Var:     RCLONE_SWIFT_STORAGE_POLICY
        +- Type:        string
        +- Required:    false
        +- Examples:
        +    - ""
        +        - Default
        +    - "pcs"
        +        - OVH Public Cloud Storage
        +    - "pca"
        +        - OVH Public Cloud Archive
        +
        +### Advanced options
        +
        +Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).
        +
        +#### --swift-leave-parts-on-error
        +
        +If true avoid calling abort upload on a failure.
        +
        +It should be set to true for resuming uploads across different sessions.
        +
        +Properties:
        +
        +- Config:      leave_parts_on_error
        +- Env Var:     RCLONE_SWIFT_LEAVE_PARTS_ON_ERROR
        +- Type:        bool
        +- Default:     false
        +
        +#### --swift-chunk-size
        +
        +Above this size files will be chunked into a _segments container.
        +
        +Above this size files will be chunked into a _segments container.  The
        +default for this is 5 GiB which is its maximum value.
        +
        +Properties:
        +
        +- Config:      chunk_size
        +- Env Var:     RCLONE_SWIFT_CHUNK_SIZE
        +- Type:        SizeSuffix
        +- Default:     5Gi
        +
        +#### --swift-no-chunk
        +
        +Don't chunk files during streaming upload.
        +
        +When doing streaming uploads (e.g. using rcat or mount) setting this
        +flag will cause the swift backend to not upload chunked files.
        +
        +This will limit the maximum upload size to 5 GiB. However non chunked
        +files are easier to deal with and have an MD5SUM.
        +
        +Rclone will still chunk files bigger than chunk_size when doing normal
        +copy operations.
        +
        +Properties:
        +
        +- Config:      no_chunk
        +- Env Var:     RCLONE_SWIFT_NO_CHUNK
        +- Type:        bool
        +- Default:     false
        +
        +#### --swift-no-large-objects
        +
        +Disable support for static and dynamic large objects
        +
        +Swift cannot transparently store files bigger than 5 GiB. There are
        +two schemes for doing that, static or dynamic large objects, and the
        +API does not allow rclone to determine whether a file is a static or
        +dynamic large object without doing a HEAD on the object. Since these
        +need to be treated differently, this means rclone has to issue HEAD
        +requests for objects for example when reading checksums.
        +
        +When `no_large_objects` is set, rclone will assume that there are no
        +static or dynamic large objects stored. This means it can stop doing
        +the extra HEAD calls which in turn increases performance greatly
        +especially when doing a swift to swift transfer with `--checksum` set.
        +
        +Setting this option implies `no_chunk` and also that no files will be
        +uploaded in chunks, so files bigger than 5 GiB will just fail on
        +upload.
        +
        +If you set this option and there *are* static or dynamic large objects,
        +then this will give incorrect hashes for them. Downloads will succeed,
        +but other operations such as Remove and Copy will fail.
        +
        +
        +Properties:
        +
        +- Config:      no_large_objects
        +- Env Var:     RCLONE_SWIFT_NO_LARGE_OBJECTS
        +- Type:        bool
        +- Default:     false
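+
+As a hedged example (the remote and container names are assumptions),
+a swift to swift transfer that benefits from this option:
+
+    rclone copy --checksum --swift-no-large-objects swift1:src swift2:dst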
        +
        +#### --swift-encoding
        +
        +The encoding for the backend.
        +
        +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
        +
        +Properties:
        +
        +- Config:      encoding
        +- Env Var:     RCLONE_SWIFT_ENCODING
        +- Type:        MultiEncoder
        +- Default:     Slash,InvalidUtf8
        +
        +
        +
        +## Limitations
        +
        +The Swift API doesn't return a correct MD5SUM for segmented files
        +(Dynamic or Static Large Objects) so rclone won't check or use the
        +MD5SUM for these.
        +
        +## Troubleshooting
        +
        +### Rclone gives Failed to create file system for "remote:": Bad Request
        +
        +Due to an oddity of the underlying swift library, it gives a "Bad
        +Request" error rather than a more sensible error when the
        +authentication fails for Swift.
        +
        +So this most likely means your username / password is wrong.  You can
        +investigate further with the `--dump-bodies` flag.
        +
        +This may also be caused by specifying the region when you shouldn't
        +have (e.g. OVH).
        +
        +### Rclone gives Failed to create file system: Response didn't have storage url and auth token
        +
        +This is most likely caused by forgetting to specify your tenant when
        +setting up a swift remote.
        +
        +## OVH Cloud Archive
        +
        +To use rclone with OVH cloud archive, first use `rclone config` to set up a `swift` backend with OVH, choosing `pca` as the `storage_policy`.
        +
        +### Uploading Objects
        +
+Uploading objects to OVH cloud archive is no different from uploading to object storage: simply run the command you like (move, copy or sync) to upload the objects. Once uploaded, the objects will show in a "Frozen" state within the OVH control panel.
        +
        +### Retrieving Objects
        +
        +To retrieve objects use `rclone copy` as normal. If the objects are in a frozen state then rclone will ask for them all to be unfrozen and it will wait at the end of the output with a message like the following:
        +
        +`2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)`
        +
        +Rclone will wait for the time specified then retry the copy.
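+
+A minimal sketch of such a retrieval (the container name is an
+assumption):
+
+    rclone copy remote:archive-container /local/restore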
        +
        +#  pCloud
        +
        +Paths are specified as `remote:path`
        +
        +Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
        +
        +## Configuration
        +
        +The initial setup for pCloud involves getting a token from pCloud which you
        +need to do in your browser.  `rclone config` walks you through it.
        +
        +Here is an example of how to make a remote called `remote`.  First run:
        +
        +     rclone config
        +
        +This will guide you through an interactive setup process:
        +
        +

        No remotes found, make a new one?
        n) New remote
        s) Set configuration password
        q) Quit config
        n/s/q> n
        name> remote
        Type of storage to configure.
        Choose a number from below, or type in your own value
        [snip]
        XX / Pcloud
           \ "pcloud"
        [snip]
        Storage> pcloud
        Pcloud App Client Id - leave blank normally.
        client_id>
        Pcloud App Client Secret - leave blank normally.
        client_secret>
        Remote config
        Use web browser to automatically authenticate rclone with remote?
         * Say Y if the machine running rclone has a web browser you can use
         * Say N if running rclone on a (remote) machine without web browser access
        If not sure try Y. If Y failed, try N.
        y) Yes
        n) No
        y/n> y
        If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
        Log in and authorize rclone for access
        Waiting for code...
        Got code
        --------------------
        [remote]
        client_id =
        client_secret =
        token = {"access_token":"XXX","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}
        --------------------
        y) Yes this is OK
        e) Edit this remote
        d) Delete this remote
        y/e/d> y

        +
        
        +See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
        +machine with no Internet browser available.
        +
        +Note that rclone runs a webserver on your local machine to collect the
        +token as returned from pCloud. This only runs from the moment it opens
        +your browser to the moment you get back the verification code.  This
+is on `http://127.0.0.1:53682/` and it may require you to unblock
        +it temporarily if you are running a host firewall.
        +
        +Once configured you can then use `rclone` like this,
        +
        +List directories in top level of your pCloud
        +
        +    rclone lsd remote:
        +
        +List all the files in your pCloud
        +
        +    rclone ls remote:
        +
        +To copy a local directory to a pCloud directory called backup
        +
        +    rclone copy /home/source remote:backup
        +
        +### Modified time and hashes ###
        +
        +pCloud allows modification times to be set on objects accurate to 1
        +second.  These will be used to detect whether objects need syncing or
+not.  In order to set a modification time, pCloud requires the object
+to be re-uploaded.
        +
        +pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and SHA256
        +hashes in the EU region, so you can use the `--checksum` flag.
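+
+For example, to sync using checksums instead of modification times (the
+paths are placeholders):
+
+    rclone sync --checksum /home/source remote:backup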
        +
        +### Restricted filename characters
        +
        +In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
        +the following characters are also replaced:
        +
        +| Character | Value | Replacement |
        +| --------- |:-----:|:-----------:|
+| \         | 0x5C  | ＼           |
        +
        +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
        +as they can't be used in JSON strings.
        +
        +### Deleting files
        +
        +Deleted files will be moved to the trash.  Your subscription level
        +will determine how long items stay in the trash.  `rclone cleanup` can
        +be used to empty the trash.
        +
        +### Emptying the trash
        +
        +Due to an API limitation, the `rclone cleanup` command will only work if you 
        +set your username and password in the advanced options for this backend. 
        +Since we generally want to avoid storing user passwords in the rclone config
        +file, we advise you to only set this up if you need the `rclone cleanup` command to work.
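+
+A minimal sketch of enabling this (the remote name and credentials are
+placeholders; `rclone config password` stores the password obscured):
+
+    rclone config update remote username you@example.com
+    rclone config password remote password yourpassword
+    rclone cleanup remote: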
        +
        +### Root folder ID
        +
        +You can set the `root_folder_id` for rclone.  This is the directory
        +(identified by its `Folder ID`) that rclone considers to be the root
        +of your pCloud drive.
        +
        +Normally you will leave this blank and rclone will determine the
        +correct root to use itself.
        +
        +However you can set this to restrict rclone to a specific folder
        +hierarchy.
        +
        +In order to do this you will have to find the `Folder ID` of the
        +directory you wish rclone to display.  This will be the `folder` field
        +of the URL when you open the relevant folder in the pCloud web
        +interface.
        +
        +So if the folder you want rclone to use has a URL which looks like
        +`https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid`
        +in the browser, then you use `5xxxxxxxx8` as
        +the `root_folder_id` in the config.
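+
+With the URL above, the resulting config section would then look
+something like this:
+
+    [remote]
+    type = pcloud
+    root_folder_id = 5xxxxxxxx8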
        +
        +
        +### Standard options
        +
        +Here are the Standard options specific to pcloud (Pcloud).
        +
        +#### --pcloud-client-id
        +
        +OAuth Client Id.
        +
        +Leave blank normally.
        +
        +Properties:
        +
        +- Config:      client_id
        +- Env Var:     RCLONE_PCLOUD_CLIENT_ID
        +- Type:        string
        +- Required:    false
        +
        +#### --pcloud-client-secret
        +
        +OAuth Client Secret.
        +
        +Leave blank normally.
        +
        +Properties:
        +
        +- Config:      client_secret
        +- Env Var:     RCLONE_PCLOUD_CLIENT_SECRET
        +- Type:        string
        +- Required:    false
        +
        +### Advanced options
        +
        +Here are the Advanced options specific to pcloud (Pcloud).
        +
        +#### --pcloud-token
        +
        +OAuth Access Token as a JSON blob.
        +
        +Properties:
        +
        +- Config:      token
        +- Env Var:     RCLONE_PCLOUD_TOKEN
        +- Type:        string
        +- Required:    false
        +
        +#### --pcloud-auth-url
        +
        +Auth server URL.
        +
        +Leave blank to use the provider defaults.
        +
        +Properties:
        +
        +- Config:      auth_url
        +- Env Var:     RCLONE_PCLOUD_AUTH_URL
        +- Type:        string
        +- Required:    false
        +
        +#### --pcloud-token-url
        +
        +Token server url.
        +
        +Leave blank to use the provider defaults.
        +
        +Properties:
        +
        +- Config:      token_url
        +- Env Var:     RCLONE_PCLOUD_TOKEN_URL
        +- Type:        string
        +- Required:    false
        +
        +#### --pcloud-encoding
        +
        +The encoding for the backend.
        +
        +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
        +
        +Properties:
        +
        +- Config:      encoding
        +- Env Var:     RCLONE_PCLOUD_ENCODING
        +- Type:        MultiEncoder
        +- Default:     Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
        +
        +#### --pcloud-root-folder-id
        +
        +Fill in for rclone to use a non root folder as its starting point.
        +
        +Properties:
        +
        +- Config:      root_folder_id
        +- Env Var:     RCLONE_PCLOUD_ROOT_FOLDER_ID
        +- Type:        string
        +- Default:     "d0"
        +
        +#### --pcloud-hostname
        +
        +Hostname to connect to.
        +
        +This is normally set when rclone initially does the oauth connection,
        +however you will need to set it by hand if you are using remote config
        +with rclone authorize.
        +
        +
        +Properties:
        +
        +- Config:      hostname
        +- Env Var:     RCLONE_PCLOUD_HOSTNAME
        +- Type:        string
        +- Default:     "api.pcloud.com"
        +- Examples:
        +    - "api.pcloud.com"
        +        - Original/US region
        +    - "eapi.pcloud.com"
        +        - EU region
        +
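+For example, a hypothetical remote in the EU region configured with
+`rclone authorize` would need something like this in its config section:
+
+    [remote]
+    type = pcloud
+    hostname = eapi.pcloud.com
+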
        +#### --pcloud-username
        +
        +Your pcloud username.
        +            
+This is only required when you want to use the `rclone cleanup` command. Due to a bug
+in the pcloud API, the API endpoint required by cleanup does not support OAuth
+authentication, so we have to rely on username / password authentication for it.
        +
        +Properties:
        +
        +- Config:      username
        +- Env Var:     RCLONE_PCLOUD_USERNAME
        +- Type:        string
        +- Required:    false
        +
        +#### --pcloud-password
        +
        +Your pcloud password.
        +
        +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
        +
        +Properties:
        +
        +- Config:      password
        +- Env Var:     RCLONE_PCLOUD_PASSWORD
        +- Type:        string
        +- Required:    false
        +
        +
        +
        +#  PikPak
        +
        +PikPak is [a private cloud drive](https://mypikpak.com/).
        +
        +Paths are specified as `remote:path`, and may be as deep as required, e.g. `remote:directory/subdirectory`.
        +
        +## Configuration
        +
        +Here is an example of making a remote for PikPak.
        +
        +First run:
        +
        +     rclone config
        +
        +This will guide you through an interactive setup process:
        +
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter name for new remote.
+name> remote
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+XX / PikPak
+   \ (pikpak)
+Storage> XX
+
+Option user.
+Pikpak username.
+Enter a value.
+user> USERNAME
+
+Option pass.
+Pikpak password.
+Choose an alternative below.
+y) Yes, type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n>
+
+Configuration complete.
+Options:
+- type: pikpak
+- user: USERNAME
+- pass: *** ENCRYPTED ***
+- token: {"access_token":"eyJ...","token_type":"Bearer","refresh_token":"os...","expiry":"2023-01-26T18:54:32.170582647+09:00"}
+Keep this "remote" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
        +
        +### Standard options
        +
        +Here are the Standard options specific to pikpak (PikPak).
        +
        +#### --pikpak-user
        +
        +Pikpak username.
        +
        +Properties:
        +
        +- Config:      user
        +- Env Var:     RCLONE_PIKPAK_USER
        +- Type:        string
        +- Required:    true
        +
        +#### --pikpak-pass
        +
        +Pikpak password.
        +
        +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
        +
        +Properties:
        +
        +- Config:      pass
        +- Env Var:     RCLONE_PIKPAK_PASS
        +- Type:        string
        +- Required:    true
        +
        +### Advanced options
        +
        +Here are the Advanced options specific to pikpak (PikPak).
        +
        +#### --pikpak-client-id
        +
        +OAuth Client Id.
        +
        +Leave blank normally.
        +
        +Properties:
        +
        +- Config:      client_id
        +- Env Var:     RCLONE_PIKPAK_CLIENT_ID
        +- Type:        string
        +- Required:    false
        +
        +#### --pikpak-client-secret
        +
        +OAuth Client Secret.
        +
        +Leave blank normally.
        +
        +Properties:
        +
        +- Config:      client_secret
        +- Env Var:     RCLONE_PIKPAK_CLIENT_SECRET
        +- Type:        string
        +- Required:    false
        +
        +#### --pikpak-token
        +
        +OAuth Access Token as a JSON blob.
        +
        +Properties:
        +
        +- Config:      token
        +- Env Var:     RCLONE_PIKPAK_TOKEN
        +- Type:        string
        +- Required:    false
        +
        +#### --pikpak-auth-url
        +
        +Auth server URL.
        +
        +Leave blank to use the provider defaults.
        +
        +Properties:
        +
        +- Config:      auth_url
        +- Env Var:     RCLONE_PIKPAK_AUTH_URL
        +- Type:        string
        +- Required:    false
        +
        +#### --pikpak-token-url
        +
        +Token server url.
        +
        +Leave blank to use the provider defaults.
        +
        +Properties:
        +
        +- Config:      token_url
        +- Env Var:     RCLONE_PIKPAK_TOKEN_URL
        +- Type:        string
        +- Required:    false
        +
        +#### --pikpak-root-folder-id
        +
        +ID of the root folder.
        +Leave blank normally.
        +
        +Fill in for rclone to use a non root folder as its starting point.
        +
        +
        +Properties:
        +
        +- Config:      root_folder_id
        +- Env Var:     RCLONE_PIKPAK_ROOT_FOLDER_ID
        +- Type:        string
        +- Required:    false
        +
        +#### --pikpak-use-trash
        +
        +Send files to the trash instead of deleting permanently.
        +
        +Defaults to true, namely sending files to the trash.
        +Use `--pikpak-use-trash=false` to delete files permanently instead.
        +
        +Properties:
        +
        +- Config:      use_trash
        +- Env Var:     RCLONE_PIKPAK_USE_TRASH
        +- Type:        bool
        +- Default:     true
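+
+For example, to delete a directory and its contents permanently rather
+than trashing them (the path is a placeholder):
+
+    rclone purge --pikpak-use-trash=false remote:dir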
        +
        +#### --pikpak-trashed-only
        +
        +Only show files that are in the trash.
        +
        +This will show trashed files in their original directory structure.
        +
        +Properties:
        +
        +- Config:      trashed_only
        +- Env Var:     RCLONE_PIKPAK_TRASHED_ONLY
        +- Type:        bool
        +- Default:     false
        +
        +#### --pikpak-hash-memory-limit
        +
        +Files bigger than this will be cached on disk to calculate hash if required.
        +
        +Properties:
        +
        +- Config:      hash_memory_limit
        +- Env Var:     RCLONE_PIKPAK_HASH_MEMORY_LIMIT
        +- Type:        SizeSuffix
        +- Default:     10Mi
        +
        +#### --pikpak-encoding
        +
        +The encoding for the backend.
        +
        +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
        +
        +Properties:
        +
        +- Config:      encoding
        +- Env Var:     RCLONE_PIKPAK_ENCODING
        +- Type:        MultiEncoder
        +- Default:     Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot
        +
        +## Backend commands
        +
        +Here are the commands specific to the pikpak backend.
        +
        +Run them with
        +
        +    rclone backend COMMAND remote:
        +
        +The help below will explain what arguments each command takes.
        +
        +See the [backend](https://rclone.org/commands/rclone_backend/) command for more
        +info on how to pass options and arguments.
        +
        +These can be run on a running backend using the rc command
        +[backend/command](https://rclone.org/rc/#backend-command).
        +
        +### addurl
        +
        +Add offline download task for url
        +
        +    rclone backend addurl remote: [options] [<arguments>+]
        +
        +This command adds offline download task for url.
        +
        +Usage:
        +
        +    rclone backend addurl pikpak:dirpath url
        +
+Downloads will be stored in 'dirpath'. If 'dirpath' is invalid,
+the download will fall back to the default 'My Pack' folder.
        +
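+For example, to queue a download into an existing folder (the URL and
+folder name are placeholders):
+
+    rclone backend addurl pikpak:mydir https://example.com/file.zip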
        +
        +### decompress
        +
        +Request decompress of a file/files in a folder
        +
        +    rclone backend decompress remote: [options] [<arguments>+]
        +
+This command requests decompression of a file or files in a folder.
        +
        +Usage:
        +
        +    rclone backend decompress pikpak:dirpath {filename} -o password=password
        +    rclone backend decompress pikpak:dirpath {filename} -o delete-src-file
        +
+An optional argument 'filename' can be specified for a file located in 
+'pikpak:dirpath'. You may want to pass '-o password=password' for 
+password-protected files. Also, pass '-o delete-src-file' to delete 
+source files after decompression is finished.
        +
        +Result:
        +
        +    {
        +        "Decompressed": 17,
        +        "SourceDeleted": 0,
        +        "Errors": 0
        +    }
        +
        +
        +
        +
        +## Limitations ##
        +
        +### Hashes ###
        +
+PikPak supports the MD5 hash, but it is sometimes returned empty, especially for user-uploaded files.
        +
        +### Deleted files ###
        +
+Deleted files will still be visible with `--pikpak-trashed-only` even after the trash is emptied. This goes away after a few days.
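+
+For example, to list the contents of the trash (the remote name is a
+placeholder):
+
+    rclone ls --pikpak-trashed-only remote: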
        +
        +#  premiumize.me
        +
        +Paths are specified as `remote:path`
        +
        +Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
        +
        +## Configuration
        +
        +The initial setup for [premiumize.me](https://premiumize.me/) involves getting a token from premiumize.me which you
        +need to do in your browser.  `rclone config` walks you through it.
        +
        +Here is an example of how to make a remote called `remote`.  First run:
        +
        +     rclone config
        +
        +This will guide you through an interactive setup process:
        +
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / premiumize.me
+   \ "premiumizeme"
+[snip]
+Storage> premiumizeme
+** See help for premiumizeme backend at: https://rclone.org/premiumizeme/ **
+
+Remote config
+Use web browser to automatically authenticate rclone with remote?
+ * Say Y if the machine running rclone has a web browser you can use
+ * Say N if running rclone on a (remote) machine without web browser access
+If not sure try Y. If Y failed, try N.
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[remote]
+type = premiumizeme
+token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2029-08-07T18:44:15.548915378+01:00"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d>
+```
+
        +See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
        +machine with no Internet browser available.
        +
        +Note that rclone runs a webserver on your local machine to collect the
        +token as returned from premiumize.me. This only runs from the moment it opens
        +your browser to the moment you get back the verification code.  This
+is on `http://127.0.0.1:53682/` and it may require you to unblock
        +it temporarily if you are running a host firewall.
        +
        +Once configured you can then use `rclone` like this,
        +
        +List directories in top level of your premiumize.me
        +
        +    rclone lsd remote:
        +
        +List all the files in your premiumize.me
        +
        +    rclone ls remote:
        +
+To copy a local directory to a premiumize.me directory called backup
        +
        +    rclone copy /home/source remote:backup
        +
        +### Modified time and hashes
        +
        +premiumize.me does not support modification times or hashes, therefore
        +syncing will default to `--size-only` checking.  Note that using
        +`--update` will work.
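+
+For example, either of these will sync correctly without hashes or
+modification times (the paths are placeholders):
+
+    rclone sync --size-only /home/source remote:backup
+    rclone sync --update /home/source remote:backup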
        +
        +### Restricted filename characters
        +
        +In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
        +the following characters are also replaced:
        +
        +| Character | Value | Replacement |
        +| --------- |:-----:|:-----------:|
+| \         | 0x5C  | ＼           |
+| "         | 0x22  | ＂           |
        +
        +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
        +as they can't be used in JSON strings.
        +
        +
        +### Standard options
        +
        +Here are the Standard options specific to premiumizeme (premiumize.me).
        +
        +#### --premiumizeme-client-id
        +
        +OAuth Client Id.
        +
        +Leave blank normally.
        +
        +Properties:
        +
        +- Config:      client_id
        +- Env Var:     RCLONE_PREMIUMIZEME_CLIENT_ID
        +- Type:        string
        +- Required:    false
        +
        +#### --premiumizeme-client-secret
        +
        +OAuth Client Secret.
        +
        +Leave blank normally.
        +
        +Properties:
        +
        +- Config:      client_secret
        +- Env Var:     RCLONE_PREMIUMIZEME_CLIENT_SECRET
        +- Type:        string
        +- Required:    false
        +
        +#### --premiumizeme-api-key
        +
        +API Key.
        +
        +This is not normally used - use oauth instead.
        +
        +
        +Properties:
        +
        +- Config:      api_key
        +- Env Var:     RCLONE_PREMIUMIZEME_API_KEY
        +- Type:        string
        +- Required:    false
        +
        +### Advanced options
        +
        +Here are the Advanced options specific to premiumizeme (premiumize.me).
        +
        +#### --premiumizeme-token
        +
        +OAuth Access Token as a JSON blob.
        +
        +Properties:
        +
        +- Config:      token
        +- Env Var:     RCLONE_PREMIUMIZEME_TOKEN
        +- Type:        string
        +- Required:    false
        +
        +#### --premiumizeme-auth-url
        +
        +Auth server URL.
        +
        +Leave blank to use the provider defaults.
        +
        +Properties:
        +
        +- Config:      auth_url
        +- Env Var:     RCLONE_PREMIUMIZEME_AUTH_URL
        +- Type:        string
        +- Required:    false
        +
        +#### --premiumizeme-token-url
        +
        +Token server url.
        +
        +Leave blank to use the provider defaults.
        +
        +Properties:
        +
        +- Config:      token_url
        +- Env Var:     RCLONE_PREMIUMIZEME_TOKEN_URL
        +- Type:        string
        +- Required:    false
        +
        +#### --premiumizeme-encoding
        +
        +The encoding for the backend.
        +
        +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
        +
        +Properties:
        +
        +- Config:      encoding
        +- Env Var:     RCLONE_PREMIUMIZEME_ENCODING
        +- Type:        MultiEncoder
        +- Default:     Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot
        +
        +
        +
        +## Limitations
        +
        +Note that premiumize.me is case insensitive so you can't have a file called
        +"Hello.doc" and one called "hello.doc".
        +
        +premiumize.me file names can't have the `\` or `"` characters in.
+rclone maps these to and from identical looking unicode equivalents
+`＼` and `＂`
        +
        +premiumize.me only supports filenames up to 255 characters in length.
        +
        +#  Proton Drive
        +
        +[Proton Drive](https://proton.me/drive) is an end-to-end encrypted Swiss vault
        + for your files that protects your data.
        +
        +This is an rclone backend for Proton Drive which supports the file transfer
        +features of Proton Drive using the same client-side encryption.
        +
+Because Proton Drive doesn't publish its API documentation, this 
        +backend is implemented with best efforts by reading the open-sourced client 
        +source code and observing the Proton Drive traffic in the browser.
        +
        +**NB** This backend is currently in Beta. It is believed to be correct
+and all the integration tests pass. However, the Proton Drive protocol
+has evolved over time, so there may be accounts it is not compatible
        +with. Please [post on the rclone forum](https://forum.rclone.org/) if
        +you find an incompatibility.
        +
        +Paths are specified as `remote:path`
        +
        +Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
        +
+## Configuration
        +
        +Here is an example of how to make a remote called `remote`.  First run:
        +
        +     rclone config
        +
        +This will guide you through an interactive setup process:
        +
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Proton Drive
+   \ "Proton Drive"
+[snip]
+Storage> protondrive
+User name
+user> you@protonmail.com
+Password.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank
+y/g/n> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Option 2fa.
+2FA code (if the account requires one)
+Enter a value. Press Enter to leave empty.
+2fa> 123456
+Remote config
+--------------------
+[remote]
+type = protondrive
+user = you@protonmail.com
+pass = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
        +**NOTE:** The Proton Drive encryption keys need to have been already generated 
        +after a regular login via the browser, otherwise attempting to use the 
        +credentials in `rclone` will fail.
        +
        +Once configured you can then use `rclone` like this,
        +
        +List directories in top level of your Proton Drive
        +
        +    rclone lsd remote:
        +
        +List all the files in your Proton Drive
        +
        +    rclone ls remote:
        +
+To copy a local directory to a Proton Drive directory called backup
        +
        +    rclone copy /home/source remote:backup
        +
        +### Modified time
        +
        +Proton Drive Bridge does not support updating modification times yet.
        +
        +### Restricted filename characters
        +
        +Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), also left and 
        +right spaces will be removed ([code reference](https://github.com/ProtonMail/WebClients/blob/b4eba99d241af4fdae06ff7138bd651a40ef5d3c/applications/drive/src/app/store/_links/validation.ts#L51))
        +
        +### Duplicated files
        +
+Proton Drive cannot have two files with exactly the same name and path. If a 
+conflict occurs, depending on the advanced config, the file might or might not 
        +be overwritten.
        +
        +### [Mailbox password](https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password)
        +
        +Please set your mailbox password in the advanced config section.
        +
        +### Caching
        +
+The cache is currently built for the case when rclone is the only instance 
+performing operations on the mount point. The event system, which is the Proton
+API system that provides visibility of what has changed on the drive, is yet 
+to be implemented, so updates from other clients won’t be reflected in the 
+cache. Thus, if there are concurrent clients accessing the same mount point, 
+stale data may be served from the cache.
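+
+For example, if other clients may be modifying the drive while it is
+mounted, a safer invocation is to disable the cache (the mount point is
+a placeholder):
+
+    rclone mount --protondrive-enable-caching=false remote: /mnt/protondrive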
        +
        +
        +### Standard options
        +
        +Here are the Standard options specific to protondrive (Proton Drive).
        +
        +#### --protondrive-username
        +
        +The username of your proton account
        +
        +Properties:
        +
        +- Config:      username
        +- Env Var:     RCLONE_PROTONDRIVE_USERNAME
        +- Type:        string
        +- Required:    true
        +
        +#### --protondrive-password
        +
        +The password of your proton account.
        +
        +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
        +
        +Properties:
        +
        +- Config:      password
        +- Env Var:     RCLONE_PROTONDRIVE_PASSWORD
        +- Type:        string
        +- Required:    true
        +
        +#### --protondrive-2fa
        +
        +The 2FA code
        +
        +The value can also be provided with --protondrive-2fa=000000
        +
        +The 2FA code of your proton drive account if the account is set up with 
        +two-factor authentication
        +
        +Properties:
        +
        +- Config:      2fa
        +- Env Var:     RCLONE_PROTONDRIVE_2FA
        +- Type:        string
        +- Required:    false
        +
        +### Advanced options
        +
        +Here are the Advanced options specific to protondrive (Proton Drive).
        +
        +#### --protondrive-mailbox-password
        +
        +The mailbox password of your two-password proton account.
        +
        +For more information regarding the mailbox password, please check the 
        +following official knowledge base article: 
        +https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password
        +
        +
        +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
        +
        +Properties:
        +
        +- Config:      mailbox_password
        +- Env Var:     RCLONE_PROTONDRIVE_MAILBOX_PASSWORD
        +- Type:        string
        +- Required:    false
        +
        +#### --protondrive-client-uid
        +
        +Client uid key (internal use only)
        +
        +Properties:
        +
        +- Config:      client_uid
        +- Env Var:     RCLONE_PROTONDRIVE_CLIENT_UID
        +- Type:        string
        +- Required:    false
        +
        +#### --protondrive-client-access-token
        +
        +Client access token key (internal use only)
        +
        +Properties:
        +
        +- Config:      client_access_token
        +- Env Var:     RCLONE_PROTONDRIVE_CLIENT_ACCESS_TOKEN
        +- Type:        string
        +- Required:    false
        +
        +#### --protondrive-client-refresh-token
        +
        +Client refresh token key (internal use only)
        +
        +Properties:
        +
        +- Config:      client_refresh_token
        +- Env Var:     RCLONE_PROTONDRIVE_CLIENT_REFRESH_TOKEN
        +- Type:        string
        +- Required:    false
        +
        +#### --protondrive-client-salted-key-pass
        +
        +Client salted key pass key (internal use only)
        +
        +Properties:
        +
        +- Config:      client_salted_key_pass
        +- Env Var:     RCLONE_PROTONDRIVE_CLIENT_SALTED_KEY_PASS
        +- Type:        string
        +- Required:    false
        +
        +#### --protondrive-encoding
        +
        +The encoding for the backend.
        +
        +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
        +
        +Properties:
        +
        +- Config:      encoding
        +- Env Var:     RCLONE_PROTONDRIVE_ENCODING
        +- Type:        MultiEncoder
        +- Default:     Slash,LeftSpace,RightSpace,InvalidUtf8,Dot
        +
        +#### --protondrive-original-file-size
        +
        +Return the file size before encryption
        +            
+The size of the encrypted file will be different from (bigger than) the 
+original file size. Unless there is a reason to return the post-encryption 
+file size, leave this option set to true, as features like Open(), which 
+need to be supplied with the original content size, will otherwise fail to 
+operate properly.
        +
        +Properties:
        +
        +- Config:      original_file_size
        +- Env Var:     RCLONE_PROTONDRIVE_ORIGINAL_FILE_SIZE
        +- Type:        bool
        +- Default:     true
        +
        +#### --protondrive-app-version
        +
        +The app version string 
        +
        +The app version string indicates the client that is currently performing 
        +the API request. This information is required and will be sent with every 
        +API request.
        +
        +Properties:
        +
        +- Config:      app_version
        +- Env Var:     RCLONE_PROTONDRIVE_APP_VERSION
        +- Type:        string
        +- Default:     "macos-drive@1.0.0-alpha.1+rclone"
        +
        +#### --protondrive-replace-existing-draft
        +
        +Create a new revision when filename conflict is detected
        +
+When a file upload is cancelled or fails before completion, a draft will be 
+created and the subsequent upload of the same file to the same location will be 
+reported as a conflict.
+
+The value can also be set by --protondrive-replace-existing-draft=true
+
+If the option is set to true, the draft will be replaced and then the upload 
+operation will restart. If there are other clients also uploading to the same 
+file location at the same time, the behavior is currently unknown. This option 
+needs to be set to true for the integration tests.
+If the option is set to false, the error "a draft exist - usually this means a 
+file is being uploaded at another client, or, there was a failed upload attempt" 
+will be returned, and no upload will happen.
        +
        +Properties:
        +
        +- Config:      replace_existing_draft
        +- Env Var:     RCLONE_PROTONDRIVE_REPLACE_EXISTING_DRAFT
        +- Type:        bool
        +- Default:     false
        +
        +#### --protondrive-enable-caching
        +
        +Caches the files and folders metadata to reduce API calls
        +
        +Notice: If you are mounting ProtonDrive as a VFS, please disable this feature, 
        +as the current implementation doesn't update or clear the cache when there are 
        +external changes. 
        +
        +The files and folders on ProtonDrive are represented as links with keyrings, 
        +which can be cached to improve performance and be friendly to the API server.
        +
+The cache is currently built for the case when rclone is the only instance 
+performing operations on the mount point. The event system, which is the Proton
+API system that provides visibility of what has changed on the drive, is yet 
+to be implemented, so updates from other clients won’t be reflected in the 
+cache. Thus, if there are concurrent clients accessing the same mount point, 
+stale data may be served from the cache.
        +
        +Properties:
        +
        +- Config:      enable_caching
        +- Env Var:     RCLONE_PROTONDRIVE_ENABLE_CACHING
        +- Type:        bool
        +- Default:     true
        +
        +
        +
        +## Limitations
        +
        +This backend uses the 
        +[Proton-API-Bridge](https://github.com/henrybear327/Proton-API-Bridge), which 
        +is based on [go-proton-api](https://github.com/henrybear327/go-proton-api), a 
        +fork of the [official repo](https://github.com/ProtonMail/go-proton-api).
        +
        +There is no official API documentation available from Proton Drive. But, thanks 
        +to Proton open sourcing [proton-go-api](https://github.com/ProtonMail/go-proton-api) 
        +and the web, iOS, and Android client codebases, we don't need to completely 
        +reverse engineer the APIs by observing the web client traffic! 
        +
        +[proton-go-api](https://github.com/ProtonMail/go-proton-api) provides the basic 
        +building blocks of API calls and error handling, such as 429 exponential 
        +back-off, but it is pretty much just a barebone interface to the Proton API. 
        +For example, the encryption and decryption of the Proton Drive file are not 
        +provided in this library. 
        +
+The Proton-API-Bridge attempts to bridge the gap, so rclone can be built on 
+top of it quickly. This codebase handles the intricate tasks before and after 
        +calling Proton APIs, particularly the complex encryption scheme, allowing 
        +developers to implement features for other software on top of this codebase. 
        +There are likely quite a few errors in this library, as there isn't official 
        +documentation available.
        +
        +#  put.io
        +
        +Paths are specified as `remote:path`
        +
        +put.io paths may be as deep as required, e.g.
        +`remote:directory/subdirectory`.
        +
        +## Configuration
        +
        +The initial setup for put.io involves getting a token from put.io
        +which you need to do in your browser.  `rclone config` walks you
        +through it.
        +
        +Here is an example of how to make a remote called `remote`.  First run:
        +
        +     rclone config
        +
        +This will guide you through an interactive setup process:
        +
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> putio
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / Put.io
+   \ "putio"
+[snip]
+Storage> putio
+** See help for putio backend at: https://rclone.org/putio/ **
+
+Remote config
+Use web browser to automatically authenticate rclone with remote?
+ * Say Y if the machine running rclone has a web browser you can use
+ * Say N if running rclone on a (remote) machine without web browser access
+If not sure try Y. If Y failed, try N.
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[putio]
+type = putio
+token = {"access_token":"XXXXXXXX","expiry":"0001-01-01T00:00:00Z"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Current remotes:
+
+Name                 Type
+====                 ====
+putio                putio
+
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> q
+```
+
        +See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
        +machine with no Internet browser available.
        +
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from put.io if using the web browser to automatically
+authenticate. This only
+runs from the moment it opens your browser to the moment you get back
+the verification code.  This is on `http://127.0.0.1:53682/` and it
+may require you to unblock it temporarily if you are running a host
+firewall, or use manual mode.
        +
        +You can then use it like this,
        +
        +List directories in top level of your put.io
        +
        +    rclone lsd remote:
        +
        +List all the files in your put.io
        +
        +    rclone ls remote:
        +
        +To copy a local directory to a put.io directory called backup
        +
        +    rclone copy /home/source remote:backup
        +
        +### Restricted filename characters
        +
        +In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
        +the following characters are also replaced:
        +
        +| Character | Value | Replacement |
        +| --------- |:-----:|:-----------:|
+| \         | 0x5C  | ＼           |
        +
        +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
        +as they can't be used in JSON strings.
        +
        +
        +### Standard options
        +
        +Here are the Standard options specific to putio (Put.io).
        +
        +#### --putio-client-id
        +
        +OAuth Client Id.
        +
        +Leave blank normally.
        +
        +Properties:
        +
        +- Config:      client_id
        +- Env Var:     RCLONE_PUTIO_CLIENT_ID
        +- Type:        string
        +- Required:    false
        +
        +#### --putio-client-secret
        +
        +OAuth Client Secret.
        +
        +Leave blank normally.
        +
        +Properties:
        +
        +- Config:      client_secret
        +- Env Var:     RCLONE_PUTIO_CLIENT_SECRET
        +- Type:        string
        +- Required:    false
        +
        +### Advanced options
        +
        +Here are the Advanced options specific to putio (Put.io).
        +
        +#### --putio-token
        +
        +OAuth Access Token as a JSON blob.
        +
        +Properties:
        +
        +- Config:      token
        +- Env Var:     RCLONE_PUTIO_TOKEN
        +- Type:        string
        +- Required:    false
        +
        +#### --putio-auth-url
        +
        +Auth server URL.
        +
        +Leave blank to use the provider defaults.
        +
        +Properties:
        +
        +- Config:      auth_url
        +- Env Var:     RCLONE_PUTIO_AUTH_URL
        +- Type:        string
        +- Required:    false
        +
        +#### --putio-token-url
        +
        +Token server url.
        +
        +Leave blank to use the provider defaults.
        +
        +Properties:
        +
        +- Config:      token_url
        +- Env Var:     RCLONE_PUTIO_TOKEN_URL
        +- Type:        string
        +- Required:    false
        +
        +#### --putio-encoding
        +
        +The encoding for the backend.
        +
        +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
        +
        +Properties:
        +
        +- Config:      encoding
        +- Env Var:     RCLONE_PUTIO_ENCODING
        +- Type:        MultiEncoder
        +- Default:     Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
        +
        +
        +
        +## Limitations
        +
        +put.io has rate limiting. When you hit a limit, rclone automatically
        +retries after waiting the amount of time requested by the server.
        +
        +If you want to avoid ever hitting these limits, you may use the
        +`--tpslimit` flag with a low number. Note that the imposed limits
        +may be different for different operations, and may change over time.
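+
+For example, to stay well under the limits during a large sync (the
+paths are placeholders):
+
+    rclone sync --tpslimit 1 /home/source remote:backup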
        +
        +#  Seafile
        +
        +This is a backend for the [Seafile](https://www.seafile.com/) storage service:
+- It works with both the free community edition and the professional edition.
+- Seafile versions 6.x, 7.x, 8.x and 9.x are all supported.
+- Encrypted libraries are also supported.
+- It supports 2FA-enabled users.
+- Using a Library API Token is **not** supported.
        +
        +## Configuration
        +
+There are two distinct modes in which you can set up your remote (see the example after this list):
+- you point your remote to the **root of the server**, meaning you don't specify a library during the configuration:
+Paths are specified as `remote:library`. You may put subdirectories in too, e.g. `remote:library/path/to/dir`.
+- you point your remote to a specific library during the configuration:
+Paths are specified as `remote:path/to/dir`. **This is the recommended mode when using encrypted libraries**. (_This mode is possibly slightly faster than the root mode._)
        +
        +### Configuration in root mode
        +
        +Here is an example of making a seafile configuration for a user with **no** two-factor authentication.  First run
        +
        +    rclone config
        +
        +This will guide you through an interactive setup process. To authenticate
        +you will need the URL of your server, your email (or username) and your password.
        +
        +

+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> seafile
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / Seafile
+   \ "seafile"
+[snip]
+Storage> seafile
+** See help for seafile backend at: https://rclone.org/seafile/ **
+URL of seafile host to connect to
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+ 1 / Connect to cloud.seafile.com
+   \ "https://cloud.seafile.com/"
+url> http://my.seafile.server/
+User name (usually email address)
+Enter a string value. Press Enter for the default ("").
+user> me@example.com
+Password
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank (default)
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Two-factor authentication ('true' if the account has 2FA enabled)
+Enter a boolean value (true or false). Press Enter for the default ("false").
+2fa> false
+Name of the library.
+Leave blank to access all non-encrypted libraries.
+Enter a string value. Press Enter for the default ("").
+library>
+Library password (for encrypted libraries only).
+Leave blank if you pass it through the command line.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank (default)
+y/g/n> n
+Edit advanced config? (y/n)
+y) Yes
+n) No (default)
+y/n> n
+Remote config
+Two-factor authentication is not enabled on this account.
+--------------------
+[seafile]
+type = seafile
+url = http://my.seafile.server/
+user = me@example.com
+pass = *** ENCRYPTED ***
+2fa = false
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
        +
        
        +This remote is called `seafile`. It's pointing to the root of your seafile server and can now be used like this:
        +
        +See all libraries
        +
        +    rclone lsd seafile:
        +
        +Create a new library
        +
        +    rclone mkdir seafile:library
        +
        +List the contents of a library
        +
        +    rclone ls seafile:library
        +
        +Sync `/home/local/directory` to the remote library, deleting any
        +excess files in the library.
        +
        +    rclone sync --interactive /home/local/directory seafile:library
        +
        +### Configuration in library mode
        +
+Here's an example of a configuration in library mode with a user that has two-factor authentication enabled. You will be asked for your 2FA code at the end of the configuration, and rclone will attempt to authenticate you:
        +
        +

+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> seafile
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / Seafile
+   \ "seafile"
+[snip]
+Storage> seafile
+** See help for seafile backend at: https://rclone.org/seafile/ **
+URL of seafile host to connect to
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+ 1 / Connect to cloud.seafile.com
+   \ "https://cloud.seafile.com/"
+url> http://my.seafile.server/
+User name (usually email address)
+Enter a string value. Press Enter for the default ("").
+user> me@example.com
+Password
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank (default)
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Two-factor authentication ('true' if the account has 2FA enabled)
+Enter a boolean value (true or false). Press Enter for the default ("false").
+2fa> true
+Name of the library.
+Leave blank to access all non-encrypted libraries.
+Enter a string value. Press Enter for the default ("").
+library> My Library
+Library password (for encrypted libraries only).
+Leave blank if you pass it through the command line.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank (default)
+y/g/n> n
+Edit advanced config? (y/n)
+y) Yes
+n) No (default)
+y/n> n
+Remote config
+Two-factor authentication: please enter your 2FA code
+2fa code> 123456
+Authenticating...
+Success!
+--------------------
+[seafile]
+type = seafile
+url = http://my.seafile.server/
+user = me@example.com
+pass =
+2fa = true
+library = My Library
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```

        +
        
+You'll notice your password is blank in the configuration. That's because the password is only needed to authenticate you once.
        +
        +You specified `My Library` during the configuration. The root of the remote is pointing at the
        +root of the library `My Library`:
        +
        +See all files in the library:
        +
        +    rclone lsd seafile:
        +
        +Create a new directory inside the library
        +
        +    rclone mkdir seafile:directory
        +
        +List the contents of a directory
        +
        +    rclone ls seafile:directory
        +
        +Sync `/home/local/directory` to the remote library, deleting any
        +excess files in the library.
        +
        +    rclone sync --interactive /home/local/directory seafile:
        +
        +
        +### --fast-list
        +
        +Seafile version 7+ supports `--fast-list` which allows you to use fewer
        +transactions in exchange for more memory. See the [rclone
        +docs](https://rclone.org/docs/#fast-list) for more details.
+Please note this is not supported on seafile server version 6.x.
        +
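+For example, a recursive listing of a library using fewer transactions could 
+look like this (a sketch using the global `--fast-list` flag):
+
+    # -R recurses; --fast-list trades memory for fewer API calls
+    rclone lsf -R --fast-list seafile:library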
        +
        +### Restricted filename characters
        +
        +In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
        +the following characters are also replaced:
        +
        +| Character | Value | Replacement |
        +| --------- |:-----:|:-----------:|
        +| /         | 0x2F  | /          |
        +| "         | 0x22  | "          |
        +| \         | 0x5C  | \           |
        +
        +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
        +as they can't be used in JSON strings.
        +
        +### Seafile and rclone link
        +
        +Rclone supports generating share links for non-encrypted libraries only.
        +They can either be for a file or a directory:
        +
        +

+```
+rclone link seafile:seafile-tutorial.doc
+http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/
+```

        +
        
        +or if run on a directory you will get:
        +
        +

+```
+rclone link seafile:dir
+http://my.seafile.server/d/9ea2455f6f55478bbb0d/
+```

        +
        
        +Please note a share link is unique for each file or directory. If you run a link command on a file/dir
        +that has already been shared, you will get the exact same link.
        +
        +### Compatibility
        +
+This backend has been actively developed using the [seafile docker image](https://github.com/haiwen/seafile-docker) of these versions:
        +- 6.3.4 community edition
        +- 7.0.5 community edition
        +- 7.1.3 community edition
        +- 9.0.10 community edition
        +
        +Versions below 6.0 are not supported.
        +Versions between 6.0 and 6.3 haven't been tested and might not work properly.
        +
        +Each new version of `rclone` is automatically tested against the [latest docker image](https://hub.docker.com/r/seafileltd/seafile-mc/) of the seafile community server.
        +
        +
        +### Standard options
        +
        +Here are the Standard options specific to seafile (seafile).
        +
        +#### --seafile-url
        +
        +URL of seafile host to connect to.
        +
        +Properties:
        +
        +- Config:      url
        +- Env Var:     RCLONE_SEAFILE_URL
        +- Type:        string
        +- Required:    true
        +- Examples:
        +    - "https://cloud.seafile.com/"
        +        - Connect to cloud.seafile.com.
        +
        +#### --seafile-user
        +
        +User name (usually email address).
        +
        +Properties:
        +
        +- Config:      user
        +- Env Var:     RCLONE_SEAFILE_USER
        +- Type:        string
        +- Required:    true
        +
        +#### --seafile-pass
        +
        +Password.
        +
        +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
        +
        +Properties:
        +
        +- Config:      pass
        +- Env Var:     RCLONE_SEAFILE_PASS
        +- Type:        string
        +- Required:    false
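+
+For example, a sketch of supplying the obscured password via the matching 
+environment variable (the password value is a placeholder):
+
+    # 'MySeafilePassword' is a placeholder for your real Seafile password
+    RCLONE_SEAFILE_PASS=$(rclone obscure 'MySeafilePassword') rclone lsd seafile: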
        +
        +#### --seafile-2fa
        +
        +Two-factor authentication ('true' if the account has 2FA enabled).
        +
        +Properties:
        +
        +- Config:      2fa
        +- Env Var:     RCLONE_SEAFILE_2FA
        +- Type:        bool
        +- Default:     false
        +
        +#### --seafile-library
        +
        +Name of the library.
        +
        +Leave blank to access all non-encrypted libraries.
        +
        +Properties:
        +
        +- Config:      library
        +- Env Var:     RCLONE_SEAFILE_LIBRARY
        +- Type:        string
        +- Required:    false
        +
        +#### --seafile-library-key
        +
        +Library password (for encrypted libraries only).
        +
        +Leave blank if you pass it through the command line.
        +
        +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
        +
        +Properties:
        +
        +- Config:      library_key
        +- Env Var:     RCLONE_SEAFILE_LIBRARY_KEY
        +- Type:        string
        +- Required:    false
        +
        +#### --seafile-auth-token
        +
        +Authentication token.
        +
        +Properties:
        +
        +- Config:      auth_token
        +- Env Var:     RCLONE_SEAFILE_AUTH_TOKEN
        +- Type:        string
        +- Required:    false
        +
        +### Advanced options
        +
        +Here are the Advanced options specific to seafile (seafile).
        +
        +#### --seafile-create-library
        +
        +Should rclone create a library if it doesn't exist.
        +
        +Properties:
        +
        +- Config:      create_library
        +- Env Var:     RCLONE_SEAFILE_CREATE_LIBRARY
        +- Type:        bool
        +- Default:     false
        +
        +#### --seafile-encoding
        +
        +The encoding for the backend.
        +
        +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
        +
        +Properties:
        +
        +- Config:      encoding
        +- Env Var:     RCLONE_SEAFILE_ENCODING
        +- Type:        MultiEncoder
        +- Default:     Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8
        +
        +
        +
        +#  SFTP
        +
        +SFTP is the [Secure (or SSH) File Transfer
        +Protocol](https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol).
        +
        +The SFTP backend can be used with a number of different providers:
        +
        +
        +- Hetzner Storage Box
        +- rsync.net
        +
        +
        +SFTP runs over SSH v2 and is installed as standard with most modern
        +SSH installations.
        +
+Paths are specified as `remote:path`. If the path does not begin with
+a `/` it is relative to the home directory of the user.  An empty path
+`remote:` refers to the user's home directory. For example, `rclone lsd remote:` 
+would list the home directory of the user configured in the rclone remote config 
+(i.e. `/home/sftpuser`). However, `rclone lsd remote:/` would list the root 
+directory of the remote machine (i.e. `/`).
        +
+Note that some SFTP servers will need the leading / - Synology is a
+good example of this. rsync.net and Hetzner, on the other hand, require users to
+OMIT the leading /.
        +
        +Note that by default rclone will try to execute shell commands on
        +the server, see [shell access considerations](#shell-access-considerations).
        +
        +## Configuration
        +
        +Here is an example of making an SFTP configuration.  First run
        +
        +    rclone config
        +
        +This will guide you through an interactive setup process.
        +
        +

+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / SSH/SFTP
+   \ "sftp"
+[snip]
+Storage> sftp
+SSH host to connect to
+Choose a number from below, or type in your own value
+ 1 / Connect to example.com
+   \ "example.com"
+host> example.com
+SSH username
+Enter a string value. Press Enter for the default ("$USER").
+user> sftpuser
+SSH port number
+Enter a signed integer. Press Enter for the default (22).
+port>
+SSH password, leave blank to use ssh-agent.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank
+y/g/n> n
+Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+key_file>
+Remote config
+--------------------
+[remote]
+host = example.com
+user = sftpuser
+port =
+pass =
+key_file =
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```

        +
        
        +This remote is called `remote` and can now be used like this:
        +
        +See all directories in the home directory
        +
        +    rclone lsd remote:
        +
        +See all directories in the root directory
        +
        +    rclone lsd remote:/
        +
        +Make a new directory
        +
        +    rclone mkdir remote:path/to/directory
        +
        +List the contents of a directory
        +
        +    rclone ls remote:path/to/directory
        +
        +Sync `/home/local/directory` to the remote directory, deleting any
        +excess files in the directory.
        +
        +    rclone sync --interactive /home/local/directory remote:directory
        +
        +Mount the remote path `/srv/www-data/` to the local path
        +`/mnt/www-data`
        +
        +    rclone mount remote:/srv/www-data/ /mnt/www-data
        +
        +### SSH Authentication
        +
        +The SFTP remote supports three authentication methods:
        +
        +  * Password
        +  * Key file, including certificate signed keys
        +  * ssh-agent
        +
        +Key files should be PEM-encoded private key files. For instance `/home/$USER/.ssh/id_rsa`.
        +Only unencrypted OpenSSH or PEM encrypted files are supported.
        +
+The key file can be specified in either an external file (key_file) or contained within the 
+rclone config file (key_pem). If using key_pem in the config file, the entry should be on a
+single line with newline escapes ('\n' or '\r\n') separating the lines, i.e.
        +
        +    key_pem = -----BEGIN RSA PRIVATE KEY-----\nMaMbaIXtE\n0gAMbMbaSsd\nMbaass\n-----END RSA PRIVATE KEY-----
        +
+This will generate a correctly formatted key_pem value for use in the config:
        +
        +    awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa
        +
+If you don't specify `pass`, `key_file`, `key_pem` or `ask_password` then
+rclone will attempt to contact an ssh-agent. You can also specify `key_use_agent`
+to force the usage of an ssh-agent. In this case `key_file` or `key_pem` can
+also be specified to force the usage of a specific key in the ssh-agent.
        +
        +Using an ssh-agent is the only way to load encrypted OpenSSH keys at the moment.
        +
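+For example, a sketch using standard OpenSSH tooling to load an encrypted 
+key into an agent and make rclone use it:
+
+    # Load the (possibly encrypted) key into a fresh agent, then force agent use
+    eval $(ssh-agent)
+    ssh-add ~/.ssh/id_rsa
+    rclone lsd remote: --sftp-key-use-agent
+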
        +If you set the `ask_password` option, rclone will prompt for a password when
        +needed and no password has been configured.
        +
        +#### Certificate-signed keys
        +
        +With traditional key-based authentication, you configure your private key only,
        +and the public key built into it will be used during the authentication process.
        +
        +If you have a certificate you may use it to sign your public key, creating a
        +separate SSH user certificate that should be used instead of the plain public key
        +extracted from the private key. Then you must provide the path to the
        +user certificate public key file in `pubkey_file`.
        +
        +Note: This is not the traditional public key paired with your private key,
        +typically saved as `/home/$USER/.ssh/id_rsa.pub`. Setting this path in
        +`pubkey_file` will not work.
        +
        +Example:
        +
        +

+```
+[remote]
+type = sftp
+host = example.com
+user = sftpuser
+key_file = ~/id_rsa
+pubkey_file = ~/id_rsa-cert.pub
+```

        +
        
        +If you concatenate a cert with a private key then you can specify the
        +merged file in both places.
        +
        +Note: the cert must come first in the file.  e.g.
        +
        +```
        +cat id_rsa-cert.pub id_rsa > merged_key
        +```
        +
        +### Host key validation
        +
        +By default rclone will not check the server's host key for validation.  This
        +can allow an attacker to replace a server with their own and if you use
        +password authentication then this can lead to that password being exposed.
        +
        +Host key matching, using standard `known_hosts` files can be turned on by
        +enabling the `known_hosts_file` option.  This can point to the file maintained
        +by `OpenSSH` or can point to a unique file.
        +
        +e.g. using the OpenSSH `known_hosts` file:
        +
+```
+[remote]
+type = sftp
+host = example.com
+user = sftpuser
+known_hosts_file = ~/.ssh/known_hosts
+```
        -host = example.com
        -user = sftpuser
        -port =
        -pass =
        -key_file =
        ---------------------
        -y) Yes this is OK
        -e) Edit this remote
        -d) Delete this remote
        -y/e/d> y
        -

        This remote is called remote and can now be used like this:

        -

        See all directories in the home directory

        -
        rclone lsd remote:
        -

        See all directories in the root directory

        -
        rclone lsd remote:/
        -

        Make a new directory

        -
        rclone mkdir remote:path/to/directory
        -

        List the contents of a directory

        -
        rclone ls remote:path/to/directory
        -

        Sync /home/local/directory to the remote directory, deleting any excess files in the directory.

        -
        rclone sync --interactive /home/local/directory remote:directory
        -

        Mount the remote path /srv/www-data/ to the local path /mnt/www-data

        -
        rclone mount remote:/srv/www-data/ /mnt/www-data
        -

        SSH Authentication

        -

        The SFTP remote supports three authentication methods:

        -
          -
        • Password
        • -
        • Key file, including certificate signed keys
        • -
        • ssh-agent
        • -
        -

        Key files should be PEM-encoded private key files. For instance /home/$USER/.ssh/id_rsa. Only unencrypted OpenSSH or PEM encrypted files are supported.

        -

The key file can be specified in either an external file (key_file) or contained within the rclone config file (key_pem). If using key_pem in the config file, the entry should be on a single line with newline escapes ('\n' or '\r\n') separating the lines, i.e.

        -
        key_pem = -----BEGIN RSA PRIVATE KEY-----\nMaMbaIXtE\n0gAMbMbaSsd\nMbaass\n-----END RSA PRIVATE KEY-----
        -

        This will generate it correctly for key_pem for use in the config:

        -
        awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa
        -

If you don't specify pass, key_file, key_pem or ask_password then rclone will attempt to contact an ssh-agent. You can also specify key_use_agent to force the usage of an ssh-agent. In this case key_file or key_pem can also be specified to force the usage of a specific key in the ssh-agent.

        -

        Using an ssh-agent is the only way to load encrypted OpenSSH keys at the moment.

        -

        If you set the ask_password option, rclone will prompt for a password when needed and no password has been configured.

        -

        Certificate-signed keys

        -

        With traditional key-based authentication, you configure your private key only, and the public key built into it will be used during the authentication process.

        -

        If you have a certificate you may use it to sign your public key, creating a separate SSH user certificate that should be used instead of the plain public key extracted from the private key. Then you must provide the path to the user certificate public key file in pubkey_file.

        -

        Note: This is not the traditional public key paired with your private key, typically saved as /home/$USER/.ssh/id_rsa.pub. Setting this path in pubkey_file will not work.

        -

        Example:

        -
        [remote]
        -type = sftp
        -host = example.com
        -user = sftpuser
        -key_file = ~/id_rsa
        -pubkey_file = ~/id_rsa-cert.pub
        -

        If you concatenate a cert with a private key then you can specify the merged file in both places.

        -

        Note: the cert must come first in the file. e.g.

        -
        cat id_rsa-cert.pub id_rsa > merged_key
        -

        Host key validation

        -

        By default rclone will not check the server's host key for validation. This can allow an attacker to replace a server with their own and if you use password authentication then this can lead to that password being exposed.

        -

        Host key matching, using standard known_hosts files can be turned on by enabling the known_hosts_file option. This can point to the file maintained by OpenSSH or can point to a unique file.

        -

        e.g. using the OpenSSH known_hosts file:

        -
        [remote]
         type = sftp
         host = example.com
         user = sftpuser
        @@ -28013,21 +33450,21 @@ known_hosts_file = ~/.ssh/known_hosts

        Shell access considerations

The shell type auto-detection logic, described above, means that by default rclone will try to run a shell command the first time a new sftp remote is accessed. If you configure an sftp remote without a config file, e.g. an on-the-fly remote, rclone will have nowhere to store the result, and it will re-run the command on every access. To avoid this you should explicitly set the shell_type option to the correct value, or to none if you want to prevent rclone from executing any remote shell commands.

        It is also important to note that, since the shell type decides how quoting and escaping of file paths used as command-line arguments are performed, configuring the wrong shell type may leave you exposed to command injection exploits. Make sure to confirm the auto-detected shell type, or explicitly set the shell type you know is correct, or disable shell access until you know.

        -

        Checksum

        +

        Checksum

        SFTP does not natively support checksums (file hash), but rclone is able to use checksumming if the same login has shell access, and can execute remote commands. If there is a command that can calculate compatible checksums on the remote system, Rclone can then be configured to execute this whenever a checksum is needed, and read back the results. Currently MD5 and SHA-1 are supported.

        Normally this requires an external utility being available on the server. By default rclone will try commands md5sum, md5 and rclone md5sum for MD5 checksums, and the first one found usable will be picked. Same with sha1sum, sha1 and rclone sha1sum commands for SHA-1 checksums. These utilities normally need to be in the remote's PATH to be found.

        In some cases the shell itself is capable of calculating checksums. PowerShell is an example of such a shell. If rclone detects that the remote shell is PowerShell, which means it most probably is a Windows OpenSSH server, rclone will use a predefined script block to produce the checksums when no external checksum commands are found (see shell access). This assumes PowerShell version 4.0 or newer.

The options md5sum_command and sha1sum_command can be used to customize the command to be executed for calculation of checksums. You can for example set a specific path to where the md5sum and sha1sum executables are located, or use them to specify some other tools that print checksums in a compatible format. The value can include command-line arguments, or even shell script blocks as with PowerShell. Rclone has subcommands md5sum and sha1sum that use a compatible format, which means if you have an rclone executable on the server it can be used. As mentioned above, they will be automatically picked up if found in PATH, but if not you can set something like /path/to/rclone md5sum as the value of option md5sum_command to make sure a specific executable is used.
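
A sketch of what this could look like in the remote's configuration (the host, user and command paths here are illustrative assumptions):

    [remote]
    type = sftp
    host = example.com
    user = sftpuser
    # Command paths are illustrative; point these at tools that exist on the server
    md5sum_command = /usr/local/bin/rclone md5sum
    sha1sum_command = sha1sum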

Remote checksumming is recommended and enabled by default. The first time rclone uses an SFTP remote, if the options md5sum_command or sha1sum_command are not set, it will check whether any of the default commands for each of them, as described above, can be used. The result will be saved in the remote configuration, so next time it will use the same. The value none will be set if none of the default commands could be used for a specific algorithm, and that algorithm will not be supported by the remote.

        Disabling the checksumming may be required if you are connecting to SFTP servers which are not under your control, and to which the execution of remote shell commands is prohibited. Set the configuration option disable_hashcheck to true to disable checksumming entirely, or set shell_type to none to disable all functionality based on remote shell command execution.

        -

        Modified time

        +

        Modified time

        Modified times are stored on the server to 1 second precision.

        Modified times are used in syncing and are fully supported.

Some SFTP servers disable setting/modifying the file modification time after upload (for example, certain configurations of ProFTPd with mod_sftp). If you are using one of these servers, you can set the option set_modtime = false in your rclone backend configuration to disable this behaviour.

        About command

        The about command returns the total space, free space, and used space on the remote for the disk of the specified path on the remote or, if not set, the disk of the root on the remote.

        SFTP usually supports the about command, but it depends on the server. If the server implements the vendor-specific VFS statistics extension, which is normally the case with OpenSSH instances, it will be used. If not, but the same login has access to a Unix shell, where the df command is available (e.g. in the remote's PATH), then this will be used instead. If the server shell is PowerShell, probably with a Windows OpenSSH server, rclone will use a built-in shell command (see shell access). If none of the above is applicable, about will fail.
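
For example, to see what the server reports for the configured remote:

    rclone about remote: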

        -

        Standard options

        +

        Standard options

        Here are the Standard options specific to sftp (SSH/SFTP).

        --sftp-host

        SSH host to connect to.

        @@ -28161,7 +33598,24 @@ known_hosts_file = ~/.ssh/known_hosts
      3. Type: bool
      4. Default: false
      5. -

        Advanced options

        +

        --sftp-ssh

        +

        Path and arguments to external ssh binary.

        +

        Normally rclone will use its internal ssh library to connect to the SFTP server. However it does not implement all possible ssh options so it may be desirable to use an external ssh binary.

        +

        Rclone ignores all the internal config if you use this option and expects you to configure the ssh binary with the user/host/port and any other options you need.

        +

        Important The ssh command must log in without asking for a password so needs to be configured with keys or certificates.

        +

        Rclone will run the command supplied either with the additional arguments "-s sftp" to access the SFTP subsystem or with commands such as "md5sum /path/to/file" appended to read checksums.

        +

        Any arguments with spaces in should be surrounded by "double quotes".

        +

        An example setting might be:

        +
        ssh -o ServerAliveInterval=20 user@example.com
        +

        Note that when using an external ssh binary rclone makes a new ssh connection for every hash it calculates.

        +

        Properties:

        +
          +
        • Config: ssh
        • +
        • Env Var: RCLONE_SFTP_SSH
        • +
        • Type: SpaceSepList
        • +
        • Default:
        • +
        +

        Advanced options

        Here are the Advanced options specific to sftp (SSH/SFTP).

        --sftp-known-hosts-file

        Optional path to known_hosts file.

        @@ -28198,6 +33652,12 @@ known_hosts_file = ~/.ssh/known_hosts
        rclone sync /home/local/directory remote:/directory --sftp-path-override /volume2/directory

        E.g. if home directory can be found in a shared folder called "home":

        rclone sync /home/local/directory remote:/home/directory --sftp-path-override /volume1/homes/USER/directory
        +

        To specify only the path to the SFTP remote's root, and allow rclone to add any relative subpaths automatically (including unwrapping/decrypting remotes as necessary), add the '@' character to the beginning of the path.

        +

        E.g. the first example above could be rewritten as:

        +
        rclone sync /home/local/directory remote:/directory --sftp-path-override @/volume2
        +

        Note that when using this method with Synology "home" folders, the full "/homes/USER" path should be specified instead of "/home".

        +

        E.g. the second example above should be rewritten as:

        +
        rclone sync /home/local/directory remote:/homes/USER/directory --sftp-path-override @/volume1

        Properties:

        • Config: path_override
        • @@ -28284,6 +33744,11 @@ known_hosts_file = ~/.ssh/known_hosts

      --sftp-server-command

      Specifies the path or command to run a sftp server on the remote host.

      The subsystem option is ignored when server_command is defined.

      +

      If adding server_command to the configuration file please note that it should not be enclosed in quotes, since that will make rclone fail.

      +

      A working example is:

      +
      [remote_name]
      +type = sftp
      +server_command = sudo /usr/libexec/openssh/sftp-server

      Properties:

      -

      Limitations

      +

      --sftp-socks-proxy

      +

      Socks 5 proxy host.

      +

      Supports the format user:pass@host:port, user@host:port, host:port.

      +

      Example:

      +
      myUser:myPass@localhost:9005
      +

      Properties:

      + +

      Limitations

      On some SFTP servers (e.g. Synology) the paths are different for SSH and SFTP so the hashes can't be calculated properly. For them using disable_hashcheck is a good idea.

      The only ssh agent supported under Windows is Putty's pageant.

      The Go SSH library disables the use of the aes128-cbc cipher by default, due to security concerns. This can be re-enabled on a per-connection basis by setting the use_insecure_cipher setting in the configuration file to true. Further details on the insecurity of this cipher can be found in this paper.

      @@ -28443,11 +33920,11 @@ known_hosts_file = ~/.ssh/known_hosts

      SMB is a communication protocol to share files over network.

      This relies on go-smb2 library for communication with SMB protocol.

      Paths are specified as remote:sharename (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:item/path/to/dir.

      -

      Notes

      +

      Notes

The first path segment must be the name of the share, which you entered when you started to share on Windows. On smbd, it's the section title in the smb.conf file (usually in /etc/samba/). You can find shares by querying the root if you're unsure (e.g. rclone lsd remote:).

You can't access the shared printers from rclone, obviously.

You can't use Anonymous access for logging in; you have to use the guest user with an empty password instead.

The rclone client tries to avoid 8.3 names when uploading files by encoding trailing spaces and periods.

The local backend on Windows can alternatively access SMB servers using UNC paths, by \\server\share. This doesn't apply to non-Windows OSes, such as Linux and macOS.

      -

      Configuration

      +

      Configuration

      Here is an example of making a SMB configuration.

      First run

      rclone config
      @@ -28522,7 +33999,7 @@ y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> d -

      Standard options

      +

      Standard options

      Here are the Standard options specific to smb (SMB / CIFS).

      --smb-host

      SMB server hostname to connect to.

      @@ -28583,7 +34060,7 @@ y/e/d> d
    17. Type: string
    18. Required: false
    19. -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to smb (SMB / CIFS).

      --smb-idle-timeout

      Max time before closing idle connections.

      @@ -28684,7 +34161,7 @@ y/e/d> d
    20. S3 backend: secret encryption key is shared with the gateway
    21. -

      Configuration

      +

      Configuration

To make a new Storj configuration you need one of the following:
- An Access Grant that someone else shared with you.
- An API Key of a Storj project you are a member of.

      Here is an example of how to make a remote called remote. First run:

       rclone config
      @@ -28781,7 +34258,7 @@ y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y -

      Standard options

      +

      Standard options

      Here are the Standard options specific to storj (Storj Decentralized Cloud Storage).

      --storj-provider

      Choose an authentication method.

      @@ -28860,7 +34337,7 @@ y/e/d> y
    22. Type: string
    23. Required: false
    24. -

      Usage

      +

      Usage

      Paths are specified as remote:bucket (or remote: for the lsf command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.

      Once configured you can then use rclone like this.

      Create a new bucket

      @@ -28914,15 +34391,15 @@ y/e/d> y
      rclone sync --interactive --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/

      Or even between another cloud storage and Storj.

      rclone sync --interactive --progress s3:bucket/path/to/dir/ storj:bucket/path/to/dir/
      -

      Limitations

      +

      Limitations

      rclone about is not supported by the rclone Storj backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

      See List of backends that do not support rclone about and rclone about

      -

      Known issues

      +

      Known issues

      If you get errors like too many open files this usually happens when the default ulimit for system max open files is exceeded. Native Storj protocol opens a large number of TCP connections (each of which is counted as an open file). For a single upload stream you can expect 110 TCP connections to be opened. For a single download stream you can expect 35. This batch of connections will be opened for every 64 MiB segment and you should also expect TCP connections to be reused. If you do many transfers you eventually open a connection to most storage nodes (thousands of nodes).

To fix these, please raise your system limits. You can do this by issuing a ulimit -n 65536 just before you run rclone. To change the limits more permanently you can add this to your shell startup script, e.g. $HOME/.bashrc, or change the system-wide configuration, usually /etc/sysctl.conf and/or /etc/security/limits.conf, but please refer to your operating system manual.
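
For example (a sketch for a bash-like shell session):

    # Raise the open-file limit for this session only, then run the transfer
    ulimit -n 65536
    rclone copy --progress /home/local/directory remote:bucket/path/to/dir/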

      SugarSync

      SugarSync is a cloud service that enables active synchronization of files across computers and other devices for file backup, access, syncing, and sharing.

      -

      Configuration

      +

      Configuration

      The initial setup for SugarSync involves getting a token from SugarSync which you can do with rclone. rclone config walks you through it.

      Here is an example of how to make a remote called remote. First run:

       rclone config
      @@ -28987,15 +34464,15 @@ y/e/d> y

      Paths are specified as remote:path

      Paths may be as deep as required, e.g. remote:directory/subdirectory.

      NB you can't create files in the top level folder you have to create a folder, which rclone will create as a "Sync Folder" with SugarSync.

      -

      Modified time and hashes

      +

      Modified time and hashes

      SugarSync does not support modification times or hashes, therefore syncing will default to --size-only checking. Note that using --update will work as rclone can read the time files were uploaded.

      -

      Restricted filename characters

      +

      Restricted filename characters

      SugarSync replaces the default restricted characters set except for DEL.

      Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.

      -

      Deleting files

      +

      Deleting files

      Deleted files will be moved to the "Deleted items" folder by default.

      However you can supply the flag --sugarsync-hard-delete or set the config parameter hard_delete = true if you would like files to be deleted straight away.

      -

      Standard options

      +

      Standard options

      Here are the Standard options specific to sugarsync (Sugarsync).

      --sugarsync-app-id

      Sugarsync App ID.

      @@ -29036,7 +34513,7 @@ y/e/d> y
    25. Type: bool
    26. Default: false
    27. -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to sugarsync (Sugarsync).

      --sugarsync-refresh-token

      Sugarsync refresh token.

      @@ -29108,7 +34585,7 @@ y/e/d> y
    28. Type: MultiEncoder
    29. Default: Slash,Ctl,InvalidUtf8,Dot
    30. -

      Limitations

      +

      Limitations

      rclone about is not supported by the SugarSync backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

      See List of backends that do not support rclone about and rclone about

      Tardigrade

      @@ -29117,7 +34594,7 @@ y/e/d> y

      This is a Backend for Uptobox file storage service. Uptobox is closer to a one-click hoster than a traditional cloud storage provider and therefore not suitable for long term storage.

      Paths are specified as remote:path

      Paths may be as deep as required, e.g. remote:directory/subdirectory.

      -

      Configuration

      +

      Configuration

      To configure an Uptobox backend you'll need your personal api token. You'll find it in your account settings

      Here is an example of how to make a remote called remote with the default setup. First run:

      rclone config
      @@ -29171,9 +34648,9 @@ y/e/d>
      rclone ls remote:

      To copy a local directory to an Uptobox directory called backup

      rclone copy /home/source remote:backup
      -

      Modified time and hashes

      +

      Modified time and hashes

      Uptobox supports neither modified times nor checksums. All timestamps will read as that set by --default-time.

      -

      Restricted filename characters

      +

      Restricted filename characters

      In addition to the default restricted characters set the following characters are also replaced:

      @@ -29197,7 +34674,7 @@ y/e/d>

      Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.

      -

      Standard options

      +

      Standard options

      Here are the Standard options specific to uptobox (Uptobox).

      --uptobox-access-token

      Your access token.

      @@ -29209,7 +34686,7 @@ y/e/d>
    31. Type: string
    32. Required: false
    33. -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to uptobox (Uptobox).

      --uptobox-private

      Set to make uploaded files private

      @@ -29230,17 +34707,21 @@ y/e/d>
    34. Type: MultiEncoder
    35. Default: Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot
    36. -

      Limitations

      +

      Limitations

      Uptobox will delete inactive files that have not been accessed in 60 days.

rclone about is not supported by this backend; an overview of used space can, however, be seen in the Uptobox web interface.

      Union

      -

      The union remote provides a unification similar to UnionFS using other remotes.

      -

      Paths may be as deep as required or a local path, e.g. remote:directory/subdirectory or /directory/subdirectory.

      +

      The union backend joins several remotes together to make a single unified view of them.

During the initial setup with rclone config you will specify the upstream remotes as a space separated list. The upstream remotes can either be local paths or other remotes.

      -

      Attribute :ro and :nc can be attach to the end of path to tag the remote as read only or no create, e.g. remote:directory/subdirectory:ro or remote:directory/subdirectory:nc.

      +

The attributes :ro, :nc and :writeback can be attached to the end of the remote to tag the remote as read only, no create or writeback, e.g. remote:directory/subdirectory:ro or remote:directory/subdirectory:nc.
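
For example, an upstreams value mixing a plain, a read only and a no create upstream might look like this (the remote names are illustrative):

    upstreams = /local/files gdrive:backup:ro s3:bucket/path:nc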

      +

      Subfolders can be used in upstream remotes. Assume a union remote named backup with the remotes mydrive:private/backup. Invoking rclone mkdir backup:desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop.

      -

      There will be no special handling of paths containing .. segments. Invoking rclone mkdir backup:../desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop.

      -

      Configuration

      +

      There is no special handling of paths containing .. segments. Invoking rclone mkdir backup:../desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop.

      +

      Configuration

      Here is an example of how to make a union called remote for local folders. First run:

       rclone config

      This will guide you through an interactive setup process:

      @@ -29461,7 +34942,19 @@ e/n/d/r/c/s/q> q -

      Standard options

      +

      Writeback

      +

      The tag :writeback on an upstream remote can be used to make a simple cache system like this:

      +
      [union]
      +type = union
      +action_policy = all
      +create_policy = all
      +search_policy = ff
      +upstreams = /local:writeback remote:dir
      +

      When files are opened for read, if the file is in remote:dir but not /local then rclone will copy the file entirely into /local before returning a reference to the file in /local. The copy will be done with the equivalent of rclone copy so will use --multi-thread-streams if configured. Any copies will be logged with an INFO log.

      +

      When files are written, they will be written to both remote:dir and /local.

      +

      As many remotes as desired can be added to upstreams but there should only be one :writeback tag.

      +

      Rclone does not manage the :writeback remote in any way other than writing files back to it. So if you need to expire old files or manage the size then you will have to do this yourself.

      +

      Standard options

      Here are the Standard options specific to union (Union merges the contents of several upstream fs).

      --union-upstreams

      List of space separated upstreams.

      @@ -29510,7 +35003,7 @@ e/n/d/r/c/s/q> q
    37. Type: int
    38. Default: 120
    39. -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to union (Union merges the contents of several upstream fs).

      --union-min-free-space

      Minimum viable free space for lfs/eplfs policies.

      @@ -29522,13 +35015,13 @@ e/n/d/r/c/s/q> q
    40. Type: SizeSuffix
    41. Default: 1Gi
    42. -

      Metadata

      +

      Metadata

      Any metadata supported by the underlying remote is read and written.

      See the metadata docs for more info.

      WebDAV

      Paths are specified as remote:path

      Paths may be as deep as required, e.g. remote:directory/subdirectory.

      -

      Configuration

      +

      Configuration

      To configure the WebDAV remote you will need to have a URL for it, and a username and password. If you know what kind of system you are connecting to then rclone can enable extra features.

      Here is an example of how to make a remote called remote. First run:

       rclone config
      @@ -29600,10 +35093,10 @@ y/e/d> y
      rclone ls remote:

      To copy a local directory to an WebDAV directory called backup

      rclone copy /home/source remote:backup
      -

      Modified time and hashes

      +

      Modified time and hashes

      Plain WebDAV does not support modified times. However when used with Fastmail Files, Owncloud or Nextcloud rclone will support modified times.

      Likewise plain WebDAV does not support hashes, however when used with Fastmail Files, Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of Owncloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them.

      -

      Standard options

      +

      Standard options

      Here are the Standard options specific to webdav (WebDAV).

      --webdav-url

      URL of http host to connect to.

      @@ -29680,7 +35173,7 @@ y/e/d> y
    43. Type: string
    44. Required: false
    45. -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to webdav (WebDAV).

      --webdav-bearer-token-command

      Command to run to get a bearer token.

      @@ -29810,7 +35303,7 @@ vendor = other bearer_token_command = oidc-token XDC

      Yandex Disk

      Yandex Disk is a cloud storage solution created by Yandex.

      -

      Configuration

      +

      Configuration

      Here is an example of making a yandex configuration. First run

      rclone config

      This will guide you through an interactive setup process:

      @@ -29864,18 +35357,18 @@ y/e/d> y

      Sync /home/local/directory to the remote path, deleting any excess files in the path.

      rclone sync --interactive /home/local/directory remote:directory

      Yandex paths may be as deep as required, e.g. remote:directory/subdirectory.

      -

      Modified time

      +

      Modified time

      Modified times are supported and are stored accurate to 1 ns in custom metadata called rclone_modified in RFC3339 with nanoseconds format.

      MD5 checksums

      MD5 checksums are natively supported by Yandex Disk.

      -

      Emptying Trash

      +

      Emptying Trash

      If you wish to empty your trash you can use the rclone cleanup remote: command which will permanently delete all your trashed files. This command does not take any path arguments.

      -

      Quota information

      +

      Quota information

      To view your current quota you can use the rclone about remote: command which will display your usage limit (quota) and the current usage.

      -

      Restricted filename characters

      +

      Restricted filename characters

      The default restricted characters set are replaced.

      Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

      -

      Standard options

      +

      Standard options

      Here are the Standard options specific to yandex (Yandex Disk).

      --yandex-client-id

      OAuth Client Id.

      @@ -29897,7 +35390,7 @@ y/e/d> y
    46. Type: string
    47. Required: false
    48. -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to yandex (Yandex Disk).

      --yandex-token

      OAuth Access Token as a JSON blob.

      @@ -29947,13 +35440,13 @@ y/e/d> y
    49. Type: MultiEncoder
    50. Default: Slash,Del,Ctl,InvalidUtf8,Dot
    51. -

      Limitations

      +

      Limitations

      When uploading very large files (bigger than about 5 GiB) you will need to increase the --timeout parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you'll see net/http: timeout awaiting response headers errors in the logs if this is happening. Setting the timeout to twice the max size of file in GiB should be enough, so if you want to upload a 30 GiB file set a timeout of 2 * 30 = 60m, that is --timeout 60m.
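
For example, uploading a 30 GiB file with the timeout suggested above:

    # 2 * 30 GiB => a 60 minute timeout
    rclone copy /path/to/30GiB-file remote:dir --timeout 60m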

      Having a Yandex Mail account is mandatory to use the Yandex.Disk subscription. Token generation will work without a mail account, but Rclone won't be able to complete any actions.

      [403 - DiskUnsupportedUserAccountTypeError] User account type is not supported.

      Zoho Workdrive

      Zoho WorkDrive is a cloud storage solution created by Zoho.

      -

      Configuration

      +

      Configuration

      Here is an example of making a zoho configuration. First run

      rclone config

      This will guide you through an interactive setup process:

      @@ -30026,15 +35519,15 @@ y/e/d>

      Sync /home/local/directory to the remote path, deleting any excess files in the path.

      rclone sync --interactive /home/local/directory remote:directory

      Zoho paths may be as deep as required, eg remote:directory/subdirectory.

      -

      Modified time

      +

      Modified time

Modified times are currently not supported for Zoho Workdrive.

      Checksums

      No checksums are supported.

      -

      Usage information

      +

      Usage information

      To view your current quota you can use the rclone about remote: command which will display your current usage.

      -

      Restricted filename characters

      +

      Restricted filename characters

      Only control characters and invalid UTF-8 are replaced. In addition most Unicode full-width characters are not supported at all and will be removed from filenames during upload.

      -

      Standard options

      +

      Standard options

      Here are the Standard options specific to zoho (Zoho).

      --zoho-client-id

      OAuth Client Id.

      @@ -30093,7 +35586,7 @@ y/e/d> -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to zoho (Zoho).

      --zoho-token

      OAuth Access Token as a JSON blob.

      @@ -30146,9 +35639,9 @@ y/e/d>

      Local paths are specified as normal filesystem paths, e.g. /path/to/wherever, so

      rclone sync --interactive /home/source /tmp/destination

      Will sync /home/source to /tmp/destination.

      -

      Configuration

      +

      Configuration

For consistency's sake one can also configure a remote of type local in the config file, and access the local filesystem using rclone remote paths, e.g. remote:path/to/wherever, but it is probably easier not to.

      -

      Modified time

      +

      Modified time

Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.

      Filenames

      Filenames should be encoded in UTF-8 on disk. This is the normal case for Windows and OS X.

      @@ -30517,7 +36010,7 @@ $ tree /tmp/b 0 file2

      NB Rclone (like most unix tools such as du, rsync and tar) treats a bind mount to the same device as being on the same filesystem.

      NB This flag is only available on Unix based systems. On systems where it isn't supported (e.g. Windows) it will be ignored.

      -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to local (Local Disk).

      --local-nounc

      Disable UNC (long path names) conversion on Windows.

      @@ -30680,7 +36173,7 @@ $ tree /tmp/b
    52. Type: MultiEncoder
    53. Default: Slash,Dot
    54. -

      Metadata

      +

      Metadata

      Depending on which OS is in use the local backend may return only some of the system metadata. Setting system metadata is supported on all OSes but setting user metadata is only supported on linux, freebsd, netbsd, macOS and Solaris. It is not supported on Windows yet (see pkg/attrs#47).

      User metadata is stored as extended attributes (which may not be supported by all file systems) under the "user.*" prefix.

      Here are the possible system metadata items for the local backend.

      @@ -30754,7 +36247,7 @@ $ tree /tmp/b

      See the metadata docs for more info.

      -

      Backend commands

      +

      Backend commands

      Here are the commands specific to the local backend.

      Run them with

      rclone backend COMMAND remote:
      @@ -30770,7 +36263,246 @@ $ tree /tmp/b
    55. "echo": echo the input arguments
    56. "error": return an error based on option value
    57. -

      Changelog

      +

      Changelog

      +

      v1.64.0 - 2023-09-11

      +

      See commits

      + +

      v1.63.1 - 2023-07-17

      +

      See commits

      +

      v1.63.0 - 2023-06-30

      See commits

      Bugs and Limitations

      -

      Limitations

      +

      Limitations

      Directory timestamps aren't preserved

      Rclone doesn't currently preserve the timestamps of directories. This is because rclone only really considers objects when syncing.

      Rclone struggles with millions of files in a directory/bucket

      @@ -38242,7 +43974,6 @@ THE SOFTWARE.
Chris Nelson
Felix Bünemann
Atílio Antônio
- Roberto Ricci
Carlo Mion
Chris Lu
Vitor Arruda
@@ -38439,6 +44170,42 @@ THE SOFTWARE.
Peter Fern
zzq
mac-15
+ Sawada Tsunayoshi
+ Dean Attali
+ Fjodor42
+ BakaWang
+ Mahad
+ Vladislav Vorobev
+ darix
+ Benjamin
+ Chun-Hung Tseng
+ Ricardo D'O. Albanus
+ gabriel-suela
+ Tiago Boeing
+ Edwin Mackenzie-Owen
+ Niklas Hambüchen
+ yuudi
+ Zach
+ nielash
+ Julian Lepinski
+ Raymond Berger
+ Nihaal Sangha
+ Masamune3210
+ James Braza
+ antoinetran
+ alexia
+ nielash
+ Vitor Gomes
+ Jacob Hands
+ hideo aoyama
+ Roberto Ricci
+ Bjørn Smith
+ Alishan Ladhani
+ zjx20
+ Oksana
+ Volodymyr Kit
+ David Pedersen
+ Drew Stinnett
Contact the rclone project

      Forum

@@ -38446,6 +44213,12 @@ THE SOFTWARE.
+ Business support
+
+ For business support or sponsorship enquiries please see:
+

      GitHub repository

      The project's repository is located at:

      There you can file bug reports or contribute with pull requests.

      Twitter

- You can also follow me on twitter for rclone announcements:
+ You can also follow Nick on twitter for rclone announcements:

      Email

- Or if all else fails or you want to ask something private or confidential email Nick Craig-Wood. Please don't email me requests for help - those are better directed to the forum. Thanks!
+ Or if all else fails or you want to ask something private or confidential
+
+ Please don't email requests for help to this address - those are better directed to the forum unless you'd like to sign up for business support.

      diff --git a/MANUAL.md b/MANUAL.md index 47abfb346..9b4c2e21d 100644 --- a/MANUAL.md +++ b/MANUAL.md @@ -1,6 +1,6 @@ % rclone(1) User Manual % Nick Craig-Wood -% Jun 30, 2023 +% Sep 11, 2023 # Rclone syncs your files to cloud storage @@ -18,7 +18,7 @@ Rclone is a command-line program to manage files on cloud storage. It is a feature-rich alternative to cloud vendors' web storage -interfaces. [Over 40 cloud storage products](#providers) support +interfaces. [Over 70 cloud storage products](#providers) support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols. @@ -133,6 +133,7 @@ WebDAV or S3, that work out of the box.) - IDrive e2 - IONOS Cloud - Koofr +- Leviia Object Storage - Liara Object Storage - Mail.ru Cloud - Memset Memstore @@ -154,8 +155,10 @@ WebDAV or S3, that work out of the box.) - PikPak - premiumize.me - put.io +- Proton Drive - QingStor - Qiniu Cloud Object Storage (Kodo) +- Quatrix by Maytech - Rackspace Cloud Files - rsync.net - Scaleway @@ -167,6 +170,7 @@ WebDAV or S3, that work out of the box.) - SMB / CIFS - StackPath - Storj +- Synology - SugarSync - Tencent Cloud Object Storage (COS) - Uptobox @@ -217,6 +221,9 @@ run `rclone -h`. Already installed rclone can be easily updated to the latest version using the [rclone selfupdate](https://rclone.org/commands/rclone_selfupdate/) command. +See [the release signing docs](https://rclone.org/release_signing/) for how to verify +signatures on the release. + ## Script installation To install rclone on Linux/macOS/BSD systems, run: @@ -485,6 +492,28 @@ docker run --rm \ ls ~/data/mount kill %1 ``` +## Snap installation {#snap} + +[![Get it from the Snap Store](https://snapcraft.io/static/images/badges/en/snap-store-black.svg)](https://snapcraft.io/rclone) + +Make sure you have [Snapd installed](https://snapcraft.io/docs/installing-snapd) + +```bash +$ sudo snap install rclone +``` +Due to the strict confinement of Snap, rclone snap cannot acess real /home/$USER/.config/rclone directory, default config path is as below. + +- Default config directory: + - /home/$USER/snap/rclone/current/.config/rclone + +Note: Due to the strict confinement of Snap, `rclone mount` feature is `not` supported. + +If mounting is wanted, either install a precompiled binary or enable the relevant option when [installing from source](#source). + +Note that this is controlled by [community maintainer](https://github.com/boukendesho/rclone-snap) not the rclone developers so it may be out of date. Its current version is as below. + +[![rclone](https://snapcraft.io/rclone/badge.svg)](https://snapcraft.io/rclone) + ## Source installation {#source} @@ -824,7 +853,9 @@ See the following for detailed instructions for * [PikPak](https://rclone.org/pikpak/) * [premiumize.me](https://rclone.org/premiumizeme/) * [put.io](https://rclone.org/putio/) + * [Proton Drive](https://rclone.org/protondrive/) * [QingStor](https://rclone.org/qingstor/) + * [Quatrix by Maytech](https://rclone.org/quatrix/) * [Seafile](https://rclone.org/seafile/) * [SFTP](https://rclone.org/sftp/) * [Sia](https://rclone.org/sia/) @@ -886,20 +917,23 @@ rclone config [flags] -h, --help help for config ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. 
* [rclone config create](https://rclone.org/commands/rclone_config_create/) - Create a new remote with name, type and options. * [rclone config delete](https://rclone.org/commands/rclone_config_delete/) - Delete an existing remote. * [rclone config disconnect](https://rclone.org/commands/rclone_config_disconnect/) - Disconnects user from remote * [rclone config dump](https://rclone.org/commands/rclone_config_dump/) - Dump the config file as JSON. +* [rclone config edit](https://rclone.org/commands/rclone_config_edit/) - Enter an interactive configuration session. * [rclone config file](https://rclone.org/commands/rclone_config_file/) - Show path of configuration file in use. * [rclone config password](https://rclone.org/commands/rclone_config_password/) - Update password in an existing remote. * [rclone config paths](https://rclone.org/commands/rclone_config_paths/) - Show paths used for configuration, cache, temp etc. * [rclone config providers](https://rclone.org/commands/rclone_config_providers/) - List in JSON format all the providers and options. * [rclone config reconnect](https://rclone.org/commands/rclone_config_reconnect/) - Re-authenticates user with remote. +* [rclone config redacted](https://rclone.org/commands/rclone_config_redacted/) - Print redacted (decrypted) config file, or the redacted config for a single remote. * [rclone config show](https://rclone.org/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote. * [rclone config touch](https://rclone.org/commands/rclone_config_touch/) - Ensure configuration file exists. * [rclone config update](https://rclone.org/commands/rclone_config_update/) - Update options in an existing remote. @@ -980,9 +1014,95 @@ rclone copy source:path dest:path [flags] -h, --help help for copy ``` + +## Copy Options + +Flags for anything which can Copy a file. + +``` + --check-first Do all the checks before starting transfers + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). 
+ --compare-dest stringArray Include additional comma separated server-side paths during comparison + --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination + --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD") + --ignore-case-sync Ignore case when synchronizing + --ignore-checksum Skip post copy check of checksums + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files, fail if existing files have been modified + --inplace Download directly to destination file instead of atomic download to temp/rename + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) + --max-duration Duration Maximum duration rclone will transfer data for (default 0s) + --max-transfer SizeSuffix Maximum size of data to transfer (default off) + -M, --metadata If set, preserve metadata when copying objects + --modify-window Duration Max time diff to be considered the same (default 1ns) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) + --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) + --no-check-dest Don't check the destination, copy regardless + --no-traverse Don't traverse destination file system on copy + --no-update-modtime Don't update destination mod-time if files identical + --order-by string Instructions on how to order the transfers, e.g. 'size,descending' + --refresh-times Refresh the modtime of remote files + --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs + --size-only Skip based on size only, not mod-time or checksum + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki) + -u, --update Skip files that are newer on the destination +``` + +## Important Options + +Important flags useful for most commands. + +``` + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +``` + +## Filter Options + +Flags for filtering directory listings. 
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + +## Listing Options + +Flags for listing directories. + +``` + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1040,9 +1160,113 @@ rclone sync source:path dest:path [flags] -h, --help help for sync ``` + +## Copy Options + +Flags for anything which can Copy a file. + +``` + --check-first Do all the checks before starting transfers + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). 
+ --compare-dest stringArray Include additional comma separated server-side paths during comparison + --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination + --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD") + --ignore-case-sync Ignore case when synchronizing + --ignore-checksum Skip post copy check of checksums + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files, fail if existing files have been modified + --inplace Download directly to destination file instead of atomic download to temp/rename + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) + --max-duration Duration Maximum duration rclone will transfer data for (default 0s) + --max-transfer SizeSuffix Maximum size of data to transfer (default off) + -M, --metadata If set, preserve metadata when copying objects + --modify-window Duration Max time diff to be considered the same (default 1ns) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) + --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) + --no-check-dest Don't check the destination, copy regardless + --no-traverse Don't traverse destination file system on copy + --no-update-modtime Don't update destination mod-time if files identical + --order-by string Instructions on how to order the transfers, e.g. 'size,descending' + --refresh-times Refresh the modtime of remote files + --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs + --size-only Skip based on size only, not mod-time or checksum + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki) + -u, --update Skip files that are newer on the destination +``` + +## Sync Options + +Flags just used for `rclone sync`. + +``` + --backup-dir string Make backups into hierarchy based in DIR + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --ignore-errors Delete even if there are I/O errors + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off) + --suffix string Suffix to add to changed files + --suffix-keep-extension Preserve the extension when using --suffix + --track-renames When synchronizing, track file renames and do a server-side move if possible + --track-renames-strategy string Strategies to use when synchronizing using track-renames hash|modtime|leaf (default "hash") +``` + +## Important Options + +Important flags useful for most commands. 
+ +``` + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +``` + +## Filter Options + +Flags for filtering directory listings. + +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + +## Listing Options + +Flags for listing directories. + +``` + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1096,9 +1320,95 @@ rclone move source:path dest:path [flags] -h, --help help for move ``` + +## Copy Options + +Flags for anything which can Copy a file. + +``` + --check-first Do all the checks before starting transfers + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). 
+ --compare-dest stringArray Include additional comma separated server-side paths during comparison + --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination + --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD") + --ignore-case-sync Ignore case when synchronizing + --ignore-checksum Skip post copy check of checksums + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files, fail if existing files have been modified + --inplace Download directly to destination file instead of atomic download to temp/rename + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) + --max-duration Duration Maximum duration rclone will transfer data for (default 0s) + --max-transfer SizeSuffix Maximum size of data to transfer (default off) + -M, --metadata If set, preserve metadata when copying objects + --modify-window Duration Max time diff to be considered the same (default 1ns) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) + --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) + --no-check-dest Don't check the destination, copy regardless + --no-traverse Don't traverse destination file system on copy + --no-update-modtime Don't update destination mod-time if files identical + --order-by string Instructions on how to order the transfers, e.g. 'size,descending' + --refresh-times Refresh the modtime of remote files + --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs + --size-only Skip based on size only, not mod-time or checksum + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki) + -u, --update Skip files that are newer on the destination +``` + +## Important Options + +Important flags useful for most commands. + +``` + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +``` + +## Filter Options + +Flags for filtering directory listings. 
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + +## Listing Options + +Flags for listing directories. + +``` + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1148,9 +1458,58 @@ rclone delete remote:path [flags] --rmdirs rmdirs removes empty directories but leaves root intact ``` + +## Important Options + +Important flags useful for most commands. + +``` + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +``` + +## Filter Options + +Flags for filtering directory listings. 
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + +## Listing Options + +Flags for listing directories. + +``` + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1181,9 +1540,20 @@ rclone purge remote:path [flags] -h, --help help for purge ``` + +## Important Options + +Important flags useful for most commands. + +``` + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1201,9 +1571,20 @@ rclone mkdir remote:path [flags] -h, --help help for mkdir ``` + +## Important Options + +Important flags useful for most commands. + +``` + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. 
@@ -1232,9 +1613,20 @@ rclone rmdir remote:path [flags] -h, --help help for rmdir ``` + +## Important Options + +Important flags useful for most commands. + +``` + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1308,9 +1700,56 @@ rclone check source:path dest:path [flags] --one-way Check one way only, source files must exist on remote ``` + +## Check Options + +Flags used for `rclone check`. + +``` + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) +``` + +## Filter Options + +Flags for filtering directory listings. + +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + +## Listing Options + +Flags for listing directories. + +``` + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1366,9 +1805,48 @@ rclone ls remote:path [flags] -h, --help help for ls ``` + +## Filter Options + +Flags for filtering directory listings. 
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + +## Listing Options + +Flags for listing directories. + +``` + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1435,9 +1913,48 @@ rclone lsd remote:path [flags] -R, --recursive Recurse into the listing ``` + +## Filter Options + +Flags for filtering directory listings. 
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + +## Listing Options + +Flags for listing directories. + +``` + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1493,9 +2010,48 @@ rclone lsl remote:path [flags] -h, --help help for lsl ``` + +## Filter Options + +Flags for filtering directory listings. 
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + +## Listing Options + +Flags for listing directories. + +``` + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1538,9 +2094,48 @@ rclone md5sum remote:path [flags] --output-file string Output hashsums to a file rather than the terminal ``` + +## Filter Options + +Flags for filtering directory listings. 
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + +## Listing Options + +Flags for listing directories. + +``` + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1586,9 +2181,48 @@ rclone sha1sum remote:path [flags] --output-file string Output hashsums to a file rather than the terminal ``` + +## Filter Options + +Flags for filtering directory listings. 
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + +## Listing Options + +Flags for listing directories. + +``` + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1629,9 +2263,48 @@ rclone size remote:path [flags] --json Format output as JSON ``` + +## Filter Options + +Flags for filtering directory listings. 
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + +## Listing Options + +Flags for listing directories. + +``` + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1691,9 +2364,10 @@ rclone version [flags] -h, --help help for version ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1718,9 +2392,20 @@ rclone cleanup remote:path [flags] -h, --help help for cleanup ``` + +## Important Options + +Important flags useful for most commands. + +``` + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1850,9 +2535,20 @@ rclone dedupe [mode] remote:path [flags] -h, --help help for dedupe ``` + +## Important Options + +Important flags useful for most commands. 
+ +``` + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1922,9 +2618,10 @@ rclone about remote: [flags] --json Format output as JSON ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1956,9 +2653,10 @@ rclone authorize [flags] --template string The path to a custom Go template for generating HTML responses ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -2008,9 +2706,20 @@ rclone backend remote:path [opts] [flags] -o, --option stringArray Option in the form name=value or name ``` + +## Important Options + +Important flags useful for most commands. + +``` + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -2040,22 +2749,102 @@ rclone bisync remote1:path1 remote2:path2 [flags] ## Options ``` - --check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort. - --check-filename string Filename for --check-access (default: RCLONE_TEST) - --check-sync string Controls comparison of final listings: true|false|only (default: true) (default "true") - --filters-file string Read filtering patterns from a file - --force Bypass --max-delete safety check and run the sync. Consider using with --verbose - -h, --help help for bisync - --localtime Use local time in listings (default: UTC) - --no-cleanup Retain working files (useful for troubleshooting and testing). - --remove-empty-dirs Remove empty directories at the final cleanup step. - -1, --resync Performs the resync run. Path1 files may overwrite Path2 versions. Consider using --verbose or --dry-run first. - --workdir string Use custom working dir - useful for testing. (default: $HOME/.cache/rclone/bisync) + --check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort. + --check-filename string Filename for --check-access (default: RCLONE_TEST) + --check-sync string Controls comparison of final listings: true|false|only (default: true) (default "true") + --create-empty-src-dirs Sync creation and deletion of empty directories. (Not compatible with --remove-empty-dirs) + --filters-file string Read filtering patterns from a file + --force Bypass --max-delete safety check and run the sync. Consider using with --verbose + -h, --help help for bisync + --ignore-listing-checksum Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks) + --localtime Use local time in listings (default: UTC) + --no-cleanup Retain working files (useful for troubleshooting and testing). 
+ --remove-empty-dirs Remove ALL empty directories at the final cleanup step. + --resilient Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk! + -1, --resync Performs the resync run. Path1 files may overwrite Path2 versions. Consider using --verbose or --dry-run first. + --workdir string Use custom working dir - useful for testing. (default: $HOME/.cache/rclone/bisync) +``` + + +## Copy Options + +Flags for anything which can Copy a file. + +``` + --check-first Do all the checks before starting transfers + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). + --compare-dest stringArray Include additional comma separated server-side paths during comparison + --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination + --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD") + --ignore-case-sync Ignore case when synchronizing + --ignore-checksum Skip post copy check of checksums + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files, fail if existing files have been modified + --inplace Download directly to destination file instead of atomic download to temp/rename + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) + --max-duration Duration Maximum duration rclone will transfer data for (default 0s) + --max-transfer SizeSuffix Maximum size of data to transfer (default off) + -M, --metadata If set, preserve metadata when copying objects + --modify-window Duration Max time diff to be considered the same (default 1ns) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) + --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) + --no-check-dest Don't check the destination, copy regardless + --no-traverse Don't traverse destination file system on copy + --no-update-modtime Don't update destination mod-time if files identical + --order-by string Instructions on how to order the transfers, e.g. 'size,descending' + --refresh-times Refresh the modtime of remote files + --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs + --size-only Skip based on size only, not mod-time or checksum + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki) + -u, --update Skip files that are newer on the destination +``` + +## Important Options + +Important flags useful for most commands. + +``` + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +``` + +## Filter Options + +Flags for filtering directory listings. 
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -2114,9 +2903,48 @@ rclone cat remote:path [flags] --tail int Only print the last N characters ``` + +## Filter Options + +Flags for filtering directory listings. 
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + +## Listing Options + +Flags for listing directories. + +``` + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -2180,9 +3008,48 @@ rclone checksum sumfile src:path [flags] --one-way Check one way only, source files must exist on remote ``` + +## Filter Options + +Flags for filtering directory listings. 
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + +## Listing Options + +Flags for listing directories. + +``` + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -2203,13 +3070,15 @@ Run with `--help` to list the supported shells. -h, --help help for completion ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone completion bash](https://rclone.org/commands/rclone_completion_bash/) - Output bash completion script for rclone. * [rclone completion fish](https://rclone.org/commands/rclone_completion_fish/) - Output fish completion script for rclone. +* [rclone completion powershell](https://rclone.org/commands/rclone_completion_powershell/) - Output powershell completion script for rclone. * [rclone completion zsh](https://rclone.org/commands/rclone_completion_zsh/) - Output zsh completion script for rclone. # rclone completion bash @@ -2247,9 +3116,10 @@ rclone completion bash [output_file] [flags] -h, --help help for bash ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. 
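+For example, to install the completion script system-wide (a sketch
+assuming your distribution loads completions from
+`/etc/bash_completion.d`, which may differ on your system):
+
+```
+sudo rclone completion bash /etc/bash_completion.d/rclone
+```
+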
-## SEE ALSO +# SEE ALSO * [rclone completion](https://rclone.org/commands/rclone_completion/) - Output completion script for a given shell. @@ -2288,44 +3158,48 @@ rclone completion fish [output_file] [flags] -h, --help help for fish ``` -See the [global flags page](https://rclone.org/flags/) for global options not listed here. - -## SEE ALSO - -* [rclone completion](https://rclone.org/commands/rclone_completion/) - Output completion script for a given shell. - -# rclone completion powershell - -Generate the autocompletion script for powershell - -# Synopsis - -Generate the autocompletion script for powershell. - -To load completions in your current shell session: - - rclone completion powershell | Out-String | Invoke-Expression - -To load completions for every new session, add the output of the above command -to your powershell profile. - - -``` -rclone completion powershell [flags] -``` - -# Options - -``` - -h, --help help for powershell - --no-descriptions disable completion descriptions -``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. # SEE ALSO -* [rclone completion](https://rclone.org/commands/rclone_completion/) - Generate the autocompletion script for the specified shell +* [rclone completion](https://rclone.org/commands/rclone_completion/) - Output completion script for a given shell. + +# rclone completion powershell + +Output powershell completion script for rclone. + +## Synopsis + + +Generate the autocompletion script for powershell. + +To load completions in your current shell session: + + rclone completion powershell | Out-String | Invoke-Expression + +To load completions for every new session, add the output of the above command +to your powershell profile. + +If output_file is "-" or missing, then the output will be written to stdout. + + +``` +rclone completion powershell [output_file] [flags] +``` + +## Options + +``` + -h, --help help for powershell +``` + + +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + +# SEE ALSO + +* [rclone completion](https://rclone.org/commands/rclone_completion/) - Output completion script for a given shell. # rclone completion zsh @@ -2362,9 +3236,10 @@ rclone completion zsh [output_file] [flags] -h, --help help for zsh ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone completion](https://rclone.org/commands/rclone_completion/) - Output completion script for a given shell. @@ -2494,9 +3369,10 @@ rclone config create name type [key value]* [flags] --state string State - use with --continue ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. @@ -2514,9 +3390,10 @@ rclone config delete name [flags] -h, --help help for delete ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. @@ -2544,9 +3421,10 @@ rclone config disconnect remote: [flags] -h, --help help for disconnect ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. 
@@ -2564,9 +3442,10 @@ rclone config dump [flags] -h, --help help for dump ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. @@ -2574,7 +3453,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Enter an interactive configuration session. -# Synopsis +## Synopsis Enter an interactive configuration session where you can setup new remotes and manage existing ones. You may also set or remove a @@ -2585,12 +3464,13 @@ password to protect your configuration. rclone config edit [flags] ``` -# Options +## Options ``` -h, --help help for edit ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. # SEE ALSO @@ -2611,9 +3491,10 @@ rclone config file [flags] -h, --help help for file ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. @@ -2647,9 +3528,10 @@ rclone config password name [key value]+ [flags] -h, --help help for password ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. @@ -2667,9 +3549,10 @@ rclone config paths [flags] -h, --help help for paths ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. @@ -2687,9 +3570,10 @@ rclone config providers [flags] -h, --help help for providers ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. @@ -2717,9 +3601,45 @@ rclone config reconnect remote: [flags] -h, --help help for reconnect ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO + +* [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. + +# rclone config redacted + +Print redacted (decrypted) config file, or the redacted config for a single remote. + +## Synopsis + +This prints a redacted copy of the config file, either the +whole config file or for a given remote. + +The config file will be redacted by replacing all passwords and other +sensitive info with XXX. + +This makes the config file suitable for posting online for support. + +It should be double checked before posting as the redaction may not be perfect. + + + +``` +rclone config redacted [] [flags] +``` + +## Options + +``` + -h, --help help for redacted +``` + + +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + +# SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. @@ -2737,9 +3657,10 @@ rclone config show [] [flags] -h, --help help for show ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. 
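+For example, to print the whole decrypted config, or only the section
+for a single remote (`myremote` is a placeholder name):
+
+```
+# show everything, then just one remote ("myremote" is illustrative)
+rclone config show
+rclone config show myremote
+```
+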
-## SEE ALSO +# SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. @@ -2757,9 +3678,10 @@ rclone config touch [flags] -h, --help help for touch ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. @@ -2889,9 +3811,10 @@ rclone config update name [key value]+ [flags] --state string State - use with --continue ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. @@ -2917,9 +3840,10 @@ rclone config userinfo remote: [flags] --json Format output as JSON ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. @@ -2969,9 +3893,95 @@ rclone copyto source:path dest:path [flags] -h, --help help for copyto ``` + +## Copy Options + +Flags for anything which can Copy a file. + +``` + --check-first Do all the checks before starting transfers + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). + --compare-dest stringArray Include additional comma separated server-side paths during comparison + --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination + --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD") + --ignore-case-sync Ignore case when synchronizing + --ignore-checksum Skip post copy check of checksums + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files, fail if existing files have been modified + --inplace Download directly to destination file instead of atomic download to temp/rename + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) + --max-duration Duration Maximum duration rclone will transfer data for (default 0s) + --max-transfer SizeSuffix Maximum size of data to transfer (default off) + -M, --metadata If set, preserve metadata when copying objects + --modify-window Duration Max time diff to be considered the same (default 1ns) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) + --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) + --no-check-dest Don't check the destination, copy regardless + --no-traverse Don't traverse destination file system on copy + --no-update-modtime Don't update destination mod-time if files identical + --order-by string Instructions on how to order the transfers, e.g. 'size,descending' + --refresh-times Refresh the modtime of remote files + --server-side-across-configs Allow server-side operations (e.g. 
copy) to work across different configs + --size-only Skip based on size only, not mod-time or checksum + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki) + -u, --update Skip files that are newer on the destination +``` + +## Important Options + +Important flags useful for most commands. + +``` + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +``` + +## Filter Options + +Flags for filtering directory listings. + +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + +## Listing Options + +Flags for listing directories. + +``` + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -3013,9 +4023,20 @@ rclone copyurl https://example.com dest:path [flags] --stdout Write the output to stdout rather than a file ``` + +## Important Options + +Important flags useful for most commands. + +``` + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. 
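+As a short sketch (the URL and destination are illustrative only): the
+first form saves the download on the remote, while the second uses
+`--stdout` to pass it to a pipe instead:
+
+```
+# "remote:" and the URL below are placeholders
+rclone copyurl https://example.com/robots.txt remote:backup/robots.txt
+rclone copyurl --stdout https://example.com/robots.txt | wc -c
+```
+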
-## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -3091,9 +4112,56 @@ rclone cryptcheck remote:path cryptedremote:path [flags] --one-way Check one way only, source files must exist on remote ``` + +## Check Options + +Flags used for `rclone check`. + +``` + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) +``` + +## Filter Options + +Flags for filtering directory listings. + +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + +## Listing Options + +Flags for listing directories. + +``` + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -3130,9 +4198,10 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags] --reverse Reverse cryptdecode, encrypts filenames ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -3158,9 +4227,20 @@ rclone deletefile remote:path [flags] -h, --help help for deletefile ``` + +## Important Options + +Important flags useful for most commands. 
+ +``` + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -3334,9 +4414,10 @@ rclone gendocs output_directory [flags] -h, --help help for gendocs ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -3399,9 +4480,48 @@ rclone hashsum remote:path [flags] --output-file string Output hashsums to a file rather than the terminal ``` + +## Filter Options + +Flags for filtering directory listings. + +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + +## Listing Options + +Flags for listing directories. + +``` + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -3446,9 +4566,10 @@ rclone link remote:path [flags] --unlink Remove existing public link to file/folder ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. 
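+For example, to create a public link and later remove it again with
+`--unlink` (the path is illustrative):
+
+```
+# "remote:path/to/file" is a placeholder
+rclone link remote:path/to/file
+rclone link --unlink remote:path/to/file
+```
+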
-## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -3475,9 +4596,10 @@ rclone listremotes [flags] --long Show the type as well as names ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -3626,9 +4748,48 @@ rclone lsf remote:path [flags] -s, --separator string Separator for the items in the format (default ";") ``` + +## Filter Options + +Flags for filtering directory listings. + +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + +## Listing Options + +Flags for listing directories. + +``` + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -3755,9 +4916,48 @@ rclone lsjson remote:path [flags] --stat Just return the info for the pointed to file ``` + +## Filter Options + +Flags for filtering directory listings. 
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + +## Listing Options + +Flags for listing directories. + +``` + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -4297,12 +5497,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. - --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) - --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) - --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) - --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. 
+   --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+   --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+   --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+   --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+   --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+   --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
 
 If run with `-vv` rclone will print the location of the file cache. 
 The files are stored in the user cache file area which is OS dependent but
@@ -4319,14 +5520,15 @@ seconds.
 If rclone is quit or dies with files that haven't been uploaded, these
 will be uploaded next time rclone is run with the same flags.
 
-If using `--vfs-cache-max-size` note that the cache may exceed this size
-for two reasons. Firstly because it is only checked every
-`--vfs-cache-poll-interval`. Secondly because open files cannot be
-evicted from the cache. When `--vfs-cache-max-size`
-is exceeded, rclone will attempt to evict the least accessed files
-from the cache first. rclone will start with files that haven't
-been accessed for the longest. This cache flushing strategy is
-efficient and more relevant files are likely to remain cached.
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
+that the cache may exceed these quotas for two reasons. Firstly
+because it is only checked every `--vfs-cache-poll-interval`. Secondly
+because open files cannot be evicted from the cache. When
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
+rclone will attempt to evict the least accessed files from the cache
+first. rclone will start with files that haven't been accessed for the
+longest. This cache flushing strategy is efficient and more relevant
+files are likely to remain cached.
 
 The `--vfs-cache-max-age` will evict files from the cache after the
 set time since last access has passed. The default value of
@@ -4592,6 +5794,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
       --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
       --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
       --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+      --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
      --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
      --vfs-case-insensitive If a file name not found, find a case insensitive match
@@ -4608,9 +5811,39 @@ rclone mount remote:path /path/to/mountpoint [flags]
      --write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)
 ```
 
+
+## Filter Options
+
+Flags for filtering directory listings. 
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -4663,9 +5896,95 @@ rclone moveto source:path dest:path [flags] -h, --help help for moveto ``` + +## Copy Options + +Flags for anything which can Copy a file. + +``` + --check-first Do all the checks before starting transfers + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). 
+ --compare-dest stringArray Include additional comma separated server-side paths during comparison + --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination + --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD") + --ignore-case-sync Ignore case when synchronizing + --ignore-checksum Skip post copy check of checksums + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files, fail if existing files have been modified + --inplace Download directly to destination file instead of atomic download to temp/rename + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) + --max-duration Duration Maximum duration rclone will transfer data for (default 0s) + --max-transfer SizeSuffix Maximum size of data to transfer (default off) + -M, --metadata If set, preserve metadata when copying objects + --modify-window Duration Max time diff to be considered the same (default 1ns) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) + --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) + --no-check-dest Don't check the destination, copy regardless + --no-traverse Don't traverse destination file system on copy + --no-update-modtime Don't update destination mod-time if files identical + --order-by string Instructions on how to order the transfers, e.g. 'size,descending' + --refresh-times Refresh the modtime of remote files + --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs + --size-only Skip based on size only, not mod-time or checksum + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki) + -u, --update Skip files that are newer on the destination +``` + +## Important Options + +Important flags useful for most commands. + +``` + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +``` + +## Filter Options + +Flags for filtering directory listings. 
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + +## Listing Options + +Flags for listing directories. + +``` + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -4706,6 +6025,7 @@ press '?' to toggle the help on and off. The supported keys are: y copy current path to clipboard Y display current path ^L refresh screen (fix screen corruption) + r recalculate file sizes ? to toggle help on and off q/ESC/^c to quit @@ -4746,9 +6066,48 @@ rclone ncdu remote:path [flags] -h, --help help for ncdu ``` + +## Filter Options + +Flags for filtering directory listings. 
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + +## Listing Options + +Flags for listing directories. + +``` + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -4792,9 +6151,10 @@ rclone obscure password [flags] -h, --help help for obscure ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -4873,9 +6233,10 @@ rclone rc commands parameter [flags] --user string Username to use to rclone remote control ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -4928,9 +6289,20 @@ rclone rcat remote:path [flags] --size int File size hint to preallocate (default -1) ``` + +## Important Options + +Important flags useful for most commands. + +``` + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. 
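+As a minimal sketch, piping a short string from standard input to a
+file on the remote (the path is illustrative):
+
+```
+# "remote:path/to/file" is a placeholder
+echo "hello world" | rclone rcat remote:path/to/file
+```
+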
-## SEE ALSO
+# SEE ALSO
 
 * [rclone](https://rclone.org/commands/rclone/)	 - Show help for rclone commands, flags and backends.
 
@@ -5061,9 +6433,45 @@ rclone rcd * [flags]
   -h, --help   help for rcd
 ```
 
+
+## RC Options
+
+Flags to control the Remote Control API.
+
+```
+      --rc                                 Enable the remote control server
+      --rc-addr stringArray                IPaddress:Port or :Port to bind server to (default [localhost:5572])
+      --rc-allow-origin string             Origin which cross-domain request (CORS) can be executed from
+      --rc-baseurl string                  Prefix for URLs - leave blank for root
+      --rc-cert string                     TLS PEM key (concatenation of certificate and CA certificate)
+      --rc-client-ca string                Client certificate authority to verify clients with
+      --rc-enable-metrics                  Enable prometheus metrics on /metrics
+      --rc-files string                    Path to local files to serve on the HTTP server
+      --rc-htpasswd string                 A htpasswd file - if not provided no authentication is done
+      --rc-job-expire-duration Duration    Expire finished async jobs older than this value (default 1m0s)
+      --rc-job-expire-interval Duration    Interval to check for expired async jobs (default 10s)
+      --rc-key string                      TLS PEM Private key
+      --rc-max-header-bytes int            Maximum size of request header (default 4096)
+      --rc-min-tls-version string          Minimum TLS version that is acceptable (default "tls1.0")
+      --rc-no-auth                         Don't require auth for certain methods
+      --rc-pass string                     Password for authentication
+      --rc-realm string                    Realm for authentication
+      --rc-salt string                     Password hashing salt (default "dlPL2MqE")
+      --rc-serve                           Enable the serving of remote objects
+      --rc-server-read-timeout Duration    Timeout for server reading data (default 1h0m0s)
+      --rc-server-write-timeout Duration   Timeout for server writing data (default 1h0m0s)
+      --rc-template string                 User-specified template
+      --rc-user string                     User name for authentication
+      --rc-web-fetch-url string            URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
+      --rc-web-gui                         Launch WebGUI on localhost
+      --rc-web-gui-force-update            Force update to latest version of web gui
+      --rc-web-gui-no-open-browser         Don't open the browser automatically
+      --rc-web-gui-update                  Check and update to latest version of web gui
+```
+
 See the [global flags page](https://rclone.org/flags/) for global options not listed here.
 
-## SEE ALSO
+# SEE ALSO
 
 * [rclone](https://rclone.org/commands/rclone/)	 - Show help for rclone commands, flags and backends.
 
@@ -5087,7 +6495,10 @@ empty directories in. For example the [delete](https://rclone.org/commands/rclon
 command will delete files but leave the directory structure (unless
 used with option `--rmdirs`).
 
-To delete a path and any objects in it, use [purge](https://rclone.org/commands/rclone_purge/)
+This will delete up to `--checkers` directories concurrently, so if you have
+thousands of empty directories consider increasing this number (see the example below).
+
+To delete a path and any objects in it, use the [purge](https://rclone.org/commands/rclone_purge/)
 command.
 
 
@@ -5102,9 +6513,20 @@ rclone rmdirs remote:path [flags]
   --leave-root   Do not remove root directory if empty
 ```
 
+
+## Important Options
+
+Important flags useful for most commands.
+
+```
+  -n, --dry-run         Do a trial run with no permanent changes
+  -i, --interactive     Enable interactive mode
+  -v, --verbose count   Print lots more stuff (repeat for more)
+```
+
 See the [global flags page](https://rclone.org/flags/) for global options not listed here. 
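+For example (the path is illustrative); the second form raises the
+concurrency via the global `--checkers` flag as suggested above:
+
+```
+# "remote:path" is a placeholder
+rclone rmdirs --leave-root remote:path
+rclone rmdirs --checkers 16 remote:path
+```
+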
-## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -5115,9 +6537,10 @@ Update the rclone binary. ## Synopsis -This command downloads the latest release of rclone and replaces -the currently running binary. The download is verified with a hashsum -and cryptographically signed signature. +This command downloads the latest release of rclone and replaces the +currently running binary. The download is verified with a hashsum and +cryptographically signed signature; see [the release signing +docs](https://rclone.org/release_signing/) for details. If used without flags (or with implied `--stable` flag), this command will install the latest stable release. However, some issues may be fixed @@ -5150,7 +6573,7 @@ your OS) to update these too. This command with the default `--package zip` will update only the rclone executable so the local manual may become inaccurate after it. -The `rclone mount` command (https://rclone.org/commands/rclone_mount/) may +The [rclone mount](https://rclone.org/commands/rclone_mount/) command may or may not support extended FUSE options depending on the build and OS. `selfupdate` will refuse to update if the capability would be discarded. @@ -5179,9 +6602,10 @@ rclone selfupdate [flags] --version string Install the given rclone version (default: latest) ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -5209,9 +6633,10 @@ rclone serve [opts] [flags] -h, --help help for serve ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone serve dlna](https://rclone.org/commands/rclone_serve_dlna/) - Serve remote:path over DLNA @@ -5326,12 +6751,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. - --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) - --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) - --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) - --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -5348,14 +6774,15 @@ seconds. 
 If rclone is quit or dies with files that haven't been uploaded, these
 will be uploaded next time rclone is run with the same flags.
 
-If using `--vfs-cache-max-size` note that the cache may exceed this size
-for two reasons. Firstly because it is only checked every
-`--vfs-cache-poll-interval`. Secondly because open files cannot be
-evicted from the cache. When `--vfs-cache-max-size`
-is exceeded, rclone will attempt to evict the least accessed files
-from the cache first. rclone will start with files that haven't
-been accessed for the longest. This cache flushing strategy is
-efficient and more relevant files are likely to remain cached.
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
+that the cache may exceed these quotas for two reasons. Firstly
+because it is only checked every `--vfs-cache-poll-interval`. Secondly
+because open files cannot be evicted from the cache. When
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
+rclone will attempt to evict the least accessed files from the cache
+first. rclone will start with files that haven't been accessed for the
+longest. This cache flushing strategy is efficient and more relevant
+files are likely to remain cached.
 
 The `--vfs-cache-max-age` will evict files from the cache after the
 set time since last access has passed. The default value of
@@ -5608,6 +7035,7 @@ rclone serve dlna remote:path [flags]
      --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
      --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+      --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
      --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
      --vfs-case-insensitive If a file name not found, find a case insensitive match
@@ -5622,9 +7050,39 @@ rclone serve dlna remote:path [flags]
      --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
 ```
 
+
+## Filter Options
+
+Flags for filtering directory listings. 
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. @@ -5748,12 +7206,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. - --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) - --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) - --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) - --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) If run with `-vv` rclone will print the location of the file cache. 
The files are stored in the user cache file area which is OS dependent but
@@ -5770,14 +7229,15 @@ seconds.

If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.

-If using `--vfs-cache-max-size` note that the cache may exceed this size
-for two reasons. Firstly because it is only checked every
-`--vfs-cache-poll-interval`. Secondly because open files cannot be
-evicted from the cache. When `--vfs-cache-max-size`
-is exceeded, rclone will attempt to evict the least accessed files
-from the cache first. rclone will start with files that haven't
-been accessed for the longest. This cache flushing strategy is
-efficient and more relevant files are likely to remain cached.
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
+that the cache may exceed these quotas for two reasons. Firstly
+because it is only checked every `--vfs-cache-poll-interval`. Secondly
+because open files cannot be evicted from the cache. When
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
+rclone will attempt to evict the least accessed files from the cache
+first. rclone will start with files that haven't been accessed for the
+longest. This cache flushing strategy is efficient and more relevant
+files are likely to remain cached.

The `--vfs-cache-max-age` will evict files from the cache after the
set time since last access has passed. The default value of
@@ -6048,6 +7508,7 @@ rclone serve docker [flags]
      --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
      --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
+      --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration       Interval to poll the cache for stale objects (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
@@ -6064,9 +7525,39 @@ rclone serve docker [flags]
      --write-back-cache                       Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)
```
+
+## Filter Options
+
+Flags for filtering directory listings.
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. @@ -6171,12 +7662,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. - --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) - --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) - --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) - --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) If run with `-vv` rclone will print the location of the file cache. 
The files are stored in the user cache file area which is OS dependent but
@@ -6193,14 +7685,15 @@ seconds.

If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.

-If using `--vfs-cache-max-size` note that the cache may exceed this size
-for two reasons. Firstly because it is only checked every
-`--vfs-cache-poll-interval`. Secondly because open files cannot be
-evicted from the cache. When `--vfs-cache-max-size`
-is exceeded, rclone will attempt to evict the least accessed files
-from the cache first. rclone will start with files that haven't
-been accessed for the longest. This cache flushing strategy is
-efficient and more relevant files are likely to remain cached.
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
+that the cache may exceed these quotas for two reasons. Firstly
+because it is only checked every `--vfs-cache-poll-interval`. Secondly
+because open files cannot be evicted from the cache. When
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
+rclone will attempt to evict the least accessed files from the cache
+first. rclone will start with files that haven't been accessed for the
+longest. This cache flushing strategy is efficient and more relevant
+files are likely to remain cached.

The `--vfs-cache-max-age` will evict files from the cache after the
set time since last access has passed. The default value of
@@ -6537,6 +8030,7 @@ rclone serve ftp remote:path [flags]
      --user string                            User name for authentication (default "anonymous")
      --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
+      --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration       Interval to poll the cache for stale objects (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
@@ -6551,9 +8045,39 @@ rclone serve ftp remote:path [flags]
      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)
```
+
+## Filter Options
+
+Flags for filtering directory listings.
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. @@ -6748,12 +8272,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. - --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) - --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) - --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) - --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) If run with `-vv` rclone will print the location of the file cache. 
The files are stored in the user cache file area which is OS dependent but
@@ -6770,14 +8295,15 @@ seconds.

If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.

-If using `--vfs-cache-max-size` note that the cache may exceed this size
-for two reasons. Firstly because it is only checked every
-`--vfs-cache-poll-interval`. Secondly because open files cannot be
-evicted from the cache. When `--vfs-cache-max-size`
-is exceeded, rclone will attempt to evict the least accessed files
-from the cache first. rclone will start with files that haven't
-been accessed for the longest. This cache flushing strategy is
-efficient and more relevant files are likely to remain cached.
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
+that the cache may exceed these quotas for two reasons. Firstly
+because it is only checked every `--vfs-cache-poll-interval`. Secondly
+because open files cannot be evicted from the cache. When
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
+rclone will attempt to evict the least accessed files from the cache
+first. rclone will start with files that haven't been accessed for the
+longest. This cache flushing strategy is efficient and more relevant
+files are likely to remain cached.

The `--vfs-cache-max-age` will evict files from the cache after the
set time since last access has passed. The default value of
@@ -7093,6 +8619,7 @@ rclone serve http remote:path [flags]

```
      --addr stringArray                       IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080])
+      --allow-origin string                    Origin which cross-domain request (CORS) can be executed from
      --auth-proxy string                      A program to use to create the backend from the auth
      --baseurl string                         Prefix for URLs - leave blank for root
      --cert string                            TLS PEM key (concatenation of certificate and CA certificate)
@@ -7122,6 +8649,7 @@ rclone serve http remote:path [flags]
      --user string                            User name for authentication
      --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
+      --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration       Interval to poll the cache for stale objects (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
@@ -7136,9 +8664,39 @@ rclone serve http remote:path [flags]
      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)
```
+
+## Filter Options
+
+Flags for filtering directory listings.
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. @@ -7313,6 +8871,7 @@ rclone serve restic remote:path [flags] ``` --addr stringArray IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080]) + --allow-origin string Origin which cross-domain request (CORS) can be executed from --append-only Disallow deletion of repository data --baseurl string Prefix for URLs - leave blank for root --cache-objects Cache listed objects (default true) @@ -7333,9 +8892,10 @@ rclone serve restic remote:path [flags] --user string User name for authentication ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. @@ -7472,12 +9032,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. 
-    --vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
-    --vfs-cache-max-age duration         Max time since last access of objects in the cache (default 1h0m0s)
-    --vfs-cache-max-size SizeSuffix      Max total size of objects in the cache (default off)
-    --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects (default 1m0s)
-    --vfs-write-back duration            Time to writeback files after last use when using cache (default 5s)
+    --cache-dir string                       Directory rclone will use for caching.
+    --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
+    --vfs-cache-max-age duration             Max time since last access of objects in the cache (default 1h0m0s)
+    --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
+    --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
+    --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects (default 1m0s)
+    --vfs-write-back duration                Time to writeback files after last use when using cache (default 5s)

If run with `-vv` rclone will print the location of the file cache.  The
files are stored in the user cache file area which is OS dependent but
@@ -7494,14 +9055,15 @@ seconds.

If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.

-If using `--vfs-cache-max-size` note that the cache may exceed this size
-for two reasons. Firstly because it is only checked every
-`--vfs-cache-poll-interval`. Secondly because open files cannot be
-evicted from the cache. When `--vfs-cache-max-size`
-is exceeded, rclone will attempt to evict the least accessed files
-from the cache first. rclone will start with files that haven't
-been accessed for the longest. This cache flushing strategy is
-efficient and more relevant files are likely to remain cached.
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
+that the cache may exceed these quotas for two reasons. Firstly
+because it is only checked every `--vfs-cache-poll-interval`. Secondly
+because open files cannot be evicted from the cache. When
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
+rclone will attempt to evict the least accessed files from the cache
+first. rclone will start with files that haven't been accessed for the
+longest. This cache flushing strategy is efficient and more relevant
+files are likely to remain cached.

The `--vfs-cache-max-age` will evict files from the cache after the
set time since last access has passed. The default value of
@@ -7838,6 +9400,7 @@ rclone serve sftp remote:path [flags]
      --user string                            User name for authentication
      --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
+      --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration       Interval to poll the cache for stale objects (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
@@ -7852,9 +9415,39 @@ rclone serve sftp remote:path [flags]
      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)
```
+
+## Filter Options
+
+Flags for filtering directory listings.
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. @@ -8078,12 +9671,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. - --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) - --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) - --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) - --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) If run with `-vv` rclone will print the location of the file cache. 
The files are stored in the user cache file area which is OS dependent but
@@ -8100,14 +9694,15 @@ seconds.

If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.

-If using `--vfs-cache-max-size` note that the cache may exceed this size
-for two reasons. Firstly because it is only checked every
-`--vfs-cache-poll-interval`. Secondly because open files cannot be
-evicted from the cache. When `--vfs-cache-max-size`
-is exceeded, rclone will attempt to evict the least accessed files
-from the cache first. rclone will start with files that haven't
-been accessed for the longest. This cache flushing strategy is
-efficient and more relevant files are likely to remain cached.
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
+that the cache may exceed these quotas for two reasons. Firstly
+because it is only checked every `--vfs-cache-poll-interval`. Secondly
+because open files cannot be evicted from the cache. When
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
+rclone will attempt to evict the least accessed files from the cache
+first. rclone will start with files that haven't been accessed for the
+longest. This cache flushing strategy is efficient and more relevant
+files are likely to remain cached.

The `--vfs-cache-max-age` will evict files from the cache after the
set time since last access has passed. The default value of
@@ -8423,6 +10018,7 @@ rclone serve webdav remote:path [flags]

```
      --addr stringArray                       IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080])
+      --allow-origin string                    Origin which cross-domain request (CORS) can be executed from
      --auth-proxy string                      A program to use to create the backend from the auth
      --baseurl string                         Prefix for URLs - leave blank for root
      --cert string                            TLS PEM key (concatenation of certificate and CA certificate)
@@ -8454,6 +10050,7 @@ rclone serve webdav remote:path [flags]
      --user string                            User name for authentication
      --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
+      --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration       Interval to poll the cache for stale objects (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
@@ -8468,9 +10065,39 @@ rclone serve webdav remote:path [flags]
      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)
```
+
+## Filter Options
+
+Flags for filtering directory listings.
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. @@ -8514,9 +10141,10 @@ rclone settier tier remote:path [flags] -h, --help help for settier ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -8544,9 +10172,10 @@ so reading their documentation first is recommended. -h, --help help for test ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone test changenotify](https://rclone.org/commands/rclone_test_changenotify/) - Log any change notify requests for the remote passed in. @@ -8571,9 +10200,10 @@ rclone test changenotify remote: [flags] --poll-interval Duration Time to wait between polling for changes (default 10s) ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone test](https://rclone.org/commands/rclone_test/) - Run a test command @@ -8600,9 +10230,10 @@ rclone test histogram [remote:path] [flags] -h, --help help for histogram ``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. 
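+
+For example, a minimal sketch of making a histogram of file name
+lengths on a remote (`remote:` is a placeholder remote name):
+
+    rclone test histogram remote:path
+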
-## SEE ALSO
+# SEE ALSO

* [rclone test](https://rclone.org/commands/rclone_test/) - Run a test command

@@ -8628,6 +10259,7 @@ rclone test info [remote:path]+ [flags]

```
      --all                    Run all tests
+      --check-base32768        Check can store all possible base32768 characters
      --check-control          Check control characters
      --check-length           Check max filename length
      --check-normalization    Check UTF-8 Normalization
@@ -8637,9 +10269,10 @@ rclone test info [remote:path]+ [flags]
      --write-json string      Write results to file
```

+
See the [global flags page](https://rclone.org/flags/) for global options not listed here.

-## SEE ALSO
+# SEE ALSO

* [rclone test](https://rclone.org/commands/rclone_test/) - Run a test command

@@ -8663,9 +10296,10 @@ rclone test makefile <size> [<file>]+ [flags]
      --zero      Fill files with ASCII 0x00
```

+
See the [global flags page](https://rclone.org/flags/) for global options not listed here.

-## SEE ALSO
+# SEE ALSO

* [rclone test](https://rclone.org/commands/rclone_test/) - Run a test command

@@ -8696,9 +10330,10 @@ rclone test makefiles <dir> [flags]
      --zero                   Fill files with ASCII 0x00
```

+
See the [global flags page](https://rclone.org/flags/) for global options not listed here.

-## SEE ALSO
+# SEE ALSO

* [rclone test](https://rclone.org/commands/rclone_test/) - Run a test command

@@ -8716,9 +10351,10 @@ rclone test memory remote:path [flags]
  -h, --help   help for memory
```

+
See the [global flags page](https://rclone.org/flags/) for global options not listed here.

-## SEE ALSO
+# SEE ALSO

* [rclone test](https://rclone.org/commands/rclone_test/) - Run a test command

@@ -8764,9 +10400,58 @@ rclone touch remote:path [flags]
  -t, --timestamp string   Use specified time instead of the current time of day
```

+
+## Important Options
+
+Important flags useful for most commands.
+
+```
+  -n, --dry-run         Do a trial run with no permanent changes
+  -i, --interactive     Enable interactive mode
+  -v, --verbose count   Print lots more stuff (repeat for more)
+```
+
+## Filter Options
+
+Flags for filtering directory listings.
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + +## Listing Options + +Flags for listing directories. + +``` + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +``` + See the [global flags page](https://rclone.org/flags/) for global options not listed here. -## SEE ALSO +# SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -8833,9 +10518,48 @@ rclone tree remote:path [flags] --version Sort files alphanumerically by version ``` + +## Filter Options + +Flags for filtering directory listings. 
+
+```
+      --delete-excluded                     Delete files on dest excluded from sync
+      --exclude stringArray                 Exclude files matching pattern
+      --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
+      --exclude-if-present stringArray      Exclude directories if filename is present
+      --files-from stringArray              Read list of source-file names from file (use - to read from stdin)
+      --files-from-raw stringArray          Read list of source-file names from file without any processing of lines (use - to read from stdin)
+  -f, --filter stringArray                  Add a file filtering rule
+      --filter-from stringArray             Read file filtering patterns from a file (use - to read from stdin)
+      --ignore-case                         Ignore case in filters (case insensitive)
+      --include stringArray                 Include files matching pattern
+      --include-from stringArray            Read file include patterns from file (use - to read from stdin)
+      --max-age Duration                    Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --max-depth int                       If set limits the recursion depth to this (default -1)
+      --max-size SizeSuffix                 Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
+      --metadata-exclude stringArray        Exclude metadatas matching pattern
+      --metadata-exclude-from stringArray   Read metadata exclude patterns from file (use - to read from stdin)
+      --metadata-filter stringArray         Add a metadata filtering rule
+      --metadata-filter-from stringArray    Read metadata filtering patterns from a file (use - to read from stdin)
+      --metadata-include stringArray        Include metadatas matching pattern
+      --metadata-include-from stringArray   Read metadata include patterns from file (use - to read from stdin)
+      --min-age Duration                    Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --min-size SizeSuffix                 Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
+```
+
+## Listing Options
+
+Flags for listing directories.
+
+```
+      --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
+      --fast-list           Use recursive list if available; uses more memory but fewer transactions
+```
+
See the [global flags page](https://rclone.org/flags/) for global options not listed here.

-## SEE ALSO
+# SEE ALSO

* [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

@@ -9048,6 +10772,10 @@ possible to write in all of them.
This is mostly a problem on Windows, where the console traditionally
uses a non-Unicode character set - defined by the so-called "code page".

+Do not use single character names on Windows as they create ambiguity
+with Windows drive names, e.g. a remote called `C` is indistinguishable
+from the `C` drive. Rclone will always assume that a single letter name
+refers to a drive.

Quoting and the shell
---------------------

@@ -9338,6 +11066,9 @@ IPv4 address (1.2.3.4), an IPv6 address (1234::789A) or host name. If
the host name doesn't resolve or resolves to more than one IP address
it will give an error.

+You can use `--bind 0.0.0.0` to force rclone to use IPv4 addresses and
+`--bind ::0` to force rclone to use IPv6 addresses.

### --bwlimit=BANDWIDTH_SPEC ###

This option controls the bandwidth limit. For example

@@ -10151,14 +11882,14 @@ what will happen.

### --max-duration=TIME ###

-Rclone will stop scheduling new transfers when it has run for the
+Rclone will stop transferring when it has run for the
duration specified.
-
Defaults to off.

-When the limit is reached any existing transfers will complete.
+When the limit is reached all transfers will stop immediately.
+Use `--cutoff-mode` to modify this behaviour.

-Rclone won't exit with an error if the transfer limit is reached.
+Rclone will exit with exit code 10 if the duration limit is reached.
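+
+For example, to let a sync run for at most 30 minutes (a sketch;
+`source:` and `dest:` are placeholder remote names):
+
+    rclone sync source:path dest:path --max-duration 30m
+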

### --max-transfer=SIZE ###

Rclone will stop transferring when it has reached the size specified.
Defaults to off.

When the limit is reached all transfers will stop immediately.
+Use `--cutoff-mode` to modify this behaviour.

Rclone will exit with exit code 8 if the transfer limit is reached.

+### --cutoff-mode=hard|soft|cautious ###
+
+This modifies the behavior of `--max-transfer` and `--max-duration`.
+Defaults to `--cutoff-mode=hard`.
+
+Specifying `--cutoff-mode=hard` will stop transferring immediately
+when Rclone reaches the limit.
+
+Specifying `--cutoff-mode=soft` will stop starting new transfers
+when Rclone reaches the limit.
+
+Specifying `--cutoff-mode=cautious` will try to prevent Rclone
+from reaching the limit. Only applicable for `--max-transfer`.
+
## -M, --metadata

Setting this flag enables rclone to copy the metadata from the source
@@ -10181,20 +11927,6 @@ Add metadata `key` = `value` when uploading. This can be repeated as
many times as required. See the [metadata section](#metadata) for more
info.

-### --cutoff-mode=hard|soft|cautious ###
-
-This modifies the behavior of `--max-transfer`
-Defaults to `--cutoff-mode=hard`.
-
-Specifying `--cutoff-mode=hard` will stop transferring immediately
-when Rclone reaches the limit.
-
-Specifying `--cutoff-mode=soft` will stop starting new transfers
-when Rclone reaches the limit.
-
-Specifying `--cutoff-mode=cautious` will try to prevent Rclone
-from reaching the limit.
-
### --modify-window=TIME ###

When checking whether a file has been modified, this is the maximum
@@ -10210,12 +11942,12 @@ This command line flag allows you to override that computed default.

### --multi-thread-write-buffer-size=SIZE ###

-When downloading with multiple threads, rclone will buffer SIZE bytes in
-memory before writing to disk for each thread.
+When transferring with multiple threads, rclone will buffer SIZE bytes
+in memory before writing to disk for each thread.

This can improve performance if the underlying filesystem does not deal
well with a lot of small writes in different positions of the file, so
-if you see downloads being limited by disk write speed, you might want
+if you see transfers being limited by disk write speed, you might want
to experiment with different values. Especially for magnetic drives and
remote file systems a higher value can be useful.

@@ -10227,58 +11959,66 @@ As a final hint, size is not the only factor: block size (or similar
concept) can have an impact. In one case, we observed that exact
multiples of 16k performed much better than other values.

-### --multi-thread-cutoff=SIZE ###
+### --multi-thread-chunk-size=SizeSuffix ###

-When downloading files to the local backend above this size, rclone
-will use multiple threads to download the file (default 250M).
+Normally the chunk size for multi thread transfers is set by the backend.
+However some backends such as `local` and `smb` (which implement `OpenWriterAt`
+but not `OpenChunkWriter`) don't have a natural chunk size.

-Rclone preallocates the file (using `fallocate(FALLOC_FL_KEEP_SIZE)`
-on unix or `NTSetInformationFile` on Windows both of which takes no
-time) then each thread writes directly into the file at the correct
-place.
-This means that rclone won't create fragmented or sparse files
-and there won't be any assembly time at the end of the transfer.
+
+In this case the value of this option is used (default 64Mi).
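+
+For example, to try larger chunks on such a backend (a sketch; `smb:`
+is a placeholder remote):
+
+    rclone copy /path/to/file smb:share/path --multi-thread-chunk-size 128M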
+
+### --multi-thread-cutoff=SIZE {#multi-thread-cutoff}
+
+When transferring files above SIZE to capable backends, rclone will
+use multiple threads to transfer the file (default 256M).
+
+Capable backends are marked in the
+[overview](https://rclone.org/overview/#optional-features) as `MultithreadUpload`. (They
+need to implement either the `OpenWriterAt` or `OpenChunkWriter`
+internal interfaces). These include `local`, `s3`,
+`azureblob`, `b2`, `oracleobjectstorage` and `smb` at the time of
+writing.
+
+On the local disk, rclone preallocates the file (using
+`fallocate(FALLOC_FL_KEEP_SIZE)` on unix or `NTSetInformationFile` on
+Windows, both of which take no time) then each thread writes directly
+into the file at the correct place. This means that rclone won't
+create fragmented or sparse files and there won't be any assembly time
+at the end of the transfer.

-The number of threads used to download is controlled by
+The number of threads used to transfer is controlled by
`--multi-thread-streams`. Use `-vv` if you wish to see info about
the threads.

This will work with the `sync`/`copy`/`move` commands and friends
-`copyto`/`moveto`. Multi thread downloads will be used with `rclone
+`copyto`/`moveto`. Multi thread transfers will be used with `rclone
mount` and `rclone serve` if `--vfs-cache-mode` is set to `writes` or
above.

-**NB** that this **only** works for a local destination but will work
-with any source.
+**NB** that this **only** works with supported backends as the
+destination but will work with any backend as the source.

-**NB** that multi thread copies are disabled for local to local copies
+**NB** that multi-thread copies are disabled for local to local copies
as they are faster without, unless `--multi-thread-streams` is set
explicitly.

-**NB** on Windows using multi-thread downloads will cause the
-resulting files to be [sparse](https://en.wikipedia.org/wiki/Sparse_file).
+**NB** on Windows using multi-thread transfers to the local disk will
+cause the resulting files to be [sparse](https://en.wikipedia.org/wiki/Sparse_file).
Use `--local-no-sparse` to disable sparse files (which may cause long
-delays at the start of downloads) or disable multi-thread downloads
+delays at the start of transfers) or disable multi-thread transfers
with `--multi-thread-streams 0`.

### --multi-thread-streams=N ###

-When using multi thread downloads (see above `--multi-thread-cutoff`)
-this sets the maximum number of streams to use. Set to `0` to disable
-multi thread downloads (Default 4).
+When using multi thread transfers (see above `--multi-thread-cutoff`)
+this sets the number of streams to use. Set to `0` to disable multi
+thread transfers (Default 4).

-Exactly how many streams rclone uses for the download depends on the
-size of the file. To calculate the number of download streams Rclone
-divides the size of the file by the `--multi-thread-cutoff` and rounds
-up, up to the maximum set with `--multi-thread-streams`.
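+
+For example, to transfer a large file with 8 streams (a sketch;
+`remote:` is a placeholder remote):
+
+    rclone copy remote:big.iso /mnt/data --multi-thread-cutoff 256M --multi-thread-streams 8
+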
- -So if `--multi-thread-cutoff 250M` and `--multi-thread-streams 4` are -in effect (the defaults): - -- 0..250 MiB files will be downloaded with 1 stream -- 250..500 MiB files will be downloaded with 2 streams -- 500..750 MiB files will be downloaded with 3 streams -- 750+ MiB files will be downloaded with 4 streams +If the backend has a `--backend-upload-concurrency` setting (eg +`--s3-upload-concurrency`) then this setting will be used as the +number of transfers instead if it is larger than the value of +`--multi-thread-streams` or `--multi-thread-streams` isn't set. ### --no-check-dest ### @@ -11258,6 +12998,7 @@ it will log a high priority message if the retry was successful. * `7` - Fatal error (one that more retries won't fix, like account suspended) (Fatal errors) * `8` - Transfer exceeded - limit set by --max-transfer reached * `9` - Operation successful, but no files transferred + * `10` - Duration exceeded - limit set by --max-duration reached Environment Variables --------------------- @@ -11299,6 +13040,9 @@ for each backend. To find the name of the environment variable, you need to set, take `RCLONE_CONFIG_` + name of remote + `_` + name of config file option and make it all uppercase. +Note one implication here is the remote's name must be +convertible into a valid environment variable name, +so it can only contain letters, digits, or the `_` (underscore) character. For example, to configure an S3 remote named `mys3:` without a config file (using unix ways of setting environment variables): @@ -11512,7 +13256,7 @@ E.g. `rclone copy "remote:dir*.jpg" /path/to/dir` does not have a filter effect. `rclone copy remote:dir /path/to/dir --include "*.jpg"` does. **Important** Avoid mixing any two of `--include...`, `--exclude...` or -`--filter...` flags in an rclone command. The results may not be what +`--filter...` flags in an rclone command. The results might not be what you expect. Instead use a `--filter...` flag. ## Patterns for matching path/file names @@ -11573,7 +13317,7 @@ separator or the beginning of the path/file. - doesn't match "afile.jpg" - doesn't match "directory/file.jpg" -The top level of the remote may not be the top level of the drive. +The top level of the remote might not be the top level of the drive. E.g. for a Microsoft Windows local directory structure @@ -11852,7 +13596,7 @@ all files on `remote:` excluding those in root directory `dir` and sub directories. E.g. on Microsoft Windows `rclone ls remote: --exclude "*\[{JP,KR,HK}\]*"` -lists the files in `remote:` with `[JP]` or `[KR]` or `[HK]` in +lists the files in `remote:` without `[JP]` or `[KR]` or `[HK]` in their name. Quotes prevent the shell from interpreting the `\` characters.`\` characters escape the `[` and `]` so an rclone filter treats them literally rather than as a character-range. The `{` and `}` @@ -12692,7 +14436,7 @@ parameter, you would pass this parameter in your JSON blob. If using `rclone rc` this could be passed as - rclone rc operations/sync ... _config='{"CheckSum": true}' + rclone rc sync/sync ... _config='{"CheckSum": true}' Any config parameters you don't set will inherit the global defaults which were set with command line flags or environment variables. @@ -13120,6 +14864,28 @@ OR **Authentication is required for this call.** +### core/du: Returns disk usage of a locally attached disk. {#core-du} + +This returns the disk usage for the local directory passed in as dir. + +If the directory is not passed in, it defaults to the directory +pointed to by --cache-dir. 
+ +- dir - string (optional) + +Returns: + +``` +{ + "dir": "/", + "info": { + "Available": 361769115648, + "Free": 361785892864, + "Total": 982141468672 + } +} +``` + ### core/gc: Runs a garbage collection. {#core-gc} This tells the go runtime to do a garbage collection run. It isn't @@ -13199,6 +14965,10 @@ Returns the following values: "lastError": last error string, "renames" : number of files renamed, "retryError": boolean showing whether there has been at least one non-NoRetryError, + "serverSideCopies": number of server side copies done, + "serverSideCopyBytes": number bytes server side copied, + "serverSideMoves": number of server side moves done, + "serverSideMoveBytes": number bytes server side moved, "speed": average speed in bytes per second since start of the group, "totalBytes": total number of bytes in the group, "totalChecks": total number of checks in the group, @@ -13400,7 +15170,8 @@ Parameters: None. Results: -- jobids - array of integer job ids. +- executeId - string id of rclone executing (change after restart) +- jobids - array of integer job ids (starting at 1 on each restart) ### job/status: Reads the status of the job ID {#job-status} @@ -13803,6 +15574,27 @@ See the [rmdirs](https://rclone.org/commands/rclone_rmdirs/) command for more in **Authentication is required for this call.** +### operations/settier: Changes storage tier or class on all files in the path {#operations-settier} + +This takes the following parameters: + +- fs - a remote name string e.g. "drive:" + +See the [settier](https://rclone.org/commands/rclone_settier/) command for more information on the above. + +**Authentication is required for this call.** + +### operations/settierfile: Changes storage tier or class on the single file pointed to {#operations-settierfile} + +This takes the following parameters: + +- fs - a remote name string e.g. "drive:" +- remote - a path within that remote e.g. "dir" + +See the [settierfile](https://rclone.org/commands/rclone_settierfile/) command for more information on the above. + +**Authentication is required for this call.** + ### operations/size: Count the number of bytes and files in remote {#operations-size} This takes the following parameters: @@ -14038,11 +15830,16 @@ This takes the following parameters - checkFilename - file name for checkAccess (default: RCLONE_TEST) - maxDelete - abort sync if percentage of deleted files is above this threshold (default: 50) -- force - maxDelete safety check and run the sync +- force - Bypass maxDelete safety check and run the sync - checkSync - `true` by default, `false` disables comparison of final listings, `only` will skip sync, only compare listings from the last run +- createEmptySrcDirs - Sync creation and deletion of empty directories. + (Not compatible with --remove-empty-dirs) - removeEmptyDirs - remove empty directories at the final cleanup step - filtersFile - read filtering patterns from a file +- ignoreListingChecksum - Do not use checksums for listings +- resilient - Allow future runs to retry after certain less-serious errors, instead of requiring resync. + Use at your own risk! - workdir - server directory for history files (default: /home/ncw/.cache/rclone/bisync) - noCleanup - retain working files @@ -14472,7 +16269,9 @@ Here is an overview of the major features of each cloud storage system. 
| PikPak | MD5 | R | No | No | R | - | | premiumize.me | - | - | Yes | No | R | - | | put.io | CRC-32 | R/W | No | Yes | R | - | +| Proton Drive | SHA1 | R/W | No | No | R | - | | QingStor | MD5 | - ⁹ | No | No | R/W | - | +| Quatrix by Maytech | - | R/W | No | No | - | - | | Seafile | - | - | No | No | - | - | | SFTP | MD5, SHA1 ² | R/W | Depends | No | - | - | | Sia | - | - | No | No | - | - | @@ -14494,7 +16293,7 @@ This is an SHA256 sum of all the 4 MiB block SHA256s. ² SFTP supports checksums if the same login has shell access and `md5sum` or `sha1sum` as well as `echo` are in the remote's PATH. -³ WebDAV supports hashes when used with Fastmail Files. Owncloud and Nextcloud only. +³ WebDAV supports hashes when used with Fastmail Files, Owncloud and Nextcloud only. ⁴ WebDAV supports modtimes when used with Fastmail Files, Owncloud and Nextcloud only. @@ -14890,51 +16689,53 @@ See [the metadata docs](https://rclone.org/docs/#metadata) for more info. All rclone remotes support a base command set. Other features depend upon backend-specific capabilities. -| Name | Purge | Copy | Move | DirMove | CleanUp | ListR | StreamUpload | LinkSharing | About | EmptyDir | -| ---------------------------- |:-----:|:----:|:----:|:-------:|:-------:|:-----:|:------------:|:------------:|:-----:|:--------:| -| 1Fichier | No | Yes | Yes | No | No | No | No | Yes | No | Yes | -| Akamai Netstorage | Yes | No | No | No | No | Yes | Yes | No | No | Yes | -| Amazon Drive | Yes | No | Yes | Yes | No | No | No | No | No | Yes | -| Amazon S3 (or S3 compatible) | No | Yes | No | No | Yes | Yes | Yes | Yes | No | No | -| Backblaze B2 | No | Yes | No | No | Yes | Yes | Yes | Yes | No | No | -| Box | Yes | Yes | Yes | Yes | Yes ‡‡ | No | Yes | Yes | Yes | Yes | -| Citrix ShareFile | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | -| Dropbox | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes | Yes | -| Enterprise File Fabric | Yes | Yes | Yes | Yes | Yes | No | No | No | No | Yes | -| FTP | No | No | Yes | Yes | No | No | Yes | No | No | Yes | -| Google Cloud Storage | Yes | Yes | No | No | No | Yes | Yes | No | No | No | -| Google Drive | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | -| Google Photos | No | No | No | No | No | No | No | No | No | No | -| HDFS | Yes | No | Yes | Yes | No | No | Yes | No | Yes | Yes | -| HiDrive | Yes | Yes | Yes | Yes | No | No | Yes | No | No | Yes | -| HTTP | No | No | No | No | No | No | No | No | No | Yes | -| Internet Archive | No | Yes | No | No | Yes | Yes | No | Yes | Yes | No | -| Jottacloud | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | -| Koofr | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes | Yes | -| Mail.ru Cloud | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes | -| Mega | Yes | No | Yes | Yes | Yes | No | No | Yes | Yes | Yes | -| Memory | No | Yes | No | No | No | Yes | Yes | No | No | No | -| Microsoft Azure Blob Storage | Yes | Yes | No | No | No | Yes | Yes | No | No | No | -| Microsoft OneDrive | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes | -| OpenDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | -| OpenStack Swift | Yes † | Yes | No | No | No | Yes | Yes | No | Yes | No | -| Oracle Object Storage | No | Yes | No | No | Yes | Yes | Yes | No | No | No | -| pCloud | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes | -| PikPak | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes | -| premiumize.me | Yes | No | Yes | Yes | No | No | No | Yes | Yes | Yes | -| put.io | Yes | No | Yes | 
Yes | Yes | No | Yes | No | Yes | Yes | -| QingStor | No | Yes | No | No | Yes | Yes | No | No | No | No | -| Seafile | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | -| SFTP | No | No | Yes | Yes | No | No | Yes | No | Yes | Yes | -| Sia | No | No | No | No | No | No | Yes | No | No | Yes | -| SMB | No | No | Yes | Yes | No | No | Yes | No | No | Yes | -| SugarSync | Yes | Yes | Yes | Yes | No | No | Yes | Yes | No | Yes | -| Storj | Yes ☨ | Yes | Yes | No | No | Yes | Yes | Yes | No | No | -| Uptobox | No | Yes | Yes | Yes | No | No | No | No | No | No | -| WebDAV | Yes | Yes | Yes | Yes | No | No | Yes ‡ | No | Yes | Yes | -| Yandex Disk | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | Yes | -| Zoho WorkDrive | Yes | Yes | Yes | Yes | No | No | No | No | Yes | Yes | -| The local filesystem | Yes | No | Yes | Yes | No | No | Yes | No | Yes | Yes | +| Name | Purge | Copy | Move | DirMove | CleanUp | ListR | StreamUpload | MultithreadUpload | LinkSharing | About | EmptyDir | +| ---------------------------- |:-----:|:----:|:----:|:-------:|:-------:|:-----:|:------------:|:------------------|:------------:|:-----:|:--------:| +| 1Fichier | No | Yes | Yes | No | No | No | No | No | Yes | No | Yes | +| Akamai Netstorage | Yes | No | No | No | No | Yes | Yes | No | No | No | Yes | +| Amazon Drive | Yes | No | Yes | Yes | No | No | No | No | No | No | Yes | +| Amazon S3 (or S3 compatible) | No | Yes | No | No | Yes | Yes | Yes | Yes | Yes | No | No | +| Backblaze B2 | No | Yes | No | No | Yes | Yes | Yes | Yes | Yes | No | No | +| Box | Yes | Yes | Yes | Yes | Yes ‡‡ | No | Yes | No | Yes | Yes | Yes | +| Citrix ShareFile | Yes | Yes | Yes | Yes | No | No | No | No | No | No | Yes | +| Dropbox | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | Yes | Yes | +| Enterprise File Fabric | Yes | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | +| FTP | No | No | Yes | Yes | No | No | Yes | No | No | No | Yes | +| Google Cloud Storage | Yes | Yes | No | No | No | Yes | Yes | No | No | No | No | +| Google Drive | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | +| Google Photos | No | No | No | No | No | No | No | No | No | No | No | +| HDFS | Yes | No | Yes | Yes | No | No | Yes | No | No | Yes | Yes | +| HiDrive | Yes | Yes | Yes | Yes | No | No | Yes | No | No | No | Yes | +| HTTP | No | No | No | No | No | No | No | No | No | No | Yes | +| Internet Archive | No | Yes | No | No | Yes | Yes | No | No | Yes | Yes | No | +| Jottacloud | Yes | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes | +| Koofr | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | Yes | Yes | +| Mail.ru Cloud | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes | +| Mega | Yes | No | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes | +| Memory | No | Yes | No | No | No | Yes | Yes | No | No | No | No | +| Microsoft Azure Blob Storage | Yes | Yes | No | No | No | Yes | Yes | Yes | No | No | No | +| Microsoft OneDrive | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes | +| OpenDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | No | Yes | +| OpenStack Swift | Yes † | Yes | No | No | No | Yes | Yes | No | No | Yes | No | +| Oracle Object Storage | No | Yes | No | No | Yes | Yes | Yes | No | No | No | No | +| pCloud | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes | +| PikPak | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes | +| premiumize.me | Yes | No | Yes | Yes | No | No | No | No | Yes | Yes | Yes | +| put.io | Yes | No | Yes 
| Yes | Yes | No | Yes | No | No | Yes | Yes | +| Proton Drive | Yes | No | Yes | Yes | Yes | No | No | No | No | Yes | Yes | +| QingStor | No | Yes | No | No | Yes | Yes | No | No | No | No | No | +| Quatrix by Maytech | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | Yes | +| Seafile | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | +| SFTP | No | No | Yes | Yes | No | No | Yes | No | No | Yes | Yes | +| Sia | No | No | No | No | No | No | Yes | No | No | No | Yes | +| SMB | No | No | Yes | Yes | No | No | Yes | Yes | No | No | Yes | +| SugarSync | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | No | Yes | +| Storj | Yes ☨ | Yes | Yes | No | No | Yes | Yes | No | Yes | No | No | +| Uptobox | No | Yes | Yes | Yes | No | No | No | No | No | No | No | +| WebDAV | Yes | Yes | Yes | Yes | No | No | Yes ‡ | No | No | Yes | Yes | +| Yandex Disk | Yes | Yes | Yes | Yes | Yes | No | Yes | No | Yes | Yes | Yes | +| Zoho WorkDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | Yes | +| The local filesystem | Yes | No | Yes | Yes | No | No | Yes | Yes | No | Yes | Yes | ### Purge ### @@ -14998,6 +16799,12 @@ Some remotes allow files to be uploaded without knowing the file size in advance. This allows certain operations to work without spooling the file to local disk first, e.g. `rclone rcat`. +### MultithreadUpload ### + +Some remotes allow transfers to the remote to be sent as chunks in +parallel. If this is supported then rclone will use multi-thread +copying to transfer files much faster. + ### LinkSharing ### Sets the necessary permissions on a file or folder and prints a link @@ -15026,182 +16833,292 @@ The remote supports empty directories. See [Limitations](https://rclone.org/bugs # Global Flags This describes the global flags available to every rclone command -split into two groups, non backend and backend flags. +split into groups. -## Non Backend Flags -These flags are available for every command. +## Copy + +Flags for anything which can Copy a file. ``` - --ask-password Allow prompt for password for encrypted configuration (default true) - --auto-confirm If enabled, do not request console confirmation - --backup-dir string Make backups into hierarchy based in DIR - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name - --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi) - --bwlimit BwTimetable Bandwidth limit in KiB/s, or use suffix B|K|M|G|T|P or a full timetable - --bwlimit-file BwTimetable Bandwidth limit per file in KiB/s, or use suffix B|K|M|G|T|P or a full timetable - --ca-cert stringArray CA certificate used to verify servers - --cache-dir string Directory rclone will use for caching (default "$HOME/.cache/rclone") --check-first Do all the checks before starting transfers - --checkers int Number of checkers to run in parallel (default 8) - -c, --checksum Skip based on checksum (if available) & size, not mod-time & size - --client-cert string Client SSL certificate (PEM) for mutual TLS auth - --client-key string Client SSL private key (PEM) for mutual TLS auth - --color string When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default "AUTO") + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). 
--compare-dest stringArray Include additional comma separated server-side paths during comparison - --config string Config file (default "$HOME/.config/rclone/rclone.conf") - --contimeout Duration Connect timeout (default 1m0s) --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination - --cpuprofile string Write cpu profile to file --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD") - --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features (use --disable help to see a list) - --disable-http-keep-alives Disable HTTP keep-alives and use each connection once. - --disable-http2 Disable HTTP/2 in the global transport - -n, --dry-run Do a trial run with no permanent changes - --dscp string Set DSCP value to connections, value or name, e.g. CS1, LE, DF, AF21 - --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP headers - may contain sensitive info - --error-on-no-transfer Sets exit code 9 if no files are transferred, useful in scripts - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) - --exclude-if-present stringArray Exclude directories if filename is present - --expect-continue-timeout Duration Timeout when using expect / 100-continue in HTTP (default 1s) - --fast-list Use recursive list if available; uses more memory but fewer transactions - --files-from stringArray Read list of source-file names from file (use - to read from stdin) - --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) - -f, --filter stringArray Add a file filtering rule - --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) - --fs-cache-expire-duration Duration Cache remotes for this long (0 to disable caching) (default 5m0s) - --fs-cache-expire-interval Duration Interval to check for expired remotes (default 1m0s) - --header stringArray Set HTTP header for all transactions - --header-download stringArray Set HTTP header for download transactions - --header-upload stringArray Set HTTP header for upload transactions - --human-readable Print numbers in a human-readable format, sizes with suffix Ki|Mi|Gi|Ti|Pi - --ignore-case Ignore case in filters (case insensitive) --ignore-case-sync Ignore case when synchronizing --ignore-checksum Skip post copy check of checksums - --ignore-errors Delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files, fail if existing files have been modified - --include stringArray Include files matching pattern - --include-from stringArray Read file include patterns from file (use - to read from 
stdin) --inplace Download directly to destination file instead of atomic download to temp/rename - -i, --interactive Enable interactive mode - --kv-lock-time Duration Maximum time to keep key-value database locked by process (default 1s) - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --log-systemd Activate systemd integration for the logger - --low-level-retries int Number of low level retries to do (default 10) - --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-backlog int Maximum number of objects in sync or check backlog (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off) - --max-depth int If set limits the recursion depth to this (default -1) --max-duration Duration Maximum duration rclone will transfer data for (default 0s) - --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) - --max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000) --max-transfer SizeSuffix Maximum size of data to transfer (default off) - --memprofile string Write memory profile to file -M, --metadata If set, preserve metadata when copying objects - --metadata-exclude stringArray Exclude metadatas matching pattern - --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) - --metadata-filter stringArray Add a metadata filtering rule - --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) - --metadata-include stringArray Include metadatas matching pattern - --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) - --metadata-set stringArray Add metadata key=value when uploading - --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) --modify-window Duration Max time diff to be considered the same (default 1ns) - --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 250Mi) - --multi-thread-streams int Max number of streams to use for multi-thread downloads (default 4) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) - --no-check-certificate Do not verify the server SSL certificate (insecure) --no-check-dest Don't check the destination, copy regardless - --no-console Hide console window (supported on Windows only) - --no-gzip-encoding Don't set Accept-Encoding: gzip --no-traverse Don't traverse destination file system on copy - --no-unicode-normalization Don't normalize unicode characters in filenames --no-update-modtime Don't update destination mod-time if files identical --order-by string Instructions on how to order the 
transfers, e.g. 'size,descending' - --password-command SpaceSepList Command for supplying password for encrypted configuration - -P, --progress Show progress during transfer - --progress-terminal-title Show progress on the terminal title (requires -P/--progress) - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server - --rc-addr stringArray IPaddress:Port or :Port to bind server to (default [localhost:5572]) - --rc-allow-origin string Set the allowed origin for CORS - --rc-baseurl string Prefix for URLs - leave blank for root - --rc-cert string TLS PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-enable-metrics Enable prometheus metrics on /metrics - --rc-files string Path to local files to serve on the HTTP server - --rc-htpasswd string A htpasswd file - if not provided no authentication is done - --rc-job-expire-duration Duration Expire finished async jobs older than this value (default 1m0s) - --rc-job-expire-interval Duration Interval to check for expired async jobs (default 10s) - --rc-key string TLS PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-min-tls-version string Minimum TLS version that is acceptable (default "tls1.0") - --rc-no-auth Don't require auth for certain methods - --rc-pass string Password for authentication - --rc-realm string Realm for authentication - --rc-salt string Password hashing salt (default "dlPL2MqE") - --rc-serve Enable the serving of remote objects - --rc-server-read-timeout Duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s) - --rc-template string User-specified template - --rc-user string User name for authentication - --rc-web-fetch-url string URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest") - --rc-web-gui Launch WebGUI on localhost - --rc-web-gui-force-update Force update to latest version of web gui - --rc-web-gui-no-open-browser Don't open the browser automatically - --rc-web-gui-update Check and update to latest version of web gui --refresh-times Refresh the modtime of remote files - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep Duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable) (default 0s) --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs --size-only Skip based on size only, not mod-time or checksum - --stats Duration Interval between printing stats, e.g. 
500ms, 60s, 5m (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats (0 for no limit) (default 45) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line - --stats-one-line-date Enable --stats-one-line and add current date/time prefix - --stats-one-line-date-format string Enable --stats-one-line-date and use custom formatted date: Enclose date string in double quotes ("), see https://golang.org/pkg/time/#Time.Format - --stats-unit string Show data rate in stats as either 'bits' or 'bytes' per second (default "bytes") --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki) - --suffix string Suffix to add to changed files - --suffix-keep-extension Preserve the extension when using --suffix - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, e.g. KERN,USER,... (default "DAEMON") - --temp-dir string Directory rclone will use for temporary files (default "/tmp") - --timeout Duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this - --tpslimit-burst int Max burst of transactions for --tpslimit (default 1) - --track-renames When synchronizing, track file renames and do a server-side move if possible - --track-renames-strategy string Strategies to use when synchronizing using track-renames hash|modtime|leaf (default "hash") - --transfers int Number of file transfers to run in parallel (default 4) -u, --update Skip files that are newer on the destination - --use-cookies Enable session cookiejar - --use-json-log Use json log format - --use-mmap Use mmap allocator (see docs) - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string (default "rclone/v1.63.0") - -v, --verbose count Print lots more stuff (repeat for more) ``` -## Backend Flags -These flags are available for every command. They control the backends -and may be set in the config file. +## Sync + +Flags just used for `rclone sync`. + +``` + --backup-dir string Make backups into hierarchy based in DIR + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --ignore-errors Delete even if there are I/O errors + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off) + --suffix string Suffix to add to changed files + --suffix-keep-extension Preserve the extension when using --suffix + --track-renames When synchronizing, track file renames and do a server-side move if possible + --track-renames-strategy string Strategies to use when synchronizing using track-renames hash|modtime|leaf (default "hash") +``` + + +## Important + +Important flags useful for most commands. + +``` + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +``` + + +## Check + +Flags used for `rclone check`. + +``` + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) +``` + + +## Networking + +General networking and HTTP stuff. 
+ +``` + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name + --bwlimit BwTimetable Bandwidth limit in KiB/s, or use suffix B|K|M|G|T|P or a full timetable + --bwlimit-file BwTimetable Bandwidth limit per file in KiB/s, or use suffix B|K|M|G|T|P or a full timetable + --ca-cert stringArray CA certificate used to verify servers + --client-cert string Client SSL certificate (PEM) for mutual TLS auth + --client-key string Client SSL private key (PEM) for mutual TLS auth + --contimeout Duration Connect timeout (default 1m0s) + --disable-http-keep-alives Disable HTTP keep-alives and use each connection once. + --disable-http2 Disable HTTP/2 in the global transport + --dscp string Set DSCP value to connections, value or name, e.g. CS1, LE, DF, AF21 + --expect-continue-timeout Duration Timeout when using expect / 100-continue in HTTP (default 1s) + --header stringArray Set HTTP header for all transactions + --header-download stringArray Set HTTP header for download transactions + --header-upload stringArray Set HTTP header for upload transactions + --no-check-certificate Do not verify the server SSL certificate (insecure) + --no-gzip-encoding Don't set Accept-Encoding: gzip + --timeout Duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this + --tpslimit-burst int Max burst of transactions for --tpslimit (default 1) + --use-cookies Enable session cookiejar + --user-agent string Set the user-agent to a specified string (default "rclone/v1.64.0") +``` + + +## Performance + +Flags helpful for increasing performance. + +``` + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi) + --checkers int Number of checkers to run in parallel (default 8) + --transfers int Number of file transfers to run in parallel (default 4) +``` + + +## Config + +General configuration of rclone. 
+ +``` + --ask-password Allow prompt for password for encrypted configuration (default true) + --auto-confirm If enabled, do not request console confirmation + --cache-dir string Directory rclone will use for caching (default "$HOME/.cache/rclone") + --color string When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default "AUTO") + --config string Config file (default "$HOME/.config/rclone/rclone.conf") + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --disable string Disable a comma separated list of features (use --disable help to see a list) + -n, --dry-run Do a trial run with no permanent changes + --error-on-no-transfer Sets exit code 9 if no files are transferred, useful in scripts + --fs-cache-expire-duration Duration Cache remotes for this long (0 to disable caching) (default 5m0s) + --fs-cache-expire-interval Duration Interval to check for expired remotes (default 1m0s) + --human-readable Print numbers in a human-readable format, sizes with suffix Ki|Mi|Gi|Ti|Pi + -i, --interactive Enable interactive mode + --kv-lock-time Duration Maximum time to keep key-value database locked by process (default 1s) + --low-level-retries int Number of low level retries to do (default 10) + --no-console Hide console window (supported on Windows only) + --no-unicode-normalization Don't normalize unicode characters in filenames + --password-command SpaceSepList Command for supplying password for encrypted configuration + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep Duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable) (default 0s) + --temp-dir string Directory rclone will use for temporary files (default "/tmp") + --use-mmap Use mmap allocator (see docs) + --use-server-modtime Use server modified time instead of object metadata +``` + + +## Debugging + +Flags for developers. + +``` + --cpuprofile string Write cpu profile to file + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --memprofile string Write memory profile to file +``` + + +## Filter + +Flags for filtering directory listings. 
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + + +## Listing + +Flags for listing directories. + +``` + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +``` + + +## Logging + +Logging and statistics. + +``` + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --log-systemd Activate systemd integration for the logger + --max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000) + -P, --progress Show progress during transfer + --progress-terminal-title Show progress on the terminal title (requires -P/--progress) + -q, --quiet Print as little stuff as possible + --stats Duration Interval between printing stats, e.g. 
500ms, 60s, 5m (0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats (0 for no limit) (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line + --stats-one-line-date Enable --stats-one-line and add current date/time prefix + --stats-one-line-date-format string Enable --stats-one-line-date and use custom formatted date: Enclose date string in double quotes ("), see https://golang.org/pkg/time/#Time.Format + --stats-unit string Show data rate in stats as either 'bits' or 'bytes' per second (default "bytes") + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, e.g. KERN,USER,... (default "DAEMON") + --use-json-log Use json log format + -v, --verbose count Print lots more stuff (repeat for more) +``` + + +## Metadata + +Flags to control metadata. + +``` + -M, --metadata If set, preserve metadata when copying objects + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --metadata-set stringArray Add metadata key=value when uploading +``` + + +## RC + +Flags to control the Remote Control API. + +``` + --rc Enable the remote control server + --rc-addr stringArray IPaddress:Port or :Port to bind server to (default [localhost:5572]) + --rc-allow-origin string Origin which cross-domain request (CORS) can be executed from + --rc-baseurl string Prefix for URLs - leave blank for root + --rc-cert string TLS PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-enable-metrics Enable prometheus metrics on /metrics + --rc-files string Path to local files to serve on the HTTP server + --rc-htpasswd string A htpasswd file - if not provided no authentication is done + --rc-job-expire-duration Duration Expire finished async jobs older than this value (default 1m0s) + --rc-job-expire-interval Duration Interval to check for expired async jobs (default 10s) + --rc-key string TLS PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-min-tls-version string Minimum TLS version that is acceptable (default "tls1.0") + --rc-no-auth Don't require auth for certain methods + --rc-pass string Password for authentication + --rc-realm string Realm for authentication + --rc-salt string Password hashing salt (default "dlPL2MqE") + --rc-serve Enable the serving of remote objects + --rc-server-read-timeout Duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s) + --rc-template string User-specified template + --rc-user string User name for authentication + --rc-web-fetch-url string URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest") + --rc-web-gui Launch WebGUI on localhost + --rc-web-gui-force-update Force update to latest version of web gui + --rc-web-gui-no-open-browser Don't open the browser automatically + 
--rc-web-gui-update Check and update to latest version of web gui +``` + + +## Backend + +Backend only flags. These can be set in the config file also. ``` --acd-auth-url string Auth server URL @@ -15229,8 +17146,6 @@ and may be set in the config file. --azureblob-env-auth Read credentials from runtime (environment variables, CLI or MSI) --azureblob-key string Storage Account Shared Key --azureblob-list-chunk int Size of blob list (default 5000) - --azureblob-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s) - --azureblob-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool --azureblob-msi-client-id string Object ID of the user-assigned MSI to use, if any --azureblob-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any --azureblob-msi-object-id string Object ID of the user-assigned MSI to use, if any @@ -15256,9 +17171,8 @@ and may be set in the config file. --b2-endpoint string Endpoint for the service --b2-hard-delete Permanently delete files on remote removal, otherwise hide files --b2-key string Application Key - --b2-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s) - --b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging + --b2-upload-concurrency int Concurrency for multipart uploads (default 16) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --b2-version-at Time Show file versions as they were at the specified time (default off) --b2-versions Include old versions in directory listings @@ -15270,6 +17184,7 @@ and may be set in the config file. --box-client-secret string OAuth Client Secret --box-commit-retries int Max number of times to try committing a multipart file (default 100) --box-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot) + --box-impersonate string Impersonate this user ID when using a service account --box-list-chunk int Size of listing chunk 1-1000 (default 1000) --box-owned-by string Only show items owned by the login (email address) passed in --box-root-folder-id string Fill in for rclone to use a non root folder as its starting point @@ -15329,6 +17244,7 @@ and may be set in the config file. --drive-encoding MultiEncoder The encoding for the backend (default InvalidUtf8) --drive-env-auth Get IAM credentials from runtime (environment variables or instance meta data if no env vars) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs (default "docx,xlsx,pptx,svg") + --drive-fast-list-bug-fix Work around a bug in Google Drive listing (default true) --drive-formats string Deprecated: See export_formats --drive-impersonate string Impersonate this user when using a service account --drive-import-formats string Comma separated list of preferred formats for uploading Google docs @@ -15404,6 +17320,7 @@ and may be set in the config file. 
--ftp-pass string FTP password (obscured) --ftp-port int FTP port number (default 21) --ftp-shut-timeout Duration Maximum time to wait for data connection closing status (default 1m0s) + --ftp-socks-proxy string Socks 5 proxy host --ftp-tls Use Implicit FTPS (FTP over TLS) --ftp-tls-cache-size int Size of TLS session cache for all control and data connections (default 32) --ftp-user string FTP username (default "$USER") @@ -15472,10 +17389,15 @@ and may be set in the config file. --internetarchive-front-endpoint string Host of InternetArchive Frontend (default "https://archive.org") --internetarchive-secret-access-key string IAS3 Secret Key (password) --internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s) + --jottacloud-auth-url string Auth server URL + --jottacloud-client-id string OAuth Client Id + --jottacloud-client-secret string OAuth Client Secret --jottacloud-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi) --jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them + --jottacloud-token string OAuth Access Token as a JSON blob + --jottacloud-token-url string Token server url --jottacloud-trashed-only Only show files that are in the trash --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's (default 10Mi) --koofr-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) @@ -15496,13 +17418,18 @@ and may be set in the config file. --local-nounc Disable UNC (long path names) conversion on Windows --local-unicode-normalization Apply unicode NFC normalization to paths and filenames --local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated) + --mailru-auth-url string Auth server URL --mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true) + --mailru-client-id string OAuth Client Id + --mailru-client-secret string OAuth Client Secret --mailru-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot) --mailru-pass string Password (obscured) --mailru-speedup-enable Skip full upload if there is another file with same data hash (default true) --mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash) (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf") --mailru-speedup-max-disk SizeSuffix This option allows you to disable speedup (put by hash) for large files (default 3Gi) --mailru-speedup-max-memory SizeSuffix Files larger than the size given below will always be hashed on disk (default 32Mi) + --mailru-token string OAuth Access Token as a JSON blob + --mailru-token-url string Token server url --mailru-user string User name (usually email) --mega-debug Output more debug from Mega --mega-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot) @@ -15536,6 +17463,7 @@ and may be set in the config file. 
--onedrive-server-side-across-configs Deprecated: use --server-side-across-configs instead --onedrive-token string OAuth Access Token as a JSON blob --onedrive-token-url string Token server url + --oos-attempt-resume-upload If true attempt to resume previously started multipart upload for the object --oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi) --oos-compartment string Object storage compartment OCID --oos-config-file string Path to OCI config file (default "~/.oci/config") @@ -15545,7 +17473,8 @@ and may be set in the config file. --oos-disable-checksum Don't store MD5 checksum with object metadata --oos-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot) --oos-endpoint string Endpoint for Object storage API - --oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery + --oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery + --oos-max-upload-parts int Maximum number of parts in a multipart upload (default 10000) --oos-namespace string Object storage namespace --oos-no-check-bucket If set, don't attempt to check the bucket exists or create it --oos-provider string Choose your Auth Provider (default "env_auth") @@ -15584,8 +17513,27 @@ and may be set in the config file. --pikpak-trashed-only Only show files that are in the trash --pikpak-use-trash Send files to the trash instead of deleting permanently (default true) --pikpak-user string Pikpak username + --premiumizeme-auth-url string Auth server URL + --premiumizeme-client-id string OAuth Client Id + --premiumizeme-client-secret string OAuth Client Secret --premiumizeme-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot) + --premiumizeme-token string OAuth Access Token as a JSON blob + --premiumizeme-token-url string Token server url + --protondrive-2fa string The 2FA code + --protondrive-app-version string The app version string (default "macos-drive@1.0.0-alpha.1+rclone") + --protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true) + --protondrive-encoding MultiEncoder The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot) + --protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured) + --protondrive-original-file-size Return the file size before encryption (default true) + --protondrive-password string The password of your proton account (obscured) + --protondrive-replace-existing-draft Create a new revision when filename conflict is detected + --protondrive-username string The username of your proton account + --putio-auth-url string Auth server URL + --putio-client-id string OAuth Client Id + --putio-client-secret string OAuth Client Secret --putio-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) + --putio-token string OAuth Access Token as a JSON blob + --putio-token-url string Token server url --qingstor-access-key-id string QingStor Access Key ID --qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi) --qingstor-connection-retries int Number of connection retries (default 3) @@ -15596,6 +17544,13 @@ and may be set in the config file. 
--qingstor-upload-concurrency int Concurrency for multipart uploads (default 1) --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --qingstor-zone string Zone to connect to + --quatrix-api-key string API key for accessing Quatrix account + --quatrix-effective-upload-time string Wanted upload time for one chunk (default "4s") + --quatrix-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) + --quatrix-hard-delete Delete files permanently rather than putting them into the trash + --quatrix-host string Host name of Quatrix account + --quatrix-maximal-summary-chunk-size SizeSuffix The maximal summary for all chunks. It should not be less than 'transfers'*'minimal_chunk_size' (default 95.367Mi) + --quatrix-minimal-chunk-size SizeSuffix The minimal size for one chunk (default 9.537Mi) --s3-access-key-id string AWS Access Key ID --s3-acl string Canned ACL used when creating buckets and storing or copying objects --s3-bucket-acl string Canned ACL used when creating buckets @@ -15616,8 +17571,6 @@ and may be set in the config file. --s3-list-version int Version of ListObjects to use: 1,2 or 0 for auto --s3-location-constraint string Location constraint - must be set to match the Region --s3-max-upload-parts int Maximum number of parts in a multipart upload (default 10000) - --s3-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s) - --s3-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool --s3-might-gzip Tristate Set this if the backend might gzip objects (default unset) --s3-no-check-bucket If set, don't attempt to check the bucket exists or create it --s3-no-head If set, don't HEAD uploaded objects to check integrity @@ -15683,14 +17636,21 @@ and may be set in the config file. --sftp-sha1sum-command string The command used to read sha1 hashes --sftp-shell-type string The type of SSH shell on remote server, if any --sftp-skip-links Set to skip any symlinks and any other non regular files + --sftp-socks-proxy string Socks 5 proxy host + --sftp-ssh SpaceSepList Path and arguments to external ssh binary --sftp-subsystem string Specifies the SSH2 subsystem on the remote host (default "sftp") --sftp-use-fstat If set use fstat instead of stat --sftp-use-insecure-cipher Enable the use of insecure ciphers and key exchange methods --sftp-user string SSH username (default "$USER") + --sharefile-auth-url string Auth server URL --sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi) + --sharefile-client-id string OAuth Client Id + --sharefile-client-secret string OAuth Client Secret --sharefile-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot) --sharefile-endpoint string Endpoint for API calls --sharefile-root-folder-id string ID of the root folder + --sharefile-token string OAuth Access Token as a JSON blob + --sharefile-token-url string Token server url --sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi) --sia-api-password string Sia Daemon API Password (obscured) --sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980") @@ -16424,10 +18384,16 @@ Optional Flags: If exceeded, the bisync run will abort. (default: 50%) --force Bypass `--max-delete` safety check and run the sync. 
Consider using with `--verbose` + --create-empty-src-dirs Sync creation and deletion of empty directories. + (Not compatible with --remove-empty-dirs) --remove-empty-dirs Remove empty directories at the final cleanup step. -1, --resync Performs the resync run. Warning: Path1 files may overwrite Path2 versions. Consider using `--verbose` or `--dry-run` first. + --ignore-listing-checksum Do not use checksums for listings + (add --ignore-checksum to additionally skip post-copy checksum checks) + --resilient Allow future runs to retry after certain less-serious errors, + instead of requiring --resync. Use at your own risk! --localtime Use local time in listings (default: UTC) --no-cleanup Retain working files (useful for troubleshooting and testing). --workdir PATH Use custom working directory (useful for testing). @@ -16454,7 +18420,7 @@ Cloud references are distinguished by having a `:` in the argument (see [Windows support](#windows) below). Path1 and Path2 are treated equally, in that neither has priority for -file changes, and access efficiency does not change whether a remote +file changes (except during [`--resync`](#resync)), and access efficiency does not change whether a remote is on Path1 or Path2. The listings in bisync working directory (default: `~/.cache/rclone/bisync`) @@ -16463,8 +18429,8 @@ to individual directories within the tree may be set up, e.g.: `path_to_local_tree..dropbox_subdir.lst`. Any empty directories after the sync on both the Path1 and Path2 -filesystems are not deleted by default. If the `--remove-empty-dirs` -flag is specified, then both paths will have any empty directories purged +filesystems are not deleted by default, unless `--create-empty-src-dirs` is specified. +If the `--remove-empty-dirs` flag is specified, then both paths will have ALL empty directories purged as the last step in the process. ## Command-line flags @@ -16473,15 +18439,31 @@ as the last step in the process. This will effectively make both Path1 and Path2 filesystems contain a matching superset of all files. Path2 files that do not exist in Path1 will -be copied to Path1, and the process will then sync the Path1 tree to Path2. +be copied to Path1, and the process will then copy the Path1 tree to Path2. -The base directories on the both Path1 and Path2 filesystems must exist +The `--resync` sequence is roughly equivalent to: +``` +rclone copy Path2 Path1 --ignore-existing +rclone copy Path1 Path2 +``` +Or, if using `--create-empty-src-dirs`: +``` +rclone copy Path2 Path1 --ignore-existing +rclone copy Path1 Path2 --create-empty-src-dirs +rclone copy Path2 Path1 --create-empty-src-dirs +``` + +The base directories on both Path1 and Path2 filesystems must exist or bisync will fail. This is required for safety - that bisync can verify that both paths are valid. -When using `--resync`, a newer version of a file either on Path1 or Path2 -filesystem, will overwrite the file on the other path (only the last version -will be kept). Carefully evaluate deltas using [--dry-run](https://rclone.org/flags/#non-backend-flags). +When using `--resync`, a newer version of a file on the Path2 filesystem +will be overwritten by the Path1 filesystem version. +(Note that this is [NOT entirely symmetrical](https://github.com/rclone/rclone/issues/5681#issuecomment-938761815).) +Carefully evaluate deltas using [--dry-run](https://rclone.org/flags/#non-backend-flags). + +[//]: # (I reverted a recent change in the above paragraph, as it was incorrect. 
+https://github.com/rclone/rclone/commit/dd72aff98a46c6e20848ac7ae5f7b19d45802493 )
 
 For a resync run, one of the paths may be empty (no files in the path tree).
 The resync run should result in files on both paths, else a normal non-resync
@@ -16498,13 +18480,23 @@ Access check files are an additional safety measure against data loss.
 bisync will ensure it can find matching `RCLONE_TEST` files in the same places
 in the Path1 and Path2 filesystems.
 `RCLONE_TEST` files are not generated automatically.
-For `--check-access`to succeed, you must first either:
-**A)** Place one or more `RCLONE_TEST` files in the Path1 or Path2 filesystem
-and then do either a run without `--check-access` or a [--resync](#resync) to
-set matching files on both filesystems, or
+For `--check-access` to succeed, you must first either:
+**A)** Place one or more `RCLONE_TEST` files in both systems, or
 **B)** Set `--check-filename` to a filename already in use in various locations
-throughout your sync'd fileset.
-Time stamps and file contents are not important, just the names and locations.
+throughout your sync'd fileset. Recommended methods for **A)** include:
+* `rclone touch Path1/RCLONE_TEST` (create a new file)
+* `rclone copyto Path1/RCLONE_TEST Path2/RCLONE_TEST` (copy an existing file)
+* `rclone copy Path1/RCLONE_TEST Path2/RCLONE_TEST --include "RCLONE_TEST"` (copy multiple files at once, recursively)
+* create the files manually (outside of rclone)
+* run `bisync` once *without* `--check-access` to set matching files on both
+filesystems (this will also work, but is not preferred, due to the potential
+for user error, as you are temporarily disabling the safety feature).
+
+Note that `--check-access` is still enforced on `--resync`, so `bisync --resync --check-access`
+will not work as a method of initially setting the files (this is to ensure that bisync can't
+[inadvertently circumvent its own safety switch](https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=3.%20%2D%2Dcheck%2Daccess%20doesn%27t%20always%20fail%20when%20it%20should).)
+
+Time stamps and file contents for `RCLONE_TEST` files are not important, just the names and locations.
 If you have symbolic links in your sync tree it is recommended to place
 `RCLONE_TEST` files in the linked-to directory tree to protect against
 bisync assuming a bunch of deleted files if the linked-to tree should not be
@@ -16531,7 +18523,7 @@ files and a bunch of new files.
 This safety check is intended to block bisync from deleting all of the
 files on both filesystems due to a temporary network access issue, or if the
 user had inadvertently deleted the files on one side or the other.
-To force the sync either set a different delete percentage limit,
+To force the sync, either set a different delete percentage limit,
 e.g. `--max-delete 75` (allows up to 75% deletion), or use `--force`
 to bypass the check.
 
@@ -16548,17 +18540,17 @@ An [example filters file](#example-filters-file) contains filters for
 non-allowed files for synching with Dropbox.
 
 If you make changes to your filters file then bisync requires a run
-with `--resync`. This is a safety feature, which avoids existing files
+with `--resync`. This is a safety feature, which prevents existing files
 on the Path1 and/or Path2 side from seeming to disappear from view
 (since they are excluded in the new listings), which would fool bisync
 into seeing them as deleted (as compared to the prior run listings),
 and then bisync would proceed to delete them for real. 
-To block this from happening bisync calculates an MD5 hash of the filters file +To block this from happening, bisync calculates an MD5 hash of the filters file and stores the hash in a `.md5` file in the same place as your filters file. -On the next runs with `--filters-file` set, bisync re-calculates the MD5 hash -of the current filters file and compares it to the hash stored in `.md5` file. -If they don't match the run aborts with a critical error and thus forces you +On the next run with `--filters-file` set, bisync re-calculates the MD5 hash +of the current filters file and compares it to the hash stored in the `.md5` file. +If they don't match, the run aborts with a critical error and thus forces you to do a `--resync`, likely avoiding a disaster. #### --check-sync @@ -16578,6 +18570,60 @@ sync run times for very large numbers of files. The check may be run manually with `--check-sync=only`. It runs only the integrity check and terminates without actually synching. +See also: [Concurrent modifications](#concurrent-modifications) + + +#### --ignore-listing-checksum + +By default, bisync will retrieve (or generate) checksums (for backends that support them) +when creating the listings for both paths, and store the checksums in the listing files. +`--ignore-listing-checksum` will disable this behavior, which may speed things up considerably, +especially on backends (such as [local](https://rclone.org/local/)) where hashes must be computed on the fly instead of retrieved. +Please note the following: + +* While checksums are (by default) generated and stored in the listing files, +they are NOT currently used for determining diffs (deltas). +It is anticipated that full checksum support will be added in a future version. +* `--ignore-listing-checksum` is NOT the same as [`--ignore-checksum`](https://rclone.org/docs/#ignore-checksum), +and you may wish to use one or the other, or both. In a nutshell: +`--ignore-listing-checksum` controls whether checksums are considered when scanning for diffs, +while `--ignore-checksum` controls whether checksums are considered during the copy/sync operations that follow, +if there ARE diffs. +* Unless `--ignore-listing-checksum` is passed, bisync currently computes hashes for one path +*even when there's no common hash with the other path* +(for example, a [crypt](https://rclone.org/crypt/#modified-time-and-hashes) remote.) +* If both paths support checksums and have a common hash, +AND `--ignore-listing-checksum` was not specified when creating the listings, +`--check-sync=only` can be used to compare Path1 vs. Path2 checksums (as of the time the previous listings were created.) +However, `--check-sync=only` will NOT include checksums if the previous listings +were generated on a run using `--ignore-listing-checksum`. For a more robust integrity check of the current state, +consider using [`check`](commands/rclone_check/) +(or [`cryptcheck`](https://rclone.org/commands/rclone_cryptcheck/), if at least one path is a `crypt` remote.) + +#### --resilient + +***Caution: this is an experimental feature. Use at your own risk!*** + +By default, most errors or interruptions will cause bisync to abort and +require [`--resync`](#resync) to recover. This is a safety feature, +to prevent bisync from running again until a user checks things out. +However, in some cases, bisync can go too far and enforce a lockout when one isn't actually necessary, +like for certain less-serious errors that might resolve themselves on the next run. 
+When `--resilient` is specified, bisync tries its best to recover and self-correct, +and only requires `--resync` as a last resort when a human's involvement is absolutely necessary. +The intended use case is for running bisync as a background process (such as via scheduled [cron](#cron)). + +When using `--resilient` mode, bisync will still report the error and abort, +however it will not lock out future runs -- allowing the possibility of retrying at the next normally scheduled time, +without requiring a `--resync` first. Examples of such retryable errors include +access test failures, missing listing files, and filter change detections. +These safety features will still prevent the *current* run from proceeding -- +the difference is that if conditions have improved by the time of the *next* run, +that next run will be allowed to proceed. +Certain more serious errors will still enforce a `--resync` lockout, even in `--resilient` mode, to prevent data loss. + +Behavior of `--resilient` may change in a future version. + ## Operation ### Runtime flow details @@ -16621,15 +18667,26 @@ Path1 deleted | File no longer exists on Path1 | File is deleted Type | Description | Result | Implementation --------------------------------|---------------------------------------|------------------------------------|----------------------- -Path1 new AND Path2 new | File is new on Path1 AND new on Path2 | Files renamed to _Path1 and _Path2 | `rclone copy` _Path2 file to Path1, `rclone copy` _Path1 file to Path2 -Path2 newer AND Path1 changed | File is newer on Path2 AND also changed (newer/older/size) on Path1 | Files renamed to _Path1 and _Path2 | `rclone copy` _Path2 file to Path1, `rclone copy` _Path1 file to Path2 +Path1 new/changed AND Path2 new/changed AND Path1 == Path2 | File is new/changed on Path1 AND new/changed on Path2 AND Path1 version is currently identical to Path2 | No change | None +Path1 new AND Path2 new | File is new on Path1 AND new on Path2 (and Path1 version is NOT identical to Path2) | Files renamed to _Path1 and _Path2 | `rclone copy` _Path2 file to Path1, `rclone copy` _Path1 file to Path2 +Path2 newer AND Path1 changed | File is newer on Path2 AND also changed (newer/older/size) on Path1 (and Path1 version is NOT identical to Path2) | Files renamed to _Path1 and _Path2 | `rclone copy` _Path2 file to Path1, `rclone copy` _Path1 file to Path2 Path2 newer AND Path1 deleted | File is newer on Path2 AND also deleted on Path1 | Path2 version survives | `rclone copy` Path2 to Path1 Path2 deleted AND Path1 changed | File is deleted on Path2 AND changed (newer/older/size) on Path1 | Path1 version survives |`rclone copy` Path1 to Path2 Path1 deleted AND Path2 changed | File is deleted on Path1 AND changed (newer/older/size) on Path2 | Path2 version survives | `rclone copy` Path2 to Path1 +As of `rclone v1.64`, bisync is now better at detecting *false positive* sync conflicts, +which would previously have resulted in unnecessary renames and duplicates. +Now, when bisync comes to a file that it wants to rename (because it is new/changed on both sides), +it first checks whether the Path1 and Path2 versions are currently *identical* +(using the same underlying function as [`check`](commands/rclone_check/).) +If bisync concludes that the files are identical, it will skip them and move on. +Otherwise, it will create renamed `..Path1` and `..Path2` duplicates, as before. 
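+
+As a rough illustration, the same kind of comparison can be reproduced manually
+with [`check`](commands/rclone_check/) if you want to inspect a suspected
+conflict yourself before a run (the filename below is illustrative):
+
+```
+# compare one file on both sides, as bisync's conflict check does
+rclone check Path1 Path2 --include "conflicted-file.txt"
+```
+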
+This behavior also [improves the experience of renaming directories](https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=Renamed%20directories),
+as a `--resync` is no longer required, so long as the same change has been made on both sides.
+
 ### All files changed check {#all-files-changed}
 
-if _all_ prior existing files on either of the filesystems have changed
+If _all_ prior existing files on either of the filesystems have changed
 (e.g. timestamps have changed due to changing the system's timezone)
 then bisync will abort without making any changes.
 Any new files are not considered for this check. You could use `--force`
@@ -16666,7 +18723,7 @@ It is recommended to use `--resync --dry-run --verbose` initially and
 _carefully_ review what changes will be made before running the `--resync`
 without `--dry-run`.
 
-Most of these events come up due to a error status from an internal call.
+Most of these events come up due to an error status from an internal call.
 On such a critical error the `{...}.path1.lst` and `{...}.path2.lst`
 listing files are renamed to extension `.lst-err`, which blocks any future
 bisync runs (since the normal `.lst` files are not found).
@@ -16676,6 +18733,8 @@ typically at `${HOME}/.cache/rclone/bisync/` on Linux.
 Some errors are considered temporary and re-running the bisync is not blocked.
 The _critical return_ blocks further bisync runs.
 
+See also: [`--resilient`](#resilient)
+
 ### Lock file
 
 When bisync is running, a lock file is created in the bisync working directory,
@@ -16716,7 +18775,7 @@ It has not been fully tested with other services yet.
 If it works, or sorta works, please let us know and we'll update the list.
 Run the test suite to check for proper operation as described below.
 
-First release of `rclone bisync` requires that underlying backend supported
+The first release of `rclone bisync` requires that the underlying backend supports
 the modification time feature and will refuse to run otherwise.
 This limitation will be lifted in a future `rclone bisync` release.
 
@@ -16731,38 +18790,97 @@ This will be solved in a future release, there is no workaround at the moment.
 
 Files that **change during** a bisync run may result in data loss.
 This has been seen in a highly dynamic environment, where the filesystem
 is getting hammered by running processes during the sync.
-The solution is to sync at quiet times or [filter out](#filtering)
+The currently recommended solution is to sync at quiet times or [filter out](#filtering)
 unnecessary directories and files.
+
+As an [alternative approach](https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=scans%2C%20to%20avoid-,errors%20if%20files%20changed%20during%20sync,-Given%20the%20number),
+consider using `--check-sync=false` (and possibly `--resilient`) to make bisync more forgiving
+of filesystems that change during the sync.
+Be advised that this may cause bisync to miss events that occur during a bisync run,
+so it is a good idea to supplement this with a periodic independent integrity check,
+and corrective sync if diffs are found. For example, a possible sequence could look like this:
+
+1. Normally scheduled bisync run:
+
+```
+rclone bisync Path1 Path2 -MPc --check-access --max-delete 10 --filters-file /path/to/filters.txt -v --check-sync=false --no-cleanup --ignore-listing-checksum --disable ListR --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient
+```
+
+2. 
Periodic independent integrity check (perhaps scheduled nightly or weekly):
+
+```
+rclone check -MvPc Path1 Path2 --filter-from /path/to/filters.txt
+```
+
+3. If diffs are found, you have some choices to correct them.
+If one side is more up-to-date and you want to make the other side match it, you could run:
+
+```
+rclone sync Path1 Path2 --filter-from /path/to/filters.txt --create-empty-src-dirs -MPc -v
+```
+(or switch Path1 and Path2 to make Path2 the source-of-truth)
+
+Or, if neither side is totally up-to-date, you could run a `--resync` to bring them back into agreement
+(but remember that this could cause deleted files to re-appear.)
+
+Note also that `rclone check` does not currently include empty directories,
+so if you want to know if any empty directories are out of sync,
+consider alternatively running the above `rclone sync` command with `--dry-run` added.
+
### Empty directories

-New empty directories on one path are _not_ propagated to the other side.
-This is because bisync (and rclone) natively works on files not directories.
-The following sequence is a workaround but will not propagate the delete
-of an empty directory to the other side:
-
-```
-rclone bisync PATH1 PATH2
-rclone copy PATH1 PATH2 --filter "+ */" --filter "- **" --create-empty-src-dirs
-rclone copy PATH2 PATH2 --filter "+ */" --filter "- **" --create-empty-src-dirs
-```
+By default, new/deleted empty directories on one path are _not_ propagated to the other side.
+This is because bisync (and rclone) natively works on files, not directories.
+However, this can be changed with the `--create-empty-src-dirs` flag, which works in
+much the same way as in [`sync`](https://rclone.org/commands/rclone_sync/) and [`copy`](https://rclone.org/commands/rclone_copy/).
+When used, empty directories created or deleted on one side will also be created or deleted on the other side.
+The following should be noted:
+* `--create-empty-src-dirs` is not compatible with `--remove-empty-dirs`. Use only one or the other (or neither).
+* It is not recommended to switch back and forth between `--create-empty-src-dirs`
+and the default (no `--create-empty-src-dirs`) without running `--resync`.
+This is because it may appear as though all directories (not just the empty ones) were created/deleted,
+when actually you've just toggled between making them visible/invisible to bisync.
+It looks scarier than it is, but it's still probably best to stick to one or the other,
+and use `--resync` when you need to switch.

### Renamed directories

-Renaming a folder on the Path1 side results is deleting all files on
+Renaming a folder on the Path1 side results in deleting all files on
the Path2 side and then copying all files again from Path1 to Path2.
Bisync sees this as all files in the old directory name as deleted and all
-files in the new directory name as new. Similarly, renaming a directory on
-both sides to the same name will result in creating `..path1` and `..path2`
-files on both sides.
-Currently the most effective and efficient method of renaming a directory
-is to rename it on both sides, then do a `--resync`.
+files in the new directory name as new.
+Currently, the most effective and efficient method of renaming a directory
+is to rename it to the same name on both sides. (As of `rclone v1.64`,
+a `--resync` is no longer required after doing so, as bisync will automatically
+detect that Path1 and Path2 are in agreement.)
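+
+As an illustrative sketch (the local path and remote name here are hypothetical),
+renaming a directory identically on both sides might look like this:
+
+```
+# Rename on the Path1 (local) side
+mv /path1/olddir /path1/newdir
+# Make the same rename on the Path2 side
+rclone moveto remote:path2/olddir remote:path2/newdir
+# Then run bisync as usual; on v1.64+ no --resync is needed
+rclone bisync /path1 remote:path2
+```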
+ +### `--fast-list` used by default + +Unlike most other rclone commands, bisync uses [`--fast-list`](https://rclone.org/docs/#fast-list) by default, +for backends that support it. In many cases this is desirable, however, +there are some scenarios in which bisync could be faster *without* `--fast-list`, +and there is also a [known issue concerning Google Drive users with many empty directories](https://github.com/rclone/rclone/commit/cbf3d4356135814921382dd3285d859d15d0aa77). +For now, the recommended way to avoid using `--fast-list` is to add `--disable ListR` +to all bisync commands. The default behavior may change in a future version. + +### Overridden Configs + +When rclone detects an overridden config, it adds a suffix like `{ABCDE}` on the fly +to the internal name of the remote. Bisync follows suit by including this suffix in its listing filenames. +However, this suffix does not necessarily persist from run to run, especially if different flags are provided. +So if next time the suffix assigned is `{FGHIJ}`, bisync will get confused, +because it's looking for a listing file with `{FGHIJ}`, when the file it wants has `{ABCDE}`. +As a result, it throws +`Bisync critical error: cannot find prior Path1 or Path2 listings, likely due to critical error on prior run` +and refuses to run again until the user runs a `--resync` (unless using `--resilient`). +The best workaround at the moment is to set any backend-specific flags in the [config file](https://rclone.org/commands/rclone_config/) +instead of specifying them with command flags. (You can still override them as needed for other rclone commands.) ### Case sensitivity Synching with **case-insensitive** filesystems, such as Windows or `Box`, can result in file name conflicts. This will be fixed in a future release. -The near term workaround is to make sure that files on both sides +The near-term workaround is to make sure that files on both sides don't have spelling case differences (`Smile.jpg` vs. `smile.jpg`). ## Windows support {#windows} @@ -16825,7 +18943,7 @@ below. - Excluding such dirs first will make rclone operations (much) faster. - Specific files may also be excluded, as with the Dropbox exclusions example below. -2. Decide if its easier (or cleaner) to: +2. Decide if it's easier (or cleaner) to: - Include select directories and therefore _exclude everything else_ -- or -- - Exclude select directories and therefore _include everything else_ 3. Include select directories: @@ -16844,7 +18962,7 @@ below. For example: `-/Desktop/tempfiles/`, or `- /testdir/`. Again, a `**` on the end is not necessary. - Do _not_ add a `- **` in the file. Without this line, everything - will be included that has not be explicitly excluded. + will be included that has not been explicitly excluded. - Disregard step 3. A few rules for the syntax of a filter file expanding on @@ -16988,7 +19106,7 @@ The second has no deltas between local and remote. The `--dry-run` messages may indicate that it would try to delete some files. For example, if a file is new on Path2 and does not exist on Path1 then it would normally be copied to Path1, but with `--dry-run` enabled those -copies don't happen, which leads to the attempted delete on the Path2, +copies don't happen, which leads to the attempted delete on Path2, blocked again by --dry-run: `... Not deleting as --dry-run`. This whole confusing situation is an artifact of the `--dry-run` flag. @@ -16997,14 +19115,14 @@ copied to Path1 then the threatened deletes on Path2 may be disregarded. 
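
+As a minimal illustrative sketch (the paths here are hypothetical), you can
+preview a run and screen out these expected `--dry-run` artifacts at the
+same time:
+
+```
+rclone bisync /path1 remote:path2 --dry-run -v 2>&1 | grep -v "Not deleting as --dry-run"
+```
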
### Retries -Rclone has built in retries. If you run with `--verbose` you'll see +Rclone has built-in retries. If you run with `--verbose` you'll see error and retry messages such as shown below. This is usually not a bug. -If at the end of the run you see `Bisync successful` and not +If at the end of the run, you see `Bisync successful` and not `Bisync critical error` or `Bisync aborted` then the run was successful, and you can ignore the error messages. The following run shows an intermittent fail. Lines _5_ and _6- are -low level messages. Line _6_ is a bubbled-up _warning_ message, conveying +low-level messages. Line _6_ is a bubbled-up _warning_ message, conveying the error. Rclone normally retries failing commands, so there may be numerous such messages in the log. @@ -17084,7 +19202,7 @@ and an OwnCloud server, with output logged to a runlog file: */5 * * * * /path/to/rclone bisync /local/files MyCloud: --check-access --filters-file /path/to/bysync-filters.txt --log-file /path/to//bisync.log ``` -See [crontab syntax](https://www.man7.org/linux/man-pages/man1/crontab.1p.html#INPUT_FILES)). +See [crontab syntax](https://www.man7.org/linux/man-pages/man1/crontab.1p.html#INPUT_FILES) for the details of crontab time interval expressions. If you run `rclone bisync` as a cron job, redirect stdout/stderr to a file. @@ -17276,7 +19394,7 @@ test command flags can be equally prefixed by a single `-` or double dash. synched tree even if there are check file mismatches in the test tree. - Some Dropbox tests can fail, notably printing the following message: `src and dst identical but can't set mod time without deleting and re-uploading` - This is expected and happens due a way Dropbox handles modification times. + This is expected and happens due to the way Dropbox handles modification times. You should use the `-refresh-times` test flag to make up for this. - If Dropbox tests hit request limit for you and print error message `too_many_requests/...: Too many requests or write operations.` @@ -17429,13 +19547,186 @@ with [@cjnaz](https://github.com/cjnaz)'s full support and encouragement. Bisync adopts the differential synchronization technique, which is based on keeping history of changes performed by both synchronizing sides. -See the _Dual Shadow Method_ section in the +See the _Dual Shadow Method_ section in [Neil Fraser's article](https://neil.fraser.name/writing/sync/). Also note a number of academic publications by [Benjamin Pierce](http://www.cis.upenn.edu/%7Ebcpierce/papers/index.shtml#File%20Synchronization) about _Unison_ and synchronization in general. 
+## Changelog + +### `v1.64` +* Fixed an [issue](https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=1.%20Dry%20runs%20are%20not%20completely%20dry) +causing dry runs to inadvertently commit filter changes +* Fixed an [issue](https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=2.%20%2D%2Dresync%20deletes%20data%2C%20contrary%20to%20docs) +causing `--resync` to erroneously delete empty folders and duplicate files unique to Path2 +* `--check-access` is now enforced during `--resync`, preventing data loss in [certain user error scenarios](https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=%2D%2Dcheck%2Daccess%20doesn%27t%20always%20fail%20when%20it%20should) +* Fixed an [issue](https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=5.%20Bisync%20reads%20files%20in%20excluded%20directories%20during%20delete%20operations) +causing bisync to consider more files than necessary due to overbroad filters during delete operations +* [Improved detection of false positive change conflicts](https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=1.%20Identical%20files%20should%20be%20left%20alone%2C%20even%20if%20new/newer/changed%20on%20both%20sides) +(identical files are now left alone instead of renamed) +* Added [support for `--create-empty-src-dirs`](https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=3.%20Bisync%20should%20create/delete%20empty%20directories%20as%20sync%20does%2C%20when%20%2D%2Dcreate%2Dempty%2Dsrc%2Ddirs%20is%20passed) +* Added experimental `--resilient` mode to allow [recovery from self-correctable errors](https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=2.%20Bisync%20should%20be%20more%20resilient%20to%20self%2Dcorrectable%20errors) +* Added [new `--ignore-listing-checksum` flag](https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=6.%20%2D%2Dignore%2Dchecksum%20should%20be%20split%20into%20two%20flags%20for%20separate%20purposes) +to distinguish from `--ignore-checksum` +* [Performance improvements](https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=6.%20Deletes%20take%20several%20times%20longer%20than%20copies) for large remotes +* Documentation and testing improvements + +# Release signing + +The hashes of the binary artefacts of the rclone release are signed +with a public PGP/GPG key. This can be verified manually as described +below. + +The same mechanism is also used by [rclone selfupdate](https://rclone.org/commands/rclone_selfupdate/) +to verify that the release has not been tampered with before the new +update is installed. This checks the SHA256 hash and the signature +with a public key compiled into the rclone binary. + +## Release signing key + +You may obtain the release signing key from: + +- From [KEYS](/KEYS) on this website - this file contains all past signing keys also. +- The git repository hosted on GitHub - https://github.com/rclone/rclone/blob/master/docs/content/KEYS +- `gpg --keyserver hkps://keys.openpgp.org --search nick@craig-wood.com` +- `gpg --keyserver hkps://keyserver.ubuntu.com --search nick@craig-wood.com` +- https://www.craig-wood.com/nick/pub/pgp-key.txt + +After importing the key, verify that the fingerprint of one of the +keys matches: `FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA` as this key is used for signing. + +We recommend that you cross-check the fingerprint shown above through +the domains listed below. 
By cross-checking the integrity of the
+fingerprint across multiple domains you can be confident that you
+obtained the correct key.
+
+- The [source for this page on GitHub](https://github.com/rclone/rclone/blob/master/docs/content/release_signing.md).
+- Through DNS `dig key.rclone.org txt`
+
+If you find anything that doesn't match, please contact the
+developers at once.
+
+## How to verify the release
+
+In the release directory you will see the release files and some files called `MD5SUMS`, `SHA1SUMS` and `SHA256SUMS`.
+
+```
+$ rclone lsf --http-url https://downloads.rclone.org/v1.63.1 :http:
+MD5SUMS
+SHA1SUMS
+SHA256SUMS
+rclone-v1.63.1-freebsd-386.zip
+rclone-v1.63.1-freebsd-amd64.zip
+...
+rclone-v1.63.1-windows-arm64.zip
+rclone-v1.63.1.tar.gz
+version.txt
+```
+
+The `MD5SUMS`, `SHA1SUMS` and `SHA256SUMS` contain hashes of the
+binary files in the release directory along with a signature.
+
+For example:
+
+```
+$ rclone cat --http-url https://downloads.rclone.org/v1.63.1 :http:SHA256SUMS
+-----BEGIN PGP SIGNED MESSAGE-----
+Hash: SHA1
+
+f6d1b2d7477475ce681bdce8cb56f7870f174cb6b2a9ac5d7b3764296ea4a113 rclone-v1.63.1-freebsd-386.zip
+7266febec1f01a25d6575de51c44ddf749071a4950a6384e4164954dff7ac37e rclone-v1.63.1-freebsd-amd64.zip
+...
+66ca083757fb22198309b73879831ed2b42309892394bf193ff95c75dff69c73 rclone-v1.63.1-windows-amd64.zip
+bbb47c16882b6c5f2e8c1b04229378e28f68734c613321ef0ea2263760f74cd0 rclone-v1.63.1-windows-arm64.zip
+-----BEGIN PGP SIGNATURE-----
+
+iF0EARECAB0WIQT79zfs6firGGBL0qyTk14C/ztU+gUCZLVKJQAKCRCTk14C/ztU
++pZuAJ0XJ+QWLP/3jCtkmgcgc4KAwd/rrwCcCRZQ7E+oye1FPY46HOVzCFU3L7g=
+=8qrL
+-----END PGP SIGNATURE-----
+```
+
+### Download the files
+
+The first step is to download the binary and SUMs file and verify that
+the SUMs you have downloaded match. Here we download
+`rclone-v1.63.1-windows-amd64.zip` - choose the binary (or binaries)
+appropriate to your architecture. We've also chosen the `SHA256SUMS`
+as these are the most secure. You could verify the other types of hash
+also for extra security. `rclone selfupdate` verifies just the
+`SHA256SUMS`.
+
+```
+$ mkdir /tmp/check
+$ cd /tmp/check
+$ rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:SHA256SUMS .
+$ rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:rclone-v1.63.1-windows-amd64.zip .
+```
+
+### Verify the signatures
+
+First verify the signatures on the SHA256 file.
+
+Import the key. See above for ways to verify this key is correct.
+
+```
+$ gpg --keyserver keyserver.ubuntu.com --receive-keys FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA
+gpg: key 93935E02FF3B54FA: public key "Nick Craig-Wood <nick@craig-wood.com>" imported
+gpg: Total number processed: 1
+gpg: imported: 1
+```
+
+Then check the signature:
+
+```
+$ gpg --verify SHA256SUMS
+gpg: Signature made Mon 17 Jul 2023 15:03:17 BST
+gpg: using DSA key FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA
+gpg: Good signature from "Nick Craig-Wood <nick@craig-wood.com>" [ultimate]
+```
+
+Verify the signature was good and is using the fingerprint shown above.
+
+Repeat for `MD5SUMS` and `SHA1SUMS` if desired.
+
+### Verify the hashes
+
+Now that we know the signatures on the hashes are OK we can verify the
+binaries match the hashes, completing the verification.
+
+```
+$ sha256sum -c SHA256SUMS 2>&1 | grep OK
+rclone-v1.63.1-windows-amd64.zip: OK
+```
+
+Or do the check with rclone
+
+```
+$ rclone hashsum sha256 -C SHA256SUMS rclone-v1.63.1-windows-amd64.zip
+2023/09/11 10:53:58 NOTICE: SHA256SUMS: improperly formatted checksum line 0
+2023/09/11 10:53:58 NOTICE: SHA256SUMS: improperly formatted checksum line 1
+2023/09/11 10:53:58 NOTICE: SHA256SUMS: improperly formatted checksum line 49
+2023/09/11 10:53:58 NOTICE: SHA256SUMS: 4 warning(s) suppressed...
+= rclone-v1.63.1-windows-amd64.zip
+2023/09/11 10:53:58 NOTICE: Local file system at /tmp/check: 0 differences found
+2023/09/11 10:53:58 NOTICE: Local file system at /tmp/check: 1 matching files
+```
+
+### Verify signatures and hashes together
+
+You can verify the signatures and hashes in one command line like this:
+
+```
+$ gpg --decrypt SHA256SUMS | sha256sum -c --ignore-missing
+gpg: Signature made Mon 17 Jul 2023 15:03:17 BST
+gpg: using DSA key FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA
+gpg: Good signature from "Nick Craig-Wood <nick@craig-wood.com>" [ultimate]
+gpg: aka "Nick Craig-Wood " [unknown]
+rclone-v1.63.1-windows-amd64.zip: OK
+```
+
# 1Fichier

This is a backend for the [1fichier](https://1fichier.com) cloud
@@ -18100,6 +20391,7 @@ The S3 backend can be used with a number of different providers:
- IBM COS S3
- IDrive e2
- IONOS Cloud
+  - Leviia Object Storage
  - Liara Object Storage
  - Minio
  - Petabox
@@ -18110,6 +20402,7 @@ The S3 backend can be used with a number of different providers:
- SeaweedFS
- StackPath
- Storj
+- Synology C2 Object Storage
- Tencent Cloud Object Storage (COS)
- Wasabi

@@ -18540,6 +20833,19 @@ $ rclone -q --s3-versions ls s3:cleanup-test
        9 one.txt
```

+#### Versions naming caveat
+
+When using the `--s3-versions` flag, rclone relies on the file name
+to work out whether the objects are versions or not. Versions' names
+are created by inserting a timestamp between the file name and its extension.
+```
+        9 file.txt
+        8 file-v2023-07-17-161032-000.txt
+       16 file-v2023-06-15-141003-000.txt
+```
+If there are real files present with the same names as versions, then the
+behaviour of `--s3-versions` can be unpredictable.
+
### Cleanup

If you run `rclone cleanup s3:bucket` then it will remove all pending
@@ -18727,7 +21033,7 @@ A simple solution is to set the `--s3-upload-cutoff 0` and force all the files t

### Standard options

-Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi).
+Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Leviia, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi).
#### --s3-provider @@ -18768,6 +21074,8 @@ Properties: - IONOS Cloud - "LyveCloud" - Seagate Lyve Cloud + - "Leviia" + - Leviia Object Storage - "Liara" - Liara Object Storage - "Minio" @@ -18786,6 +21094,8 @@ Properties: - StackPath Object Storage - "Storj" - Storj (S3 Compatible Gateway) + - "Synology" + - Synology C2 Object Storage - "TencentCOS" - Tencent Cloud Object Storage (COS) - "Wasabi" @@ -19139,6 +21449,30 @@ Properties: #### --s3-region +Region where your data stored. + + +Properties: + +- Config: region +- Env Var: RCLONE_S3_REGION +- Provider: Synology +- Type: string +- Required: false +- Examples: + - "eu-001" + - Europe Region 1 + - "eu-002" + - Europe Region 2 + - "us-001" + - US Region 1 + - "us-002" + - US Region 2 + - "tw-001" + - Asia (Taiwan) + +#### --s3-region + Region to connect to. Leave blank if you are using an S3 clone and you don't have a region. @@ -19147,7 +21481,7 @@ Properties: - Config: region - Env Var: RCLONE_S3_REGION -- Provider: !AWS,Alibaba,ArvanCloud,ChinaMobile,Cloudflare,IONOS,Petabox,Liara,Qiniu,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive +- Provider: !AWS,Alibaba,ArvanCloud,ChinaMobile,Cloudflare,IONOS,Petabox,Liara,Qiniu,RackCorp,Scaleway,Storj,Synology,TencentCOS,HuaweiOBS,IDrive - Type: string - Required: false - Examples: @@ -19453,6 +21787,22 @@ Properties: #### --s3-endpoint +Endpoint for Leviia Object Storage API. + +Properties: + +- Config: endpoint +- Env Var: RCLONE_S3_ENDPOINT +- Provider: Leviia +- Type: string +- Required: false +- Examples: + - "s3.leviia.com" + - The default endpoint + - Leviia + +#### --s3-endpoint + Endpoint for Liara Object Storage API. Properties: @@ -19643,6 +21993,29 @@ Properties: #### --s3-endpoint +Endpoint for Synology C2 Object Storage API. + +Properties: + +- Config: endpoint +- Env Var: RCLONE_S3_ENDPOINT +- Provider: Synology +- Type: string +- Required: false +- Examples: + - "eu-001.s3.synologyc2.net" + - EU Endpoint 1 + - "eu-002.s3.synologyc2.net" + - EU Endpoint 2 + - "us-001.s3.synologyc2.net" + - US Endpoint 1 + - "us-002.s3.synologyc2.net" + - US Endpoint 2 + - "tw-001.s3.synologyc2.net" + - TW Endpoint 1 + +#### --s3-endpoint + Endpoint for Tencent COS API. 
Properties: @@ -19780,7 +22153,7 @@ Properties: - Config: endpoint - Env Var: RCLONE_S3_ENDPOINT -- Provider: !AWS,ArvanCloud,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,GCS,Liara,Scaleway,StackPath,Storj,RackCorp,Qiniu,Petabox +- Provider: !AWS,ArvanCloud,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,GCS,Liara,Scaleway,StackPath,Storj,Synology,RackCorp,Qiniu,Petabox - Type: string - Required: false - Examples: @@ -20168,7 +22541,7 @@ Properties: - Config: location_constraint - Env Var: RCLONE_S3_LOCATION_CONSTRAINT -- Provider: !AWS,Alibaba,ArvanCloud,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,Liara,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS,Petabox +- Provider: !AWS,Alibaba,ArvanCloud,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,Leviia,Liara,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS,Petabox - Type: string - Required: false @@ -20191,7 +22564,7 @@ Properties: - Config: acl - Env Var: RCLONE_S3_ACL -- Provider: !Storj,Cloudflare +- Provider: !Storj,Synology,Cloudflare - Type: string - Required: false - Examples: @@ -20446,7 +22819,7 @@ Properties: ### Advanced options -Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi). +Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Leviia, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi). #### --s3-bucket-acl @@ -20944,10 +23317,7 @@ Properties: #### --s3-memory-pool-flush-time -How often internal memory buffer pools will be flushed. - -Uploads which requires additional buffers (f.e multipart) will use memory pool for allocations. -This option controls how often unused buffers will be removed from the pool. +How often internal memory buffer pools will be flushed. (no longer used) Properties: @@ -20958,7 +23328,7 @@ Properties: #### --s3-memory-pool-use-mmap -Whether to use mmap buffers in internal memory pool. +Whether to use mmap buffers in internal memory pool. (no longer used) Properties: @@ -21224,17 +23594,17 @@ to normal storage. Usage Examples: - rclone backend restore s3:bucket/path/to/object [-o priority=PRIORITY] [-o lifetime=DAYS] - rclone backend restore s3:bucket/path/to/directory [-o priority=PRIORITY] [-o lifetime=DAYS] - rclone backend restore s3:bucket [-o priority=PRIORITY] [-o lifetime=DAYS] + rclone backend restore s3:bucket/path/to/object -o priority=PRIORITY -o lifetime=DAYS + rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS + rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS This flag also obeys the filters. 
Test first with --interactive/-i or --dry-run flags - rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard + rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1 All the objects shown will be marked for restore, then - rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard + rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1 It returns a list of status dictionaries with Remote and Status keys. The Status will be OK if it was successful or an error message @@ -21243,11 +23613,11 @@ if not. [ { "Status": "OK", - "Path": "test.txt" + "Remote": "test.txt" }, { "Status": "OK", - "Path": "test/file4.txt" + "Remote": "test/file4.txt" } ] @@ -21259,6 +23629,51 @@ Options: - "lifetime": Lifetime of the active copy in days - "priority": Priority of restore: Standard|Expedited|Bulk +### restore-status + +Show the restore status for objects being restored from GLACIER to normal storage + + rclone backend restore-status remote: [options] [+] + +This command can be used to show the status for objects being restored from GLACIER +to normal storage. + +Usage Examples: + + rclone backend restore-status s3:bucket/path/to/object + rclone backend restore-status s3:bucket/path/to/directory + rclone backend restore-status -o all s3:bucket/path/to/directory + +This command does not obey the filters. + +It returns a list of status dictionaries. + + [ + { + "Remote": "file.txt", + "VersionID": null, + "RestoreStatus": { + "IsRestoreInProgress": true, + "RestoreExpiryDate": "2023-09-06T12:29:19+01:00" + }, + "StorageClass": "GLACIER" + }, + { + "Remote": "test.pdf", + "VersionID": null, + "RestoreStatus": { + "IsRestoreInProgress": false, + "RestoreExpiryDate": "2023-09-06T12:29:19+01:00" + }, + "StorageClass": "DEEP_ARCHIVE" + } + ] + + +Options: + +- "all": if set then show all objects, not just ones with restore status + ### list-multipart-uploads List the unfinished multipart uploads @@ -21353,6 +23768,30 @@ It may return "Enabled", "Suspended" or "Unversioned". Note that once versioning has been enabled the status can't be set back to "Unversioned". +### set + +Set command for updating the config parameters. + + rclone backend set remote: [options] [+] + +This set command can be used to update the config parameters +for a running s3 backend. + +Usage Examples: + + rclone backend set s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2] + rclone rc backend/command command=set fs=s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2] + rclone rc backend/command command=set fs=s3: -o session_token=X -o access_key_id=X -o secret_access_key=X + +The option keys are named as they are in the config file. + +This rebuilds the connection to the s3 backend when it is called with +the new parameters. Only new parameters need be passed as the values +will default to those currently in use. + +It doesn't return anything. + + ### Anonymous access to public buckets @@ -21499,7 +23938,7 @@ Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. ... 
-XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi +XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi \ (s3) ... Storage> s3 @@ -21683,7 +24122,7 @@ Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. [snip] - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi + 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi \ (s3) [snip] Storage> 5 @@ -21978,7 +24417,7 @@ Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. [snip] -XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi +XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi \ (s3) [snip] Storage> s3 @@ -22084,7 +24523,7 @@ Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. 
[snip] -XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi +XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi \ (s3) [snip] Storage> s3 @@ -22330,7 +24769,7 @@ Choose a number from below, or type in your own value \ (alias) 4 / Amazon Drive \ (amazon cloud drive) - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi + 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi \ (s3) [snip] Storage> s3 @@ -23210,7 +25649,129 @@ e) Edit this remote d) Delete this remote y/e/d> y ``` +### Leviia Cloud Object Storage {#leviia} +[Leviia Object Storage](https://www.leviia.com/object-storage/), backup and secure your data in a 100% French cloud, independent of GAFAM.. + +To configure access to Leviia, follow the steps below: + +1. Run `rclone config` and select `n` for a new remote. + +``` +rclone config +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +``` + +2. Give the name of the configuration. For example, name it 'leviia'. + +``` +name> leviia +``` + +3. Select `s3` storage. + +``` +Choose a number from below, or type in your own value + 1 / 1Fichier + \ (fichier) + 2 / Akamai NetStorage + \ (netstorage) + 3 / Alias for an existing remote + \ (alias) + 4 / Amazon Drive + \ (amazon cloud drive) + 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi + \ (s3) +[snip] +Storage> s3 +``` + +4. Select `Leviia` provider. +``` +Choose a number from below, or type in your own value +1 / Amazon Web Services (AWS) S3 + \ "AWS" +[snip] +15 / Leviia Object Storage + \ (Leviia) +[snip] +provider> Leviia +``` + +5. Enter your SecretId and SecretKey of Leviia. + +``` +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Enter a boolean value (true or false). Press Enter for the default ("false"). +Choose a number from below, or type in your own value + 1 / Enter AWS credentials in the next step + \ "false" + 2 / Get AWS credentials from the environment (env vars or IAM) + \ "true" +env_auth> 1 +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a string value. Press Enter for the default (""). +access_key_id> ZnIx.xxxxxxxxxxxxxxx +AWS Secret Access Key (password) +Leave blank for anonymous access or runtime credentials. +Enter a string value. 
Press Enter for the default (""). +secret_access_key> xxxxxxxxxxx +``` + +6. Select endpoint for Leviia. + +``` + / The default endpoint + 1 | Leviia. + \ (s3.leviia.com) +[snip] +endpoint> 1 +``` +7. Choose acl. + +``` +Note that this ACL is applied when server-side copying objects as S3 +doesn't copy the ACL from the source but rather writes a fresh one. +Enter a string value. Press Enter for the default (""). +Choose a number from below, or type in your own value + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \ (private) + / Owner gets FULL_CONTROL. + 2 | The AllUsers group gets READ access. + \ (public-read) +[snip] +acl> 1 +Edit advanced config? (y/n) +y) Yes +n) No (default) +y/n> n +Remote config +-------------------- +[leviia] +- type: s3 +- provider: Leviia +- access_key_id: ZnIx.xxxxxxx +- secret_access_key: xxxxxxxx +- endpoint: s3.leviia.com +- acl: private +-------------------- +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +Current remotes: + +Name Type +==== ==== +leviia s3 +``` ### Liara {#liara-cloud} Here is an example of making a [Liara Object Storage](https://liara.ir/landing/object-storage) @@ -23828,6 +26389,140 @@ remote. See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) + + +### Synology C2 Object Storage {#synology-c2} + +[Synology C2 Object Storage](https://c2.synology.com/en-global/object-storage/overview) provides a secure, S3-compatible, and cost-effective cloud storage solution without API request, download fees, and deletion penalty. + +The S3 compatible gateway is configured using `rclone config` with a +type of `s3` and with a provider name of `Synology`. Here is an example +run of the configurator. + +First run: + +``` +rclone config +``` + +This will guide you through an interactive setup process. + +``` +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config + +n/s/q> n + +Enter name for new remote.1 +name> syno + +Type of storage to configure. +Enter a string value. Press Enter for the default (""). +Choose a number from below, or type in your own value + + 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, GCS, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi + \ "s3" + +Storage> s3 + +Choose your S3 provider. +Enter a string value. Press Enter for the default (""). +Choose a number from below, or type in your own value + 24 / Synology C2 Object Storage + \ (Synology) + +provider> Synology + +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Enter a boolean value (true or false). Press Enter for the default ("false"). +Choose a number from below, or type in your own value + 1 / Enter AWS credentials in the next step + \ "false" + 2 / Get AWS credentials from the environment (env vars or IAM) + \ "true" + +env_auth> 1 + +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a string value. Press Enter for the default (""). + +access_key_id> accesskeyid + +AWS Secret Access Key (password) +Leave blank for anonymous access or runtime credentials. +Enter a string value. 
Press Enter for the default (""). + +secret_access_key> secretaccesskey + +Region where your data stored. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / Europe Region 1 + \ (eu-001) + 2 / Europe Region 2 + \ (eu-002) + 3 / US Region 1 + \ (us-001) + 4 / US Region 2 + \ (us-002) + 5 / Asia (Taiwan) + \ (tw-001) + +region > 1 + +Option endpoint. +Endpoint for Synology C2 Object Storage API. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / EU Endpoint 1 + \ (eu-001.s3.synologyc2.net) + 2 / US Endpoint 1 + \ (us-001.s3.synologyc2.net) + 3 / TW Endpoint 1 + \ (tw-001.s3.synologyc2.net) + +endpoint> 1 + +Option location_constraint. +Location constraint - must be set to match the Region. +Leave blank if not sure. Used when creating buckets only. +Enter a value. Press Enter to leave empty. +location_constraint> + +Edit advanced config? (y/n) +y) Yes +n) No +y/n> y + +Option no_check_bucket. +If set, don't attempt to check the bucket exists or create it. +This can be useful when trying to minimise the number of transactions +rclone does if you know the bucket exists already. +It can also be needed if the user you are using does not have bucket +creation permissions. Before v1.52.0 this would have passed silently +due to a bug. +Enter a boolean value (true or false). Press Enter for the default (true). + +no_check_bucket> true + +Configuration complete. +Options: +- type: s3 +- provider: Synology +- region: eu-001 +- endpoint: eu-001.s3.synologyc2.net +- no_check_bucket: true +Keep this "syno" remote? +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote + +y/e/d> y + # Backblaze B2 B2 is [Backblaze's cloud storage system](https://www.backblaze.com/b2/). @@ -24055,6 +26750,19 @@ $ rclone -q --b2-versions ls b2:cleanup-test 9 one.txt ``` +#### Versions naming caveat + +When using `--b2-versions` flag rclone is relying on the file name +to work out whether the objects are versions or not. Versions' names +are created by inserting timestamp between file name and its extension. +``` + 9 file.txt + 8 file-v2023-07-17-161032-000.txt + 16 file-v2023-06-15-141003-000.txt +``` +If there are real files present with the same names as versions, then +behaviour of `--b2-versions` can be unpredictable. + ### Data usage It is useful to know how many requests are sent to the server in different scenarios. @@ -24303,6 +27011,24 @@ Properties: - Type: SizeSuffix - Default: 96Mi +#### --b2-upload-concurrency + +Concurrency for multipart uploads. + +This is the number of chunks of the same file that are uploaded +concurrently. + +Note that chunks are stored in memory and there may be up to +"--transfers" * "--b2-upload-concurrency" chunks stored at once +in memory. + +Properties: + +- Config: upload_concurrency +- Env Var: RCLONE_B2_UPLOAD_CONCURRENCY +- Type: int +- Default: 16 + #### --b2-disable-checksum Disable checksums for large (> upload cutoff) files. @@ -24361,9 +27087,7 @@ Properties: #### --b2-memory-pool-flush-time -How often internal memory buffer pools will be flushed. -Uploads which requires additional buffers (f.e multipart) will use memory pool for allocations. -This option controls how often unused buffers will be removed from the pool. +How often internal memory buffer pools will be flushed. (no longer used) Properties: @@ -24374,7 +27098,7 @@ Properties: #### --b2-memory-pool-use-mmap -Whether to use mmap buffers in internal memory pool. +Whether to use mmap buffers in internal memory pool. 
(no longer used)

Properties:

@@ -24841,6 +27565,28 @@ Properties:
- Type: string
- Required: false

+#### --box-impersonate
+
+Impersonate this user ID when using a service account.
+
+Setting this flag allows rclone, when using a JWT service account, to
+act on behalf of another user by setting the as-user header.
+
+The user ID is the Box identifier for a user. User IDs can be found for
+any user via the GET /users endpoint, which is only available to
+admins, or by calling the GET /users/me endpoint with an authenticated
+user session.
+
+See: https://developer.box.com/guides/authentication/jwt/as-user/
+
+
+Properties:
+
+- Config: impersonate
+- Env Var: RCLONE_BOX_IMPERSONATE
+- Type: string
+- Required: false
+
#### --box-encoding

The encoding for the backend.
@@ -24876,6 +27622,31 @@ remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)

+## Get your own Box App ID
+
+Here is how to create your own Box App ID for rclone:
+
+1. Go to the [Box Developer Console](https://app.box.com/developers/console)
+and log in, then click `My Apps` on the sidebar. Click `Create New App`
+and select `Custom App`.
+
+2. In the first screen on the box that pops up, you can pretty much enter
+whatever you want. The `App Name` can be whatever. For `Purpose` choose
+automation to avoid having to fill out anything else. Click `Next`.
+
+3. In the second screen of the creation screen, select
+`User Authentication (OAuth 2.0)`. Then click `Create App`.
+
+4. You should now be on the `Configuration` tab of your new app. If not,
+click on it at the top of the webpage. Copy down `Client ID`
+and `Client Secret`, you'll need those for rclone.
+
+5. Under "OAuth 2.0 Redirect URI", add `http://127.0.0.1:53682/`
+
+6. For `Application Scopes`, select `Read all files and folders stored in Box`
+and `Write all files and folders stored in box` (assuming you want to do both).
+Leave others unchecked. Click `Save Changes` at the top right.
+
# Cache

The `cache` remote wraps another existing remote and stores file structure
@@ -25651,7 +28422,7 @@ will put files in a directory called `name` in the current directory.

When rclone starts a file upload, chunker checks the file size. If it
doesn't exceed the configured chunk size, chunker will just pass the file
-to the wrapped remote. If a file is large, chunker will transparently cut
+to the wrapped remote (however, see caveat below). If a file is large, chunker will transparently cut
data in pieces with temporary names and stream them one by one, on the
fly. Each data chunk will contain the specified number of bytes, except
for the last one which may have less data. If file size is unknown in advance
@@ -25683,6 +28454,14 @@ proceed with current command.
You can set the `--chunker-fail-hard` flag to have commands abort with
error message in such cases.

+**Caveat**: As it is now, chunker will always create a temporary file in the
+backend and then rename it, even if the file is below the chunk threshold.
+This will result in unnecessary API calls and can severely restrict throughput
+when handling transfers primarily composed of small files on some backends (e.g. Box).
+A workaround to this issue is to use chunker only for files above the chunk threshold
+via `--min-size` and then perform a separate call without chunker on the remaining
+files.
+

#### Chunk names

@@ -26175,6 +28954,32 @@ as they can't be used in JSON strings.
Here are the Standard options specific to sharefile (Citrix Sharefile). +#### --sharefile-client-id + +OAuth Client Id. + +Leave blank normally. + +Properties: + +- Config: client_id +- Env Var: RCLONE_SHAREFILE_CLIENT_ID +- Type: string +- Required: false + +#### --sharefile-client-secret + +OAuth Client Secret. + +Leave blank normally. + +Properties: + +- Config: client_secret +- Env Var: RCLONE_SHAREFILE_CLIENT_SECRET +- Type: string +- Required: false + #### --sharefile-root-folder-id ID of the root folder. @@ -26204,6 +29009,43 @@ Properties: Here are the Advanced options specific to sharefile (Citrix Sharefile). +#### --sharefile-token + +OAuth Access Token as a JSON blob. + +Properties: + +- Config: token +- Env Var: RCLONE_SHAREFILE_TOKEN +- Type: string +- Required: false + +#### --sharefile-auth-url + +Auth server URL. + +Leave blank to use the provider defaults. + +Properties: + +- Config: auth_url +- Env Var: RCLONE_SHAREFILE_AUTH_URL +- Type: string +- Required: false + +#### --sharefile-token-url + +Token server url. + +Leave blank to use the provider defaults. + +Properties: + +- Config: token_url +- Env Var: RCLONE_SHAREFILE_TOKEN_URL +- Type: string +- Required: false + #### --sharefile-upload-cutoff Cutoff for switching to multipart upload. @@ -26651,7 +29493,7 @@ address this problem to a certain degree. For cloud storage systems with case sensitive file names (e.g. Google Drive), `base64` can be used to reduce file name length. For cloud storage systems using UTF-16 to store file names internally -(e.g. OneDrive, Dropbox), `base32768` can be used to drastically reduce +(e.g. OneDrive, Dropbox, Box), `base32768` can be used to drastically reduce file name length. An alternative, future rclone file name encryption mode may tolerate @@ -27882,7 +30724,7 @@ to be the same account as the Dropbox you want to access) 2. Choose an API => Usually this should be `Dropbox API` -3. Choose the type of access you want to use => `Full Dropbox` or `App Folder` +3. Choose the type of access you want to use => `Full Dropbox` or `App Folder`. If you want to use Team Folders, `Full Dropbox` is required ([see here](https://www.dropboxforum.com/t5/Dropbox-API-Support-Feedback/How-to-create-team-folder-inside-my-app-s-folder/m-p/601005/highlight/true#M27911)). 4. Name your App. The app name is global, so you can't use `rclone` for example @@ -27890,7 +30732,7 @@ to be the same account as the Dropbox you want to access) 6. Switch to the `Permissions` tab. Enable at least the following permissions: `account_info.read`, `files.metadata.write`, `files.content.write`, `files.content.read`, `sharing.write`. The `files.metadata.read` and `sharing.read` checkboxes will be marked too. Click `Submit` -7. Switch to the `Settings` tab. Fill `OAuth2 - Redirect URIs` as `http://localhost:53682/` +7. Switch to the `Settings` tab. Fill `OAuth2 - Redirect URIs` as `http://localhost:53682/` and click on `Add` 8. Find the `App key` and `App secret` values on the `Settings` tab. Use these values in rclone config to add a new remote or edit an existing remote. The `App key` setting corresponds to `client_id` in rclone config, the `App secret` corresponds to `client_secret` @@ -28577,6 +31419,24 @@ Properties: - Type: bool - Default: false +#### --ftp-socks-proxy + +Socks 5 proxy host. + + Supports the format user:pass@host:port, user@host:port, host:port. 
+ + Example: + + myUser:myPass@localhost:9005 + + +Properties: + +- Config: socks_proxy +- Env Var: RCLONE_FTP_SOCKS_PROXY +- Type: string +- Required: false + #### --ftp-encoding The encoding for the backend. @@ -30546,7 +33406,7 @@ This resource key requirement only applies to a subset of old files. Note also that opening the folder once in the web interface (with the user you've authenticated rclone with) seems to be enough so that the -resource key is no needed. +resource key is not needed. Properties: @@ -30556,6 +33416,34 @@ Properties: - Type: string - Required: false +#### --drive-fast-list-bug-fix + +Work around a bug in Google Drive listing. + +Normally rclone will work around a bug in Google Drive when using +--fast-list (ListR) where the search "(A in parents) or (B in +parents)" returns nothing sometimes. See #3114, #4289 and +https://issuetracker.google.com/issues/149522397 + +Rclone detects this by finding no items in more than one directory +when listing and retries them as lists of individual directories. + +This means that if you have a lot of empty directories rclone will end +up listing them all individually and this can take many more API +calls. + +This flag allows the work-around to be disabled. This is **not** +recommended in normal use - only if you have a particular case you are +having trouble with like many empty directories. + + +Properties: + +- Config: fast_list_bug_fix +- Env Var: RCLONE_DRIVE_FAST_LIST_BUG_FIX +- Type: bool +- Default: true + #### --drive-encoding The encoding for the backend. @@ -30874,7 +33762,7 @@ be the same account as the Google Drive you want to access) "Google Drive API". 4. Click "Credentials" in the left-side panel (not "Create -credentials", which opens the wizard), then "Create credentials" +credentials", which opens the wizard). 5. If you already configured an "Oauth Consent Screen", then skip to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button @@ -32936,6 +35824,8 @@ it also provides white-label solutions to different companies, such as: * Telia Sky (sky.telia.no) * Tele2 * Tele2 Cloud (mittcloud.tele2.se) +* Onlime + * Onlime Cloud Storage (onlime.dk) * Elkjøp (with subsidiaries): * Elkjøp Cloud (cloud.elkjop.no) * Elgiganten Sweden (cloud.elgiganten.se) @@ -33006,6 +35896,18 @@ Tele2 Cloud customers as no support for creating a CLI token exists, and additio authentication flow where the username is generated internally. To setup rclone to use Tele2 Cloud, choose Tele2 Cloud authentication in the setup. The rest of the setup is identical to the default setup. +### Onlime Cloud Storage authentication + +Onlime has sold access to Jottacloud proper, while providing localized support to Danish Customers, but +have recently set up their own hosting, transferring their customers from Jottacloud servers to their +own ones. + +This, of course, necessitates using their servers for authentication, but otherwise functionality and +architecture seems equivalent to Jottacloud. + +To setup rclone to use Onlime Cloud Storage, choose Onlime Cloud authentication in the setup. The rest +of the setup is identical to the default setup. + ## Configuration Here is an example of how to make a remote called `remote` with the default setup. First run: @@ -33049,6 +35951,9 @@ Press Enter for the default (standard). / Tele2 Cloud authentication. 4 | Use this if you are using Tele2 Cloud. \ (tele2) + / Onlime Cloud authentication. + 5 | Use this if you are using Onlime Cloud. + \ (onlime) config_type> 1 Personal login token. 
Generate here: https://www.jottacloud.com/web/secure @@ -33210,10 +36115,77 @@ command which will display your usage limit (unless it is unlimited) and the current usage. +### Standard options + +Here are the Standard options specific to jottacloud (Jottacloud). + +#### --jottacloud-client-id + +OAuth Client Id. + +Leave blank normally. + +Properties: + +- Config: client_id +- Env Var: RCLONE_JOTTACLOUD_CLIENT_ID +- Type: string +- Required: false + +#### --jottacloud-client-secret + +OAuth Client Secret. + +Leave blank normally. + +Properties: + +- Config: client_secret +- Env Var: RCLONE_JOTTACLOUD_CLIENT_SECRET +- Type: string +- Required: false + ### Advanced options Here are the Advanced options specific to jottacloud (Jottacloud). +#### --jottacloud-token + +OAuth Access Token as a JSON blob. + +Properties: + +- Config: token +- Env Var: RCLONE_JOTTACLOUD_TOKEN +- Type: string +- Required: false + +#### --jottacloud-auth-url + +Auth server URL. + +Leave blank to use the provider defaults. + +Properties: + +- Config: auth_url +- Env Var: RCLONE_JOTTACLOUD_AUTH_URL +- Type: string +- Required: false + +#### --jottacloud-token-url + +Token server url. + +Leave blank to use the provider defaults. + +Properties: + +- Config: token_url +- Env Var: RCLONE_JOTTACLOUD_TOKEN_URL +- Type: string +- Required: false + #### --jottacloud-md5-memory-limit Files bigger than this will be cached on disk to calculate the MD5 if required. @@ -33696,8 +36668,6 @@ y/e/d> y [Mail.ru Cloud](https://cloud.mail.ru/) is a cloud storage provided by a Russian internet company [Mail.Ru Group](https://mail.ru). The official desktop client is [Disk-O:](https://disk-o.cloud/en), available on Windows and Mac OS. -Currently it is recommended to disable 2FA on Mail.ru accounts intended for rclone until it gets eventually implemented. - ## Features highlights - Paths may be as deep as required, e.g. `remote:directory/subdirectory` @@ -33864,6 +36834,32 @@ as they can't be used in JSON strings. Here are the Standard options specific to mailru (Mail.ru Cloud). +#### --mailru-client-id + +OAuth Client Id. + +Leave blank normally. + +Properties: + +- Config: client_id +- Env Var: RCLONE_MAILRU_CLIENT_ID +- Type: string +- Required: false + +#### --mailru-client-secret + +OAuth Client Secret. + +Leave blank normally. + +Properties: + +- Config: client_secret +- Env Var: RCLONE_MAILRU_CLIENT_SECRET +- Type: string +- Required: false + #### --mailru-user User name (usually email). @@ -33922,6 +36918,43 @@ Properties: Here are the Advanced options specific to mailru (Mail.ru Cloud). +#### --mailru-token + +OAuth Access Token as a JSON blob. + +Properties: + +- Config: token +- Env Var: RCLONE_MAILRU_TOKEN +- Type: string +- Required: false + +#### --mailru-auth-url + +Auth server URL. + +Leave blank to use the provider defaults. + +Properties: + +- Config: auth_url +- Env Var: RCLONE_MAILRU_AUTH_URL +- Type: string +- Required: false + +#### --mailru-token-url + +Token server url. + +Leave blank to use the provider defaults. + +Properties: + +- Config: token_url +- Env Var: RCLONE_MAILRU_TOKEN_URL +- Type: string +- Required: false + #### --mailru-speedup-file-patterns Comma separated list of file name patterns eligible for speedup (put by hash). @@ -34332,6 +37365,10 @@ Properties: +### Process `killed` + +On accounts with large files or something else, memory usage can significantly increase when executing list/sync instructions. 
When running on cloud providers (like AWS with EC2), check if the instance type has sufficient memory/CPU to execute the commands. Use the resource monitoring tools to inspect after sending the commands. Look [at this issue](https://forum.rclone.org/t/rclone-with-mega-appears-to-work-only-in-some-accounts/40233/4). + ## Limitations This backend uses the [go-mega go library](https://github.com/t3rm1n4l/go-mega) which is an opensource @@ -35412,10 +38449,7 @@ Properties: #### --azureblob-memory-pool-flush-time -How often internal memory buffer pools will be flushed. - -Uploads which requires additional buffers (f.e multipart) will use memory pool for allocations. -This option controls how often unused buffers will be removed from the pool. +How often internal memory buffer pools will be flushed. (no longer used) Properties: @@ -35426,7 +38460,7 @@ Properties: #### --azureblob-memory-pool-use-mmap -Whether to use mmap buffers in internal memory pool. +Whether to use mmap buffers in internal memory pool. (no longer used) Properties: @@ -36565,13 +39599,17 @@ remote. See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) # Oracle Object Storage -[Oracle Object Storage Overview](https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/objectstorageoverview.htm) - -[Oracle Object Storage FAQ](https://www.oracle.com/cloud/storage/object-storage/faq/) +- [Oracle Object Storage Overview](https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/objectstorageoverview.htm) +- [Oracle Object Storage FAQ](https://www.oracle.com/cloud/storage/object-storage/faq/) +- [Oracle Object Storage Limits](https://docs.oracle.com/en-us/iaas/Content/Resources/Assets/whitepapers/oci-object-storage-best-practices.pdf) Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`. +Sample command to transfer local artifacts to remote:bucket in oracle object storage: + +`rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv` + ## Configuration Here is an example of making an oracle object storage configuration. `rclone config` walks you @@ -36695,7 +39733,7 @@ List the contents of a bucket rclone ls remote:bucket rclone ls remote:bucket --max-depth 1 -### OCI Authentication Provider +## Authentication Providers OCI has various authentication methods. To learn more about authentication methods please refer [oci authentication methods](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm) @@ -36708,7 +39746,7 @@ Rclone supports the following OCI authentication provider. Resource Principal No authentication -#### Authentication provider choice: User Principal +### User Principal Sample rclone config file for Authentication Provider User Principal: [oos] @@ -36728,7 +39766,7 @@ Considerations: - Overhead of managing users and keys. - If the user is deleted, the config file will no longer work and may cause automation regressions that use the user's credentials. 
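+
+For reference, a minimal sketch of the OCI SDK configuration file that the
+user-principal sample above points at (`config_file = /etc/oci/dev.conf`,
+`config_profile = Development`) might look like the following -- every value
+below is a placeholder rather than a real OCID, fingerprint or key path:
+
+    [Development]
+    user=ocid1.user.oc1..<placeholder>
+    fingerprint=<placeholder>
+    key_file=~/.oci/private_key.pem
+    tenancy=ocid1.tenancy.oc1..<placeholder>
+    region=us-ashburn-1
+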
-#### Authentication provider choice: Instance Principal
+### Instance Principal
 
 An OCI compute instance can be authorized to use rclone by using its identity and certificates as an instance principal. 
 With this approach no credentials have to be stored and managed. 
 
@@ -36757,7 +39795,7 @@ Considerations:
 - Everyone who has access to this machine can execute the CLI commands.
 - It is applicable for oci compute instances only. It cannot be used on external instances or resources.
 
-#### Authentication provider choice: Resource Principal
+### Resource Principal
 
 Resource principal auth is very similar to instance principal auth but used for resources that are not 
 compute instances such as [serverless functions](https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm). 
 To use resource principal, ensure the rclone process is started with these environment variables set in its environment. 
 
@@ -36776,7 +39814,7 @@ Sample rclone configuration file for Authentication Provider Resource Principal:
     region = us-ashburn-1
     provider = resource_principal_auth
 
-#### Authentication provider choice: No authentication
+### No authentication
 
 Public buckets do not require any authentication mechanism to read objects. 
 Sample rclone configuration file for No authentication:
 
@@ -36979,9 +40017,8 @@ Properties:
 
 Chunk size to use for uploading.
 
 When uploading files larger than upload_cutoff or files with unknown
-size (e.g. from "rclone rcat" or uploaded with "rclone mount" or google
-photos or google docs) they will be uploaded as multipart uploads
-using this chunk size.
+size (e.g. from "rclone rcat" or uploaded with "rclone mount") they will be uploaded
+as multipart uploads using this chunk size.
 
 Note that "upload_concurrency" chunks of this size are buffered in memory per transfer.
 
@@ -37009,6 +40046,26 @@ Properties:
 
 - Type: SizeSuffix
 - Default: 5Mi
 
+#### --oos-max-upload-parts
+
+Maximum number of parts in a multipart upload.
+
+This option defines the maximum number of multipart chunks to use
+when doing a multipart upload.
+
+OCI has a max parts limit of 10,000 chunks.
+
+Rclone will automatically increase the chunk size when uploading a
+large file of a known size to stay below this limit on the number of chunks.
+
+
+Properties:
+
+- Config: max_upload_parts
+- Env Var: RCLONE_OOS_MAX_UPLOAD_PARTS
+- Type: int
+- Default: 10000
+
 #### --oos-upload-concurrency
 
 Concurrency for multipart uploads.
 
@@ -37088,7 +40145,7 @@ Properties:
 
 #### --oos-leave-parts-on-error
 
-If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery.
+If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery.
 
 It should be set to true for resuming uploads across different sessions.
 
@@ -37103,6 +40160,24 @@ Properties:
 
 - Type: bool
 - Default: false
 
+#### --oos-attempt-resume-upload
+
+If true attempt to resume previously started multipart upload for the object.
+This will be helpful to speed up multipart transfers by resuming uploads from a past session.
+
+WARNING: If the chunk size differs in the resumed session from the past incomplete session, then the resumed multipart upload is
+aborted and a new multipart upload is started with the new chunk size.
+
+The flag leave_parts_on_error must be true to resume, and to optimise by skipping parts that were already uploaded successfully. 
+ + +Properties: + +- Config: attempt_resume_upload +- Env Var: RCLONE_OOS_ATTEMPT_RESUME_UPLOAD +- Type: bool +- Default: false + #### --oos-no-check-bucket If set, don't attempt to check the bucket exists or create it. @@ -37286,6 +40361,9 @@ Options: +## Tutorials +### [Mounting Buckets](https://rclone.org/oracleobjectstorage/tutorial_mount/) + # QingStor Paths are specified as `remote:bucket` (or `remote:` for the `lsd` @@ -37603,6 +40681,250 @@ remote. See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) +# Quatrix + +Quatrix by Maytech is [Quatrix Secure Compliant File Sharing | Maytech](https://www.maytech.net/products/quatrix-business). + +Paths are specified as `remote:path` + +Paths may be as deep as required, e.g., `remote:directory/subdirectory`. + +The initial setup for Quatrix involves getting an API Key from Quatrix. You can get the API key in the user's profile at `https:///profile/api-keys` +or with the help of the API - https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create. + +See complete Swagger documentation for Quatrix - https://docs.maytech.net/quatrix/quatrix-api/api-explorer + +## Configuration + +Here is an example of how to make a remote called `remote`. First run: + + rclone config + +This will guide you through an interactive setup process: + +``` +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> remote +Type of storage to configure. +Choose a number from below, or type in your own value +[snip] +XX / Quatrix by Maytech + \ "quatrix" +[snip] +Storage> quatrix +API key for accessing Quatrix account. +api_key> your_api_key +Host name of Quatrix account. +host> example.quatrix.it + +-------------------- +[remote] +api_key = your_api_key +host = example.quatrix.it +-------------------- +y) Yes this is OK +e) Edit this remote +d) Delete this remote +y/e/d> y +``` + +Once configured you can then use `rclone` like this, + +List directories in top level of your Quatrix + + rclone lsd remote: + +List all the files in your Quatrix + + rclone ls remote: + +To copy a local directory to an Quatrix directory called backup + + rclone copy /home/source remote:backup + +### API key validity + +API Key is created with no expiration date. It will be valid until you delete or deactivate it in your account. +After disabling, the API Key can be enabled back. If the API Key was deleted and a new key was created, you can +update it in rclone config. The same happens if the hostname was changed. + +``` +$ rclone config +Current remotes: + +Name Type +==== ==== +remote quatrix + +e) Edit existing remote +n) New remote +d) Delete remote +r) Rename remote +c) Copy remote +s) Set configuration password +q) Quit config +e/n/d/r/c/s/q> e +Choose a number from below, or type in an existing value + 1 > remote +remote> remote +-------------------- +[remote] +type = quatrix +host = some_host.quatrix.it +api_key = your_api_key +-------------------- +Edit remote +Option api_key. +API key for accessing Quatrix account +Enter a string value. Press Enter for the default (your_api_key) +api_key> +Option host. +Host name of Quatrix account +Enter a string value. Press Enter for the default (some_host.quatrix.it). 
+
+--------------------
+[remote]
+type = quatrix
+host = some_host.quatrix.it
+api_key = your_api_key
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+### Modified time and hashes
+
+Quatrix allows modification times to be set on objects accurate to 1 microsecond.
+These will be used to detect whether objects need syncing or not.
+
+Quatrix does not support hashes, so you cannot use the `--checksum` flag.
+
+### Restricted filename characters
+
+File names in Quatrix are case sensitive and must be between 1 and 255
+characters long. A file name cannot be equal to `.` or `..`, nor contain
+`/`, `\` or non-printable ASCII.
+
+### Transfers
+
+For files above 50 MiB rclone will use a chunked transfer. Rclone will upload
+up to `--transfers` chunks at the same time (shared among all multipart
+uploads). Chunks are buffered in memory, so increasing `--transfers` will
+increase memory use. The minimal chunk size is 10_000_000 bytes by default and
+can be changed in the advanced configuration. The chunk size also has a
+maximum limit, 100_000_000 bytes by default, which can likewise be changed in
+the advanced configuration. The size of the uploaded chunk changes dynamically
+depending on the upload speed. The baseline memory use equals the number of
+transfers multiplied by the minimal chunk size. If there is free memory
+allocated for the upload (the difference between `maximal_summary_chunk_size`
+and `minimal_chunk_size` * `transfers`), the chunk size may grow while the
+upload speed is high, and shrink again if there are upload speed problems. If
+no free memory is available, all chunks will equal `minimal_chunk_size`.
+
+### Deleting files
+
+Files you delete with rclone will end up in Trash and be stored there for 30 days.
+Quatrix also provides an API to permanently delete files and an API to empty the Trash
+so that you can remove files permanently from your account.
+
+
+### Standard options
+
+Here are the Standard options specific to quatrix (Quatrix by Maytech).
+
+#### --quatrix-api-key
+
+API key for accessing Quatrix account
+
+Properties:
+
+- Config: api_key
+- Env Var: RCLONE_QUATRIX_API_KEY
+- Type: string
+- Required: true
+
+#### --quatrix-host
+
+Host name of Quatrix account
+
+Properties:
+
+- Config: host
+- Env Var: RCLONE_QUATRIX_HOST
+- Type: string
+- Required: true
+
+### Advanced options
+
+Here are the Advanced options specific to quatrix (Quatrix by Maytech).
+
+#### --quatrix-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_QUATRIX_ENCODING
+- Type: MultiEncoder
+- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+
+#### --quatrix-effective-upload-time
+
+Wanted upload time for one chunk
+
+Properties:
+
+- Config: effective_upload_time
+- Env Var: RCLONE_QUATRIX_EFFECTIVE_UPLOAD_TIME
+- Type: string
+- Default: "4s"
+
+#### --quatrix-minimal-chunk-size
+
+The minimal size for one chunk
+
+Properties:
+
+- Config: minimal_chunk_size
+- Env Var: RCLONE_QUATRIX_MINIMAL_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 9.537Mi
+
+#### --quatrix-maximal-summary-chunk-size
+
+The maximal total size of all chunk buffers. 
It should not be less than 'transfers'*'minimal_chunk_size' + +Properties: + +- Config: maximal_summary_chunk_size +- Env Var: RCLONE_QUATRIX_MAXIMAL_SUMMARY_CHUNK_SIZE +- Type: SizeSuffix +- Default: 95.367Mi + +#### --quatrix-hard-delete + +Delete files permanently rather than putting them into the trash. + +Properties: + +- Config: hard_delete +- Env Var: RCLONE_QUATRIX_HARD_DELETE +- Type: bool +- Default: false + + + +## Storage usage + +The storage usage in Quatrix is restricted to the account during the purchase. You can restrict any user with a smaller storage limit. +The account limit is applied if the user has no custom storage limit. Once you've reached the limit, the upload of files will fail. +This can be fixed by freeing up the space or increasing the quota. + +## Server-side operations + +Quatrix supports server-side operations (copy and move). In case of conflict, files are overwritten during server-side operation. + # Sia Sia ([sia.tech](https://sia.tech/)) is a decentralized cloud storage platform @@ -39119,6 +42441,32 @@ as they can't be used in JSON strings. Here are the Standard options specific to premiumizeme (premiumize.me). +#### --premiumizeme-client-id + +OAuth Client Id. + +Leave blank normally. + +Properties: + +- Config: client_id +- Env Var: RCLONE_PREMIUMIZEME_CLIENT_ID +- Type: string +- Required: false + +#### --premiumizeme-client-secret + +OAuth Client Secret. + +Leave blank normally. + +Properties: + +- Config: client_secret +- Env Var: RCLONE_PREMIUMIZEME_CLIENT_SECRET +- Type: string +- Required: false + #### --premiumizeme-api-key API Key. @@ -39137,6 +42485,43 @@ Properties: Here are the Advanced options specific to premiumizeme (premiumize.me). +#### --premiumizeme-token + +OAuth Access Token as a JSON blob. + +Properties: + +- Config: token +- Env Var: RCLONE_PREMIUMIZEME_TOKEN +- Type: string +- Required: false + +#### --premiumizeme-auth-url + +Auth server URL. + +Leave blank to use the provider defaults. + +Properties: + +- Config: auth_url +- Env Var: RCLONE_PREMIUMIZEME_AUTH_URL +- Type: string +- Required: false + +#### --premiumizeme-token-url + +Token server url. + +Leave blank to use the provider defaults. + +Properties: + +- Config: token_url +- Env Var: RCLONE_PREMIUMIZEME_TOKEN_URL +- Type: string +- Required: false + #### --premiumizeme-encoding The encoding for the backend. @@ -39163,6 +42548,357 @@ rclone maps these to and from an identical looking unicode equivalents premiumize.me only supports filenames up to 255 characters in length. +# Proton Drive + +[Proton Drive](https://proton.me/drive) is an end-to-end encrypted Swiss vault + for your files that protects your data. + +This is an rclone backend for Proton Drive which supports the file transfer +features of Proton Drive using the same client-side encryption. + +Due to the fact that Proton Drive doesn't publish its API documentation, this +backend is implemented with best efforts by reading the open-sourced client +source code and observing the Proton Drive traffic in the browser. + +**NB** This backend is currently in Beta. It is believed to be correct +and all the integration tests pass. However the Proton Drive protocol +has evolved over time there may be accounts it is not compatible +with. Please [post on the rclone forum](https://forum.rclone.org/) if +you find an incompatibility. + +Paths are specified as `remote:path` + +Paths may be as deep as required, e.g. `remote:directory/subdirectory`. 
+
+## Configurations
+
+Here is an example of how to make a remote called `remote`. First run:
+
+    rclone config
+
+This will guide you through an interactive setup process:
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Proton Drive
+   \ "Proton Drive"
+[snip]
+Storage> protondrive
+User name
+user> you@protonmail.com
+Password.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank
+y/g/n> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Option 2fa.
+2FA code (if the account requires one)
+Enter a value. Press Enter to leave empty.
+2fa> 123456
+Remote config
+--------------------
+[remote]
+type = protondrive
+user = you@protonmail.com
+pass = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+**NOTE:** The Proton Drive encryption keys need to have been already generated
+after a regular login via the browser, otherwise attempting to use the
+credentials in `rclone` will fail.
+
+Once configured you can then use `rclone` like this,
+
+List directories in top level of your Proton Drive
+
+    rclone lsd remote:
+
+List all the files in your Proton Drive
+
+    rclone ls remote:
+
+To copy a local directory to a Proton Drive directory called backup
+
+    rclone copy /home/source remote:backup
+
+### Modified time
+
+Proton Drive Bridge does not support updating modification times yet.
+
+### Restricted filename characters
+
+Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), and left and
+right spaces will be removed ([code reference](https://github.com/ProtonMail/WebClients/blob/b4eba99d241af4fdae06ff7138bd651a40ef5d3c/applications/drive/src/app/store/_links/validation.ts#L51))
+
+### Duplicated files
+
+Proton Drive cannot have two files with exactly the same name and path. If a
+conflict occurs, depending on the advanced config, the file might or might not
+be overwritten.
+
+### [Mailbox password](https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password)
+
+Please set your mailbox password in the advanced config section.
+
+### Caching
+
+The cache is currently built for the case when rclone is the only instance
+performing operations on the mount point. The event system, which is the proton
+API system that provides visibility of what has changed on the drive, is yet
+to be implemented, so updates from other clients won’t be reflected in the
+cache. Thus, if there are concurrent clients accessing the same mount point,
+there is a risk of caching stale data.
+
+
+### Standard options
+
+Here are the Standard options specific to protondrive (Proton Drive).
+
+#### --protondrive-username
+
+The username of your proton account
+
+Properties:
+
+- Config: username
+- Env Var: RCLONE_PROTONDRIVE_USERNAME
+- Type: string
+- Required: true
+
+#### --protondrive-password
+
+The password of your proton account.
+
+**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). 
+ +Properties: + +- Config: password +- Env Var: RCLONE_PROTONDRIVE_PASSWORD +- Type: string +- Required: true + +#### --protondrive-2fa + +The 2FA code + +The value can also be provided with --protondrive-2fa=000000 + +The 2FA code of your proton drive account if the account is set up with +two-factor authentication + +Properties: + +- Config: 2fa +- Env Var: RCLONE_PROTONDRIVE_2FA +- Type: string +- Required: false + +### Advanced options + +Here are the Advanced options specific to protondrive (Proton Drive). + +#### --protondrive-mailbox-password + +The mailbox password of your two-password proton account. + +For more information regarding the mailbox password, please check the +following official knowledge base article: +https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password + + +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + +Properties: + +- Config: mailbox_password +- Env Var: RCLONE_PROTONDRIVE_MAILBOX_PASSWORD +- Type: string +- Required: false + +#### --protondrive-client-uid + +Client uid key (internal use only) + +Properties: + +- Config: client_uid +- Env Var: RCLONE_PROTONDRIVE_CLIENT_UID +- Type: string +- Required: false + +#### --protondrive-client-access-token + +Client access token key (internal use only) + +Properties: + +- Config: client_access_token +- Env Var: RCLONE_PROTONDRIVE_CLIENT_ACCESS_TOKEN +- Type: string +- Required: false + +#### --protondrive-client-refresh-token + +Client refresh token key (internal use only) + +Properties: + +- Config: client_refresh_token +- Env Var: RCLONE_PROTONDRIVE_CLIENT_REFRESH_TOKEN +- Type: string +- Required: false + +#### --protondrive-client-salted-key-pass + +Client salted key pass key (internal use only) + +Properties: + +- Config: client_salted_key_pass +- Env Var: RCLONE_PROTONDRIVE_CLIENT_SALTED_KEY_PASS +- Type: string +- Required: false + +#### --protondrive-encoding + +The encoding for the backend. + +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + +Properties: + +- Config: encoding +- Env Var: RCLONE_PROTONDRIVE_ENCODING +- Type: MultiEncoder +- Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot + +#### --protondrive-original-file-size + +Return the file size before encryption + +The size of the encrypted file will be different from (bigger than) the +original file size. Unless there is a reason to return the file size +after encryption is performed, otherwise, set this option to true, as +features like Open() which will need to be supplied with original content +size, will fail to operate properly + +Properties: + +- Config: original_file_size +- Env Var: RCLONE_PROTONDRIVE_ORIGINAL_FILE_SIZE +- Type: bool +- Default: true + +#### --protondrive-app-version + +The app version string + +The app version string indicates the client that is currently performing +the API request. This information is required and will be sent with every +API request. + +Properties: + +- Config: app_version +- Env Var: RCLONE_PROTONDRIVE_APP_VERSION +- Type: string +- Default: "macos-drive@1.0.0-alpha.1+rclone" + +#### --protondrive-replace-existing-draft + +Create a new revision when filename conflict is detected + +When a file upload is cancelled or failed before completion, a draft will be +created and the subsequent upload of the same file to the same location will be +reported as a conflict. 
+ +The value can also be set by --protondrive-replace-existing-draft=true + +If the option is set to true, the draft will be replaced and then the upload +operation will restart. If there are other clients also uploading at the same +file location at the same time, the behavior is currently unknown. Need to set +to true for integration tests. +If the option is set to false, an error "a draft exist - usually this means a +file is being uploaded at another client, or, there was a failed upload attempt" +will be returned, and no upload will happen. + +Properties: + +- Config: replace_existing_draft +- Env Var: RCLONE_PROTONDRIVE_REPLACE_EXISTING_DRAFT +- Type: bool +- Default: false + +#### --protondrive-enable-caching + +Caches the files and folders metadata to reduce API calls + +Notice: If you are mounting ProtonDrive as a VFS, please disable this feature, +as the current implementation doesn't update or clear the cache when there are +external changes. + +The files and folders on ProtonDrive are represented as links with keyrings, +which can be cached to improve performance and be friendly to the API server. + +The cache is currently built for the case when the rclone is the only instance +performing operations to the mount point. The event system, which is the proton +API system that provides visibility of what has changed on the drive, is yet +to be implemented, so updates from other clients won’t be reflected in the +cache. Thus, if there are concurrent clients accessing the same mount point, +then we might have a problem with caching the stale data. + +Properties: + +- Config: enable_caching +- Env Var: RCLONE_PROTONDRIVE_ENABLE_CACHING +- Type: bool +- Default: true + + + +## Limitations + +This backend uses the +[Proton-API-Bridge](https://github.com/henrybear327/Proton-API-Bridge), which +is based on [go-proton-api](https://github.com/henrybear327/go-proton-api), a +fork of the [official repo](https://github.com/ProtonMail/go-proton-api). + +There is no official API documentation available from Proton Drive. But, thanks +to Proton open sourcing [proton-go-api](https://github.com/ProtonMail/go-proton-api) +and the web, iOS, and Android client codebases, we don't need to completely +reverse engineer the APIs by observing the web client traffic! + +[proton-go-api](https://github.com/ProtonMail/go-proton-api) provides the basic +building blocks of API calls and error handling, such as 429 exponential +back-off, but it is pretty much just a barebone interface to the Proton API. +For example, the encryption and decryption of the Proton Drive file are not +provided in this library. + +The Proton-API-Bridge, attempts to bridge the gap, so rclone can be built on +top of this quickly. This codebase handles the intricate tasks before and after +calling Proton APIs, particularly the complex encryption scheme, allowing +developers to implement features for other software on top of this codebase. +There are likely quite a few errors in this library, as there isn't official +documentation available. + # put.io Paths are specified as `remote:path` @@ -39274,10 +43010,77 @@ Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid as they can't be used in JSON strings. +### Standard options + +Here are the Standard options specific to putio (Put.io). + +#### --putio-client-id + +OAuth Client Id. + +Leave blank normally. 
+
+Properties:
+
+- Config: client_id
+- Env Var: RCLONE_PUTIO_CLIENT_ID
+- Type: string
+- Required: false
+
+#### --putio-client-secret
+
+OAuth Client Secret.
+
+Leave blank normally.
+
+Properties:
+
+- Config: client_secret
+- Env Var: RCLONE_PUTIO_CLIENT_SECRET
+- Type: string
+- Required: false
+
 ### Advanced options
 
 Here are the Advanced options specific to putio (Put.io).
 
+#### --putio-token
+
+OAuth Access Token as a JSON blob.
+
+Properties:
+
+- Config: token
+- Env Var: RCLONE_PUTIO_TOKEN
+- Type: string
+- Required: false
+
+#### --putio-auth-url
+
+Auth server URL.
+
+Leave blank to use the provider defaults.
+
+Properties:
+
+- Config: auth_url
+- Env Var: RCLONE_PUTIO_AUTH_URL
+- Type: string
+- Required: false
+
+#### --putio-token-url
+
+Token server url.
+
+Leave blank to use the provider defaults.
+
+Properties:
+
+- Config: token_url
+- Env Var: RCLONE_PUTIO_TOKEN_URL
+- Type: string
+- Required: false
+
 #### --putio-encoding
 
 The encoding for the backend.
 
@@ -39302,6 +43105,357 @@ If you want to avoid ever hitting these limits, you may use the
 `--tpslimit` flag with a low number. Note that the imposed limits
 may be different for different operations, and may change over time.
 
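+
+For example, a conservative command along these lines (a sketch - the value
+of 2 transactions per second is only an illustration, not a documented
+put.io limit) should stay well clear of the rate limits:
+
+    rclone copy --tpslimit 2 /home/source remote:backup
+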
 # Seafile
 
 This is a backend for the [Seafile](https://www.seafile.com/) storage service:
 
@@ -40241,6 +44395,42 @@ Properties:
 - Type: bool
 - Default: false
 
+#### --sftp-ssh
+
+Path and arguments to external ssh binary.
+
+Normally rclone will use its internal ssh library to connect to the
+SFTP server. However it does not implement all possible ssh options so
+it may be desirable to use an external ssh binary.
+
+Rclone ignores all the internal config if you use this option and
+expects you to configure the ssh binary with the user/host/port and
+any other options you need.
+
+**Important** The ssh command must log in without asking for a
+password so needs to be configured with keys or certificates.
+
+Rclone will run the command supplied either with the additional
+arguments "-s sftp" to access the SFTP subsystem or with commands such
+as "md5sum /path/to/file" appended to read checksums.
+
+Any arguments with spaces in should be surrounded by "double quotes".
+
+An example setting might be:
+
+    ssh -o ServerAliveInterval=20 user@example.com
+
+Note that when using an external ssh binary rclone makes a new ssh
+connection for every hash it calculates.
+
+
+Properties:
+
+- Config: ssh
+- Env Var: RCLONE_SFTP_SSH
+- Type: SpaceSepList
+- Default: 
+
 ### Advanced options
 
 Here are the Advanced options specific to sftp (SSH/SFTP).
 
@@ -40293,6 +44483,18 @@ E.g. if shared folders can be found in directories representing volumes:
 
 E.g. if home directory can be found in a shared folder called "home":
 
     rclone sync /home/local/directory remote:/home/directory --sftp-path-override /volume1/homes/USER/directory
+
+To specify only the path to the SFTP remote's root, and allow rclone to add any relative subpaths automatically (including unwrapping/decrypting remotes as necessary), add the '@' character to the beginning of the path.
+
+E.g. the first example above could be rewritten as:
+
+    rclone sync /home/local/directory remote:/directory --sftp-path-override @/volume2
+
+Note that when using this method with Synology "home" folders, the full "/homes/USER" path should be specified instead of "/home".
+
+E.g. the second example above should be rewritten as:
+
+    rclone sync /home/local/directory remote:/homes/USER/directory --sftp-path-override @/volume1
 
 Properties:
 
@@ -40388,6 +44590,15 @@ Specifies the path or command to run an sftp server on the remote host.
 
 The subsystem option is ignored when server_command is defined.
 
+If adding server_command to the configuration file, please note that
+it should not be enclosed in quotes, since that will make rclone fail.
+
+A working example is:
+
+    [remote_name]
+    type = sftp
+    server_command = sudo /usr/libexec/openssh/sftp-server
+
 Properties:
 
 - Config: server_command
@@ -40626,6 +44837,24 @@ Properties:
 - Type: SpaceSepList
 - Default: 
 
+#### --sftp-socks-proxy
+
+Socks 5 proxy host.
+
+Supports the format user:pass@host:port, user@host:port, host:port.
+
+Example:
+
+    myUser:myPass@localhost:9005
+
+
+Properties:
+
+- Config: socks_proxy
+- Env Var: RCLONE_SFTP_SOCKS_PROXY
+- Type: string
+- Required: false
+
 
 ## Limitations
 
@@ -41766,24 +45995,27 @@ been seen in the uptobox web interface.
 
 # Union
 
-The `union` remote provides a unification similar to UnionFS using other remotes.
-
-Paths may be as deep as required or a local path,
-e.g. `remote:directory/subdirectory` or `/directory/subdirectory`.
+The `union` backend joins several remotes together to make a single unified view of them.
 
 During the initial setup with `rclone config` you will specify the upstream
-remotes as a space separated list. The upstream remotes can either be a local paths or other remotes.
+remotes as a space separated list. The upstream remotes can either be local
+paths or other remotes.
 
-Attribute `:ro` and `:nc` can be attach to the end of path to tag the remote as **read only** or **no create**,
-e.g. `remote:directory/subdirectory:ro` or `remote:directory/subdirectory:nc`.
+The attributes `:ro`, `:nc` and `:writeback` can be attached to the end of the remote
+to tag the remote as **read only**, **no create** or **writeback**, e.g.
+`remote:directory/subdirectory:ro` or `remote:directory/subdirectory:nc`.
+
+- `:ro` means files will only be read from here and never written
+- `:nc` means new files or directories won't be created here
+- `:writeback` means files found in different remotes will be written back here. See the [writeback section](#writeback) for more info.
 
 Subfolders can be used in upstream remotes. Assume a union remote named
 `backup` with the remotes `mydrive:private/backup`. Invoking `rclone
 mkdir backup:desktop` is exactly the same as invoking
 `rclone mkdir mydrive:private/backup/desktop`.
 
-There will be no special handling of paths containing `..` segments.
-Invoking `rclone mkdir backup:../desktop` is exactly the same as invoking
-`rclone mkdir mydrive:private/backup/../desktop`.
+There is no special handling of paths containing `..` segments. 
Invoking `rclone +mkdir backup:../desktop` is exactly the same as invoking `rclone mkdir +mydrive:private/backup/../desktop`. ## Configuration @@ -41933,6 +46165,36 @@ The policies definition are inspired by [trapexit/mergerfs](https://github.com/t | rand (random) | Calls **all** and then randomizes. Returns only one upstream. | +### Writeback {#writeback} + +The tag `:writeback` on an upstream remote can be used to make a simple cache +system like this: + +``` +[union] +type = union +action_policy = all +create_policy = all +search_policy = ff +upstreams = /local:writeback remote:dir +``` + +When files are opened for read, if the file is in `remote:dir` but not `/local` +then rclone will copy the file entirely into `/local` before returning a +reference to the file in `/local`. The copy will be done with the equivalent of +`rclone copy` so will use `--multi-thread-streams` if configured. Any copies +will be logged with an INFO log. + +When files are written, they will be written to both `remote:dir` and `/local`. + +As many remotes as desired can be added to `upstreams` but there should only be +one `:writeback` tag. + +Rclone does not manage the `:writeback` remote in any way other than writing +files back to it. So if you need to expire old files or manage the size then you +will have to do this yourself. + + ### Standard options Here are the Standard options specific to union (Union merges the contents of several upstream fs). @@ -43570,6 +47832,161 @@ Options: # Changelog +## v1.64.0 - 2023-09-11 + +[See commits](https://github.com/rclone/rclone/compare/v1.63.0...v1.64.0) + +* New backends + * [Proton Drive](https://rclone.org/protondrive/) (Chun-Hung Tseng) + * [Quatrix](https://rclone.org/quatrix/) (Oksana, Volodymyr Kit) + * New S3 providers + * [Synology C2](https://rclone.org/s3/#synology-c2) (BakaWang) + * [Leviia](https://rclone.org/s3/#leviia) (Benjamin) + * New Jottacloud providers + * [Onlime](https://rclone.org/jottacloud/) (Fjodor42) + * [Telia Sky](https://rclone.org/jottacloud/) (NoLooseEnds) +* Major changes + * Multi-thread transfers (Vitor Gomes, Nick Craig-Wood, Manoj Ghosh, Edwin Mackenzie-Owen) + * Multi-thread transfers are now available when transferring to: + * `local`, `s3`, `azureblob`, `b2`, `oracleobjectstorage` and `smb` + * This greatly improves transfer speed between two network sources. + * In memory buffering has been unified between all backends and should share memory better. 
+ * See [--multi-thread docs](https://rclone.org/docs/#multi-thread-cutoff) for more info +* New commands + * `rclone config redacted` support mechanism for showing redacted config (Nick Craig-Wood) +* New Features + * accounting + * Show server side stats in own lines and not as bytes transferred (Nick Craig-Wood) + * bisync + * Add new `--ignore-listing-checksum` flag to distinguish from `--ignore-checksum` (nielash) + * Add experimental `--resilient` mode to allow recovery from self-correctable errors (nielash) + * Add support for `--create-empty-src-dirs` (nielash) + * Dry runs no longer commit filter changes (nielash) + * Enforce `--check-access` during `--resync` (nielash) + * Apply filters correctly during deletes (nielash) + * Equality check before renaming (leave identical files alone) (nielash) + * Fix `dryRun` rc parameter being ignored (nielash) + * build + * Update to `go1.21` and make `go1.19` the minimum required version (Anagh Kumar Baranwal, Nick Craig-Wood) + * Update dependencies (Nick Craig-Wood) + * Add snap installation (hideo aoyama) + * Change Winget Releaser job to `ubuntu-latest` (sitiom) + * cmd: Refactor and use sysdnotify in more commands (eNV25) + * config: Add `--multi-thread-chunk-size` flag (Vitor Gomes) + * doc updates (antoinetran, Benjamin, Bjørn Smith, Dean Attali, gabriel-suela, James Braza, Justin Hellings, kapitainsky, Mahad, Masamune3210, Nick Craig-Wood, Nihaal Sangha, Niklas Hambüchen, Raymond Berger, r-ricci, Sawada Tsunayoshi, Tiago Boeing, Vladislav Vorobev) + * fs + * Use atomic types everywhere (Roberto Ricci) + * When `--max-transfer` limit is reached exit with code (10) (kapitainsky) + * Add rclone completion powershell - basic implementation only (Nick Craig-Wood) + * http servers: Allow CORS to be set with `--allow-origin` flag (yuudi) + * lib/rest: Remove unnecessary `nil` check (Eng Zer Jun) + * ncdu: Add keybinding to rescan filesystem (eNV25) + * rc + * Add `executeId` to job listings (yuudi) + * Add `core/du` to measure local disk usage (Nick Craig-Wood) + * Add `operations/settier` to API (Drew Stinnett) + * rclone test info: Add `--check-base32768` flag to check can store all base32768 characters (Nick Craig-Wood) + * rmdirs: Remove directories concurrently controlled by `--checkers` (Nick Craig-Wood) +* Bug Fixes + * accounting: Don't stop calculating average transfer speed until the operation is complete (Jacob Hands) + * fs: Fix `transferTime` not being set in JSON logs (Jacob Hands) + * fshttp: Fix `--bind 0.0.0.0` allowing IPv6 and `--bind ::0` allowing IPv4 (Nick Craig-Wood) + * operations: Fix overlapping check on case insensitive file systems (Nick Craig-Wood) + * serve dlna: Fix MIME type if backend can't identify it (Nick Craig-Wood) + * serve ftp: Fix race condition when using the auth proxy (Nick Craig-Wood) + * serve sftp: Fix hash calculations with `--vfs-cache-mode full` (Nick Craig-Wood) + * serve webdav: Fix error: Expecting fs.Object or fs.Directory, got `nil` (Nick Craig-Wood) + * sync: Fix lockup with `--cutoff-mode=soft` and `--max-duration` (Nick Craig-Wood) +* Mount + * fix: Mount parsing for linux (Anagh Kumar Baranwal) +* VFS + * Add `--vfs-cache-min-free-space` to control minimum free space on the disk containing the cache (Nick Craig-Wood) + * Added cache cleaner for directories to reduce memory usage (Anagh Kumar Baranwal) + * Update parent directory modtimes on vfs actions (David Pedersen) + * Keep virtual directory status accurate and reduce deadlock potential (Anagh Kumar Baranwal) + * Make sure 
struct field is aligned for atomic access (Roberto Ricci)
+* Local
+    * Rmdir: return an error if the path is not a dir (zjx20)
+* Azure Blob
+    * Implement `OpenChunkWriter` and multi-thread uploads (Nick Craig-Wood)
+    * Fix creation of directory markers (Nick Craig-Wood)
+    * Fix purging with directory markers (Nick Craig-Wood)
+* B2
+    * Implement `OpenChunkWriter` and multi-thread uploads (Nick Craig-Wood)
+    * Fix rclone link when object path contains special characters (Alishan Ladhani)
+* Box
+    * Add polling support (David Sze)
+    * Add `--box-impersonate` to impersonate a user ID (Nick Craig-Wood)
+    * Fix unhelpful decoding of error messages into decimal numbers (Nick Craig-Wood)
+* Chunker
+    * Update documentation to mention issue with small files (Ricardo D'O. Albanus)
+* Compress
+    * Fix ChangeNotify (Nick Craig-Wood)
+* Drive
+    * Add `--drive-fast-list-bug-fix` to control ListR bug workaround (Nick Craig-Wood)
+* Fichier
+    * Implement `DirMove` (Nick Craig-Wood)
+    * Fix error code parsing (alexia)
+* FTP
+    * Add socks_proxy support for SOCKS5 proxies (Zach)
+    * Fix 425 "TLS session of data connection not resumed" errors (Nick Craig-Wood)
+* Hdfs
+    * Retry "replication in progress" errors when uploading (Nick Craig-Wood)
+    * Fix uploading to the wrong object on Update with overridden remote name (Nick Craig-Wood)
+* HTTP
+    * CORS should not be sent if not set (yuudi)
+    * Fix webdav OPTIONS response (yuudi)
+* Opendrive
+    * Fix List on a just deleted and remade directory (Nick Craig-Wood)
+* Oracleobjectstorage
+    * Use rclone's rate limiter in multipart transfers (Manoj Ghosh)
+    * Implement `OpenChunkWriter` and multi-thread uploads (Manoj Ghosh)
+* S3
+    * Refactor multipart upload to use `OpenChunkWriter` and `ChunkWriter` (Vitor Gomes)
+    * Factor generic multipart upload into `lib/multipart` (Nick Craig-Wood)
+    * Fix purging of root directory with `--s3-directory-markers` (Nick Craig-Wood)
+    * Add `rclone backend set` command to update the running config (Nick Craig-Wood)
+    * Add `rclone backend restore-status` command (Nick Craig-Wood)
+* SFTP
+    * Stop uploads re-using the same ssh connection to improve performance (Nick Craig-Wood)
+    * Add `--sftp-ssh` to specify an external ssh binary to use (Nick Craig-Wood)
+    * Add socks_proxy support for SOCKS5 proxies (Zach)
+    * Support dynamic `--sftp-path-override` (nielash)
+    * Fix spurious warning when using `--sftp-ssh` (Nick Craig-Wood)
+* Smb
+    * Implement multi-threaded writes for copies to smb (Edwin Mackenzie-Owen)
+* Storj
+    * Performance improvement for large file uploads (Kaloyan Raev)
+* Swift
+    * Fix HEADing 0-length objects when `--swift-no-large-objects` set (Julian Lepinski)
+* Union
+    * Add `:writeback` to act as a simple cache (Nick Craig-Wood)
+* WebDAV
+    * Nextcloud: fix segment violation in low-level retry (Paul)
+* Zoho
+    * Remove Range requests workarounds to fix integration tests (Nick Craig-Wood)
+
+## v1.63.1 - 2023-07-17
+
+[See commits](https://github.com/rclone/rclone/compare/v1.63.0...v1.63.1)
+
+* Bug Fixes
+    * build: Fix macos builds for versions < 12 (Anagh Kumar Baranwal)
+    * dirtree: Fix performance with large directories of directories and `--fast-list` (Nick Craig-Wood)
+    * operations
+        * Fix deadlock when using `lsd`/`ls` with `--progress` (Nick Craig-Wood)
+        * Fix `.rclonelink` files not being converted back to symlinks (Nick Craig-Wood)
+    * doc fixes (Dean Attali, Mahad, Nick Craig-Wood, Sawada Tsunayoshi, Vladislav Vorobev)
+* Local
+    * Fix partial directory read for corrupted filesystem (Nick Craig-Wood) 
+* Box + * Fix reconnect failing with HTTP 400 Bad Request (albertony) +* Smb + * Fix "Statfs failed: bucket or container name is needed" when mounting (Nick Craig-Wood) +* WebDAV + * Nextcloud: fix must use /dav/files/USER endpoint not /webdav error (Paul) + * Nextcloud chunking: add more guidance for the user to check the config (darix) + ## v1.63.0 - 2023-06-30 [See commits](https://github.com/rclone/rclone/compare/v1.62.0...v1.63.0) @@ -49035,7 +53452,6 @@ put them back in again.` >}} * Chris Nelson * Felix Bünemann * Atílio Antônio - * Roberto Ricci * Carlo Mion * Chris Lu * Vitor Arruda @@ -49232,33 +53648,80 @@ put them back in again.` >}} * Peter Fern * zzq * mac-15 + * Sawada Tsunayoshi <34431649+TsunayoshiSawada@users.noreply.github.com> + * Dean Attali + * Fjodor42 + * BakaWang + * Mahad <56235065+Mahad-lab@users.noreply.github.com> + * Vladislav Vorobev + * darix + * Benjamin <36415086+bbenjamin-sys@users.noreply.github.com> + * Chun-Hung Tseng + * Ricardo D'O. Albanus + * gabriel-suela + * Tiago Boeing + * Edwin Mackenzie-Owen + * Niklas Hambüchen + * yuudi + * Zach + * nielash <31582349+nielash@users.noreply.github.com> + * Julian Lepinski + * Raymond Berger + * Nihaal Sangha + * Masamune3210 <1053504+Masamune3210@users.noreply.github.com> + * James Braza + * antoinetran + * alexia + * nielash + * Vitor Gomes + * Jacob Hands + * hideo aoyama <100831251+boukendesho@users.noreply.github.com> + * Roberto Ricci + * Bjørn Smith + * Alishan Ladhani <8869764+aladh@users.noreply.github.com> + * zjx20 + * Oksana <142890647+oks-maytech@users.noreply.github.com> + * Volodymyr Kit + * David Pedersen + * Drew Stinnett -# Contact the rclone project # +# Contact the rclone project -## Forum ## +## Forum Forum for questions and general discussion: - * https://forum.rclone.org +- https://forum.rclone.org -## GitHub repository ## +## Business support + +For business support or sponsorship enquiries please see: + +- https://rclone.com/ +- sponsorship@rclone.com + +## GitHub repository The project's repository is located at: - * https://github.com/rclone/rclone +- https://github.com/rclone/rclone There you can file bug reports or contribute with pull requests. -## Twitter ## +## Twitter -You can also follow me on twitter for rclone announcements: +You can also follow Nick on twitter for rclone announcements: - * [@njcw](https://twitter.com/njcw) +- [@njcw](https://twitter.com/njcw) -## Email ## +## Email Or if all else fails or you want to ask something private or -confidential email [Nick Craig-Wood](mailto:nick@craig-wood.com). -Please don't email me requests for help - those are better directed to -the forum. Thanks! +confidential + +- info@rclone.com + +Please don't email requests for help to this address - those are +better directed to the forum unless you'd like to sign up for business +support. diff --git a/MANUAL.txt b/MANUAL.txt index 54b10d5af..2d0f92187 100644 --- a/MANUAL.txt +++ b/MANUAL.txt @@ -1,6 +1,6 @@ rclone(1) User Manual Nick Craig-Wood -Jun 30, 2023 +Sep 11, 2023 Rclone syncs your files to cloud storage @@ -16,7 +16,7 @@ About rclone Rclone is a command-line program to manage files on cloud storage. It is a feature-rich alternative to cloud vendors' web storage interfaces. -Over 40 cloud storage products support rclone including S3 object +Over 70 cloud storage products support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols. @@ -122,6 +122,7 @@ S3, that work out of the box.) 
- IDrive e2 - IONOS Cloud - Koofr +- Leviia Object Storage - Liara Object Storage - Mail.ru Cloud - Memset Memstore @@ -143,8 +144,10 @@ S3, that work out of the box.) - PikPak - premiumize.me - put.io +- Proton Drive - QingStor - Qiniu Cloud Object Storage (Kodo) +- Quatrix by Maytech - Rackspace Cloud Files - rsync.net - Scaleway @@ -156,6 +159,7 @@ S3, that work out of the box.) - SMB / CIFS - StackPath - Storj +- Synology - SugarSync - Tencent Cloud Object Storage (COS) - Uptobox @@ -204,6 +208,9 @@ See the usage docs for how to use rclone, or run rclone -h. Already installed rclone can be easily updated to the latest version using the rclone selfupdate command. +See the release signing docs for how to verify signatures on the +release. + Script installation To install rclone on Linux/macOS/BSD systems, run: @@ -468,6 +475,31 @@ Here are some commands tested on an Ubuntu 18.04.3 host: ls ~/data/mount kill %1 +Snap installation + +[Get it from the Snap Store] + +Make sure you have Snapd installed + + $ sudo snap install rclone + +Due to the strict confinement of Snap, rclone snap cannot acess real +/home/$USER/.config/rclone directory, default config path is as below. + +- Default config directory: + - /home/$USER/snap/rclone/current/.config/rclone + +Note: Due to the strict confinement of Snap, rclone mount feature is not +supported. + +If mounting is wanted, either install a precompiled binary or enable the +relevant option when installing from source. + +Note that this is controlled by community maintainer not the rclone +developers so it may be out of date. Its current version is as below. + +[rclone] + Source installation Make sure you have git and Go installed. Go version 1.17 or newer is @@ -789,7 +821,9 @@ See the following for detailed instructions for - PikPak - premiumize.me - put.io +- Proton Drive - QingStor +- Quatrix by Maytech - Seafile - SFTP - Sia @@ -854,6 +888,7 @@ SEE ALSO - rclone config delete - Delete an existing remote. - rclone config disconnect - Disconnects user from remote - rclone config dump - Dump the config file as JSON. +- rclone config edit - Enter an interactive configuration session. - rclone config file - Show path of configuration file in use. - rclone config password - Update password in an existing remote. - rclone config paths - Show paths used for configuration, cache, temp @@ -861,6 +896,8 @@ SEE ALSO - rclone config providers - List in JSON format all the providers and options. - rclone config reconnect - Re-authenticates user with remote. +- rclone config redacted - Print redacted (decrypted) config file, or + the redacted config for a single remote. - rclone config show - Print (decrypted) config file, or the config for a single remote. - rclone config touch - Ensure configuration file exists. @@ -935,6 +972,83 @@ Options --create-empty-src-dirs Create empty source dirs on destination after copy -h, --help help for copy +Copy Options + +Flags for anything which can Copy a file. + + --check-first Do all the checks before starting transfers + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). 
+ --compare-dest stringArray Include additional comma separated server-side paths during comparison + --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination + --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD") + --ignore-case-sync Ignore case when synchronizing + --ignore-checksum Skip post copy check of checksums + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files, fail if existing files have been modified + --inplace Download directly to destination file instead of atomic download to temp/rename + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) + --max-duration Duration Maximum duration rclone will transfer data for (default 0s) + --max-transfer SizeSuffix Maximum size of data to transfer (default off) + -M, --metadata If set, preserve metadata when copying objects + --modify-window Duration Max time diff to be considered the same (default 1ns) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) + --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) + --no-check-dest Don't check the destination, copy regardless + --no-traverse Don't traverse destination file system on copy + --no-update-modtime Don't update destination mod-time if files identical + --order-by string Instructions on how to order the transfers, e.g. 'size,descending' + --refresh-times Refresh the modtime of remote files + --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs + --size-only Skip based on size only, not mod-time or checksum + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki) + -u, --update Skip files that are newer on the destination + +Important Options + +Important flags useful for most commands. + + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) + +Filter Options + +Flags for filtering directory listings. 
+ + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + +Listing Options + +Flags for listing directories. + + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions + See the global flags page for global options not listed here. SEE ALSO @@ -989,6 +1103,99 @@ Options --create-empty-src-dirs Create empty source dirs on destination after sync -h, --help help for sync +Copy Options + +Flags for anything which can Copy a file. + + --check-first Do all the checks before starting transfers + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). 
+ --compare-dest stringArray Include additional comma separated server-side paths during comparison + --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination + --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD") + --ignore-case-sync Ignore case when synchronizing + --ignore-checksum Skip post copy check of checksums + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files, fail if existing files have been modified + --inplace Download directly to destination file instead of atomic download to temp/rename + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) + --max-duration Duration Maximum duration rclone will transfer data for (default 0s) + --max-transfer SizeSuffix Maximum size of data to transfer (default off) + -M, --metadata If set, preserve metadata when copying objects + --modify-window Duration Max time diff to be considered the same (default 1ns) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) + --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) + --no-check-dest Don't check the destination, copy regardless + --no-traverse Don't traverse destination file system on copy + --no-update-modtime Don't update destination mod-time if files identical + --order-by string Instructions on how to order the transfers, e.g. 'size,descending' + --refresh-times Refresh the modtime of remote files + --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs + --size-only Skip based on size only, not mod-time or checksum + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki) + -u, --update Skip files that are newer on the destination + +Sync Options + +Flags just used for rclone sync. + + --backup-dir string Make backups into hierarchy based in DIR + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --ignore-errors Delete even if there are I/O errors + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off) + --suffix string Suffix to add to changed files + --suffix-keep-extension Preserve the extension when using --suffix + --track-renames When synchronizing, track file renames and do a server-side move if possible + --track-renames-strategy string Strategies to use when synchronizing using track-renames hash|modtime|leaf (default "hash") + +Important Options + +Important flags useful for most commands. 
+ + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) + +Filter Options + +Flags for filtering directory listings. + + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + +Listing Options + +Flags for listing directories. + + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions + See the global flags page for global options not listed here. SEE ALSO @@ -1035,6 +1242,83 @@ Options --delete-empty-src-dirs Delete empty source dirs after move -h, --help help for move +Copy Options + +Flags for anything which can Copy a file. + + --check-first Do all the checks before starting transfers + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). 
+ --compare-dest stringArray Include additional comma separated server-side paths during comparison + --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination + --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD") + --ignore-case-sync Ignore case when synchronizing + --ignore-checksum Skip post copy check of checksums + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files, fail if existing files have been modified + --inplace Download directly to destination file instead of atomic download to temp/rename + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) + --max-duration Duration Maximum duration rclone will transfer data for (default 0s) + --max-transfer SizeSuffix Maximum size of data to transfer (default off) + -M, --metadata If set, preserve metadata when copying objects + --modify-window Duration Max time diff to be considered the same (default 1ns) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) + --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) + --no-check-dest Don't check the destination, copy regardless + --no-traverse Don't traverse destination file system on copy + --no-update-modtime Don't update destination mod-time if files identical + --order-by string Instructions on how to order the transfers, e.g. 'size,descending' + --refresh-times Refresh the modtime of remote files + --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs + --size-only Skip based on size only, not mod-time or checksum + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki) + -u, --update Skip files that are newer on the destination + +Important Options + +Important flags useful for most commands. + + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) + +Filter Options + +Flags for filtering directory listings. 
+ + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + +Listing Options + +Flags for listing directories. + + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions + See the global flags page for global options not listed here. SEE ALSO @@ -1081,6 +1365,48 @@ Options -h, --help help for delete --rmdirs rmdirs removes empty directories but leaves root intact +Important Options + +Important flags useful for most commands. + + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) + +Filter Options + +Flags for filtering directory listings. 
+ + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + +Listing Options + +Flags for listing directories. + + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions + See the global flags page for global options not listed here. SEE ALSO @@ -1107,6 +1433,14 @@ Options -h, --help help for purge +Important Options + +Important flags useful for most commands. + + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) + See the global flags page for global options not listed here. SEE ALSO @@ -1123,6 +1457,14 @@ Options -h, --help help for mkdir +Important Options + +Important flags useful for most commands. + + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) + See the global flags page for global options not listed here. SEE ALSO @@ -1147,6 +1489,14 @@ Options -h, --help help for rmdir +Important Options + +Important flags useful for most commands. + + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) + See the global flags page for global options not listed here. SEE ALSO @@ -1221,6 +1571,46 @@ Options --missing-on-src string Report all files missing from the source to this file --one-way Check one way only, source files must exist on remote +Check Options + +Flags used for rclone check. 
+ + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) + +Filter Options + +Flags for filtering directory listings. + + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + +Listing Options + +Flags for listing directories. + + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions + See the global flags page for global options not listed here. SEE ALSO @@ -1273,6 +1663,40 @@ Options -h, --help help for ls +Filter Options + +Flags for filtering directory listings. 
+ + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + +Listing Options + +Flags for listing directories. + + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions + See the global flags page for global options not listed here. SEE ALSO @@ -1336,6 +1760,40 @@ Options -h, --help help for lsd -R, --recursive Recurse into the listing +Filter Options + +Flags for filtering directory listings. 
+ + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + +Listing Options + +Flags for listing directories. + + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions + See the global flags page for global options not listed here. SEE ALSO @@ -1389,6 +1847,40 @@ Options -h, --help help for lsl +Filter Options + +Flags for filtering directory listings. 
+ + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + +Listing Options + +Flags for listing directories. + + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions + See the global flags page for global options not listed here. SEE ALSO @@ -1428,6 +1920,40 @@ Options -h, --help help for md5sum --output-file string Output hashsums to a file rather than the terminal +Filter Options + +Flags for filtering directory listings. 
+ + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + +Listing Options + +Flags for listing directories. + + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions + See the global flags page for global options not listed here. SEE ALSO @@ -1470,6 +1996,40 @@ Options -h, --help help for sha1sum --output-file string Output hashsums to a file rather than the terminal +Filter Options + +Flags for filtering directory listings. 
+ + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + +Listing Options + +Flags for listing directories. + + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions + See the global flags page for global options not listed here. SEE ALSO @@ -1504,6 +2064,40 @@ Options -h, --help help for size --json Format output as JSON +Filter Options + +Flags for filtering directory listings. 
+ + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + +Listing Options + +Flags for listing directories. + + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions + See the global flags page for global options not listed here. SEE ALSO @@ -1580,6 +2174,14 @@ Options -h, --help help for cleanup +Important Options + +Important flags useful for most commands. + + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) + See the global flags page for global options not listed here. SEE ALSO @@ -1716,6 +2318,14 @@ Options --dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|largest|smallest|rename (default "interactive") -h, --help help for dedupe +Important Options + +Important flags useful for most commands. + + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) + See the global flags page for global options not listed here. SEE ALSO @@ -1858,6 +2468,14 @@ Options --json Always output in JSON format -o, --option stringArray Option in the form name=value or name +Important Options + +Important flags useful for most commands. + + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) + See the global flags page for global options not listed here. SEE ALSO @@ -1884,17 +2502,90 @@ See full bisync description for details. 
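+As a minimal sketch (/path/to/local and remote:path are illustrative),
+the first run must use --resync to establish the baseline listings,
+after which plain runs propagate changes in both directions:
+
+    # First run only: build the initial listings.
+    rclone bisync /path/to/local remote:path --resync
+
+    # Subsequent runs (consider --dry-run or --verbose first).
+    rclone bisync /path/to/local remote:path
+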
Options - --check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort. - --check-filename string Filename for --check-access (default: RCLONE_TEST) - --check-sync string Controls comparison of final listings: true|false|only (default: true) (default "true") - --filters-file string Read filtering patterns from a file - --force Bypass --max-delete safety check and run the sync. Consider using with --verbose - -h, --help help for bisync - --localtime Use local time in listings (default: UTC) - --no-cleanup Retain working files (useful for troubleshooting and testing). - --remove-empty-dirs Remove empty directories at the final cleanup step. - -1, --resync Performs the resync run. Path1 files may overwrite Path2 versions. Consider using --verbose or --dry-run first. - --workdir string Use custom working dir - useful for testing. (default: $HOME/.cache/rclone/bisync) + --check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort. + --check-filename string Filename for --check-access (default: RCLONE_TEST) + --check-sync string Controls comparison of final listings: true|false|only (default: true) (default "true") + --create-empty-src-dirs Sync creation and deletion of empty directories. (Not compatible with --remove-empty-dirs) + --filters-file string Read filtering patterns from a file + --force Bypass --max-delete safety check and run the sync. Consider using with --verbose + -h, --help help for bisync + --ignore-listing-checksum Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks) + --localtime Use local time in listings (default: UTC) + --no-cleanup Retain working files (useful for troubleshooting and testing). + --remove-empty-dirs Remove ALL empty directories at the final cleanup step. + --resilient Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk! + -1, --resync Performs the resync run. Path1 files may overwrite Path2 versions. Consider using --verbose or --dry-run first. + --workdir string Use custom working dir - useful for testing. (default: $HOME/.cache/rclone/bisync) + +Copy Options + +Flags for anything which can Copy a file. + + --check-first Do all the checks before starting transfers + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). 
+ --compare-dest stringArray Include additional comma separated server-side paths during comparison + --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination + --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD") + --ignore-case-sync Ignore case when synchronizing + --ignore-checksum Skip post copy check of checksums + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files, fail if existing files have been modified + --inplace Download directly to destination file instead of atomic download to temp/rename + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) + --max-duration Duration Maximum duration rclone will transfer data for (default 0s) + --max-transfer SizeSuffix Maximum size of data to transfer (default off) + -M, --metadata If set, preserve metadata when copying objects + --modify-window Duration Max time diff to be considered the same (default 1ns) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) + --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) + --no-check-dest Don't check the destination, copy regardless + --no-traverse Don't traverse destination file system on copy + --no-update-modtime Don't update destination mod-time if files identical + --order-by string Instructions on how to order the transfers, e.g. 'size,descending' + --refresh-times Refresh the modtime of remote files + --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs + --size-only Skip based on size only, not mod-time or checksum + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki) + -u, --update Skip files that are newer on the destination + +Important Options + +Important flags useful for most commands. + + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) + +Filter Options + +Flags for filtering directory listings. 
+ + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) See the global flags page for global options not listed here. @@ -1951,6 +2642,40 @@ Options --separator string Separator to use between objects when printing multiple files --tail int Only print the last N characters +Filter Options + +Flags for filtering directory listings. 
+ + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + +Listing Options + +Flags for listing directories. + + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions + See the global flags page for global options not listed here. SEE ALSO @@ -2017,6 +2742,40 @@ Options --missing-on-src string Report all files missing from the source to this file --one-way Check one way only, source files must exist on remote +Filter Options + +Flags for filtering directory listings. 
+ + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + +Listing Options + +Flags for listing directories. + + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions + See the global flags page for global options not listed here. SEE ALSO @@ -2043,6 +2802,8 @@ SEE ALSO - rclone - Show help for rclone commands, flags and backends. - rclone completion bash - Output bash completion script for rclone. - rclone completion fish - Output fish completion script for rclone. +- rclone completion powershell - Output powershell completion script + for rclone. - rclone completion zsh - Output zsh completion script for rclone. rclone completion bash @@ -2115,7 +2876,7 @@ SEE ALSO rclone completion powershell -Generate the autocompletion script for powershell +Output powershell completion script for rclone. Synopsis @@ -2128,19 +2889,20 @@ To load completions in your current shell session: To load completions for every new session, add the output of the above command to your powershell profile. - rclone completion powershell [flags] +If output_file is "-" or missing, then the output will be written to +stdout. + + rclone completion powershell [output_file] [flags] Options - -h, --help help for powershell - --no-descriptions disable completion descriptions + -h, --help help for powershell See the global flags page for global options not listed here. SEE ALSO -- rclone completion - Generate the autocompletion script for the - specified shell +- rclone completion - Output completion script for a given shell. 
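+
+As a brief usage sketch for the above (the output file name is
+illustrative), the script can also be written to a file and then loaded
+from your PowerShell profile:
+
+    rclone completion powershell rclone_completion.ps1
+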
rclone completion zsh @@ -2481,6 +3243,36 @@ SEE ALSO - rclone config - Enter an interactive configuration session. +rclone config redacted + +Print redacted (decrypted) config file, or the redacted config for a +single remote. + +Synopsis + +This prints a redacted copy of the config file, either the whole config +file or for a given remote. + +The config file will be redacted by replacing all passwords and other +sensitive info with XXX. + +This makes the config file suitable for posting online for support. + +It should be double checked before posting as the redaction may not be +perfect. + + rclone config redacted [] [flags] + +Options + + -h, --help help for redacted + +See the global flags page for global options not listed here. + +SEE ALSO + +- rclone config - Enter an interactive configuration session. + rclone config show Print (decrypted) config file, or the config for a single remote. @@ -2700,6 +3492,83 @@ Options -h, --help help for copyto +Copy Options + +Flags for anything which can Copy a file. + + --check-first Do all the checks before starting transfers + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). + --compare-dest stringArray Include additional comma separated server-side paths during comparison + --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination + --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD") + --ignore-case-sync Ignore case when synchronizing + --ignore-checksum Skip post copy check of checksums + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files, fail if existing files have been modified + --inplace Download directly to destination file instead of atomic download to temp/rename + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) + --max-duration Duration Maximum duration rclone will transfer data for (default 0s) + --max-transfer SizeSuffix Maximum size of data to transfer (default off) + -M, --metadata If set, preserve metadata when copying objects + --modify-window Duration Max time diff to be considered the same (default 1ns) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) + --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) + --no-check-dest Don't check the destination, copy regardless + --no-traverse Don't traverse destination file system on copy + --no-update-modtime Don't update destination mod-time if files identical + --order-by string Instructions on how to order the transfers, e.g. 'size,descending' + --refresh-times Refresh the modtime of remote files + --server-side-across-configs Allow server-side operations (e.g. 
copy) to work across different configs + --size-only Skip based on size only, not mod-time or checksum + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki) + -u, --update Skip files that are newer on the destination + +Important Options + +Important flags useful for most commands. + + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) + +Filter Options + +Flags for filtering directory listings. + + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + +Listing Options + +Flags for listing directories. + + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions + See the global flags page for global options not listed here. SEE ALSO @@ -2739,6 +3608,14 @@ Options -p, --print-filename Print the resulting name from --auto-filename --stdout Write the output to stdout rather than a file +Important Options + +Important flags useful for most commands. + + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) + See the global flags page for global options not listed here. SEE ALSO @@ -2816,6 +3693,46 @@ Options --missing-on-src string Report all files missing from the source to this file --one-way Check one way only, source files must exist on remote +Check Options + +Flags used for rclone check. 
+ + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) + +Filter Options + +Flags for filtering directory listings. + + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + +Listing Options + +Flags for listing directories. + + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions + See the global flags page for global options not listed here. SEE ALSO @@ -2872,6 +3789,14 @@ Options -h, --help help for deletefile +Important Options + +Important flags useful for most commands. + + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) + See the global flags page for global options not listed here. SEE ALSO @@ -3081,6 +4006,40 @@ Options -h, --help help for hashsum --output-file string Output hashsums to a file rather than the terminal +Filter Options + +Flags for filtering directory listings. 
+ + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + +Listing Options + +Flags for listing directories. + + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions + See the global flags page for global options not listed here. SEE ALSO @@ -3290,6 +4249,40 @@ Options -R, --recursive Recurse into the listing -s, --separator string Separator for the items in the format (default ";") +Filter Options + +Flags for filtering directory listings. 
+ + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + +Listing Options + +Flags for listing directories. + + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions + See the global flags page for global options not listed here. SEE ALSO @@ -3415,6 +4408,40 @@ Options -R, --recursive Recurse into the listing --stat Just return the info for the pointed to file +Filter Options + +Flags for filtering directory listings. 
+ + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + +Listing Options + +Flags for listing directories. + + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions + See the global flags page for global options not listed here. SEE ALSO @@ -3952,12 +4979,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. - --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) - --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) - --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) - --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. 
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)

If run with -vv rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@@ -3973,14 +5001,14 @@
and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.

-If using --vfs-cache-max-size note that the cache may exceed this size
-for two reasons. Firstly because it is only checked every
---vfs-cache-poll-interval. Secondly because open files cannot be evicted
-from the cache. When --vfs-cache-max-size is exceeded, rclone will
-attempt to evict the least accessed files from the cache first. rclone
-will start with files that haven't been accessed for the longest. This
-cache flushing strategy is efficient and more relevant files are likely
-to remain cached.
+If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
+the cache may exceed these quotas for two reasons. Firstly because it is
+only checked every --vfs-cache-poll-interval. Secondly because open
+files cannot be evicted from the cache. When --vfs-cache-max-size or
+--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
+least accessed files from the cache first. rclone will start with files
+that haven't been accessed for the longest. This cache flushing strategy
+is efficient and more relevant files are likely to remain cached.

The --vfs-cache-max-age will evict files from the cache after the set
time since last access has passed. The default value of 1 hour will
@@ -4245,6 +5273,7 @@ Options
 --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
 --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
 --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
 --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
 --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
 --vfs-case-insensitive If a file name not found, find a case insensitive match
@@ -4260,6 +5289,33 @@ Options
 --volname string Set the volume name (supported on Windows and OSX only)
 --write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)

+Filter Options
+
+Flags for filtering directory listings.
+ + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + See the global flags page for global options not listed here. SEE ALSO @@ -4309,6 +5365,83 @@ Options -h, --help help for moveto +Copy Options + +Flags for anything which can Copy a file. + + --check-first Do all the checks before starting transfers + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). 
+ --compare-dest stringArray Include additional comma separated server-side paths during comparison + --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination + --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD") + --ignore-case-sync Ignore case when synchronizing + --ignore-checksum Skip post copy check of checksums + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files, fail if existing files have been modified + --inplace Download directly to destination file instead of atomic download to temp/rename + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) + --max-duration Duration Maximum duration rclone will transfer data for (default 0s) + --max-transfer SizeSuffix Maximum size of data to transfer (default off) + -M, --metadata If set, preserve metadata when copying objects + --modify-window Duration Max time diff to be considered the same (default 1ns) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) + --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) + --no-check-dest Don't check the destination, copy regardless + --no-traverse Don't traverse destination file system on copy + --no-update-modtime Don't update destination mod-time if files identical + --order-by string Instructions on how to order the transfers, e.g. 'size,descending' + --refresh-times Refresh the modtime of remote files + --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs + --size-only Skip based on size only, not mod-time or checksum + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki) + -u, --update Skip files that are newer on the destination + +Important Options + +Important flags useful for most commands. + + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) + +Filter Options + +Flags for filtering directory listings. 
+ + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + +Listing Options + +Flags for listing directories. + + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions + See the global flags page for global options not listed here. SEE ALSO @@ -4349,6 +5482,7 @@ toggle the help on and off. The supported keys are: y copy current path to clipboard Y display current path ^L refresh screen (fix screen corruption) + r recalculate file sizes ? to toggle help on and off q/ESC/^c to quit @@ -4383,6 +5517,40 @@ Options -h, --help help for ncdu +Filter Options + +Flags for filtering directory listings. 
+ + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + +Listing Options + +Flags for listing directories. + + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions + See the global flags page for global options not listed here. SEE ALSO @@ -4547,6 +5715,14 @@ Options -h, --help help for rcat --size int File size hint to preallocate (default -1) +Important Options + +Important flags useful for most commands. + + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) + See the global flags page for global options not listed here. SEE ALSO @@ -4702,6 +5878,39 @@ Options -h, --help help for rcd +RC Options + +Flags to control the Remote Control API. 
+ + --rc Enable the remote control server + --rc-addr stringArray IPaddress:Port or :Port to bind server to (default [localhost:5572]) + --rc-allow-origin string Origin which cross-domain request (CORS) can be executed from + --rc-baseurl string Prefix for URLs - leave blank for root + --rc-cert string TLS PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-enable-metrics Enable prometheus metrics on /metrics + --rc-files string Path to local files to serve on the HTTP server + --rc-htpasswd string A htpasswd file - if not provided no authentication is done + --rc-job-expire-duration Duration Expire finished async jobs older than this value (default 1m0s) + --rc-job-expire-interval Duration Interval to check for expired async jobs (default 10s) + --rc-key string TLS PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-min-tls-version string Minimum TLS version that is acceptable (default "tls1.0") + --rc-no-auth Don't require auth for certain methods + --rc-pass string Password for authentication + --rc-realm string Realm for authentication + --rc-salt string Password hashing salt (default "dlPL2MqE") + --rc-serve Enable the serving of remote objects + --rc-server-read-timeout Duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s) + --rc-template string User-specified template + --rc-user string User name for authentication + --rc-web-fetch-url string URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest") + --rc-web-gui Launch WebGUI on localhost + --rc-web-gui-force-update Force update to latest version of web gui + --rc-web-gui-no-open-browser Don't open the browser automatically + --rc-web-gui-update Check and update to latest version of web gui + See the global flags page for global options not listed here. SEE ALSO @@ -4726,7 +5935,10 @@ This is useful for tidying up remotes that rclone has left a lot of empty directories in. For example the delete command will delete files but leave the directory structure (unless used with option --rmdirs). -To delete a path and any objects in it, use purge command. +This will delete --checkers directories concurrently so if you have +thousands of empty directories consider increasing this number. + +To delete a path and any objects in it, use the purge command. rclone rmdirs remote:path [flags] @@ -4735,6 +5947,14 @@ Options -h, --help help for rmdirs --leave-root Do not remove root directory if empty +Important Options + +Important flags useful for most commands. + + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) + See the global flags page for global options not listed here. SEE ALSO @@ -4749,7 +5969,8 @@ Synopsis This command downloads the latest release of rclone and replaces the currently running binary. The download is verified with a hashsum and -cryptographically signed signature. +cryptographically signed signature; see the release signing docs for +details. If used without flags (or with implied --stable flag), this command will install the latest stable release. However, some issues may be fixed (or @@ -4783,9 +6004,9 @@ correct for your OS) to update these too. 
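
Before updating, it can be worth previewing what would change. As a
minimal sketch (assuming the --check flag of recent rclone releases,
which prints the latest available version without installing anything):

    rclone selfupdate --check
    rclone selfupdate
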
This command with the default --package zip will update only the rclone
executable so the local manual may become inaccurate after it.

-The rclone mount command (https://rclone.org/commands/rclone_mount/) may
-or may not support extended FUSE options depending on the build and OS.
-selfupdate will refuse to update if the capability would be discarded.
+The rclone mount command may or may not support extended FUSE options
+depending on the build and OS. selfupdate will refuse to update if the
+capability would be discarded.

Note: Windows forbids deletion of a currently running executable so this
command will rename the old executable to 'rclone.old.exe' upon success.
@@ -4947,12 +6168,13 @@
write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may
find that you need one or the other or both.

- --cache-dir string Directory rclone will use for caching.
- --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
- --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
- --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
- --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
- --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+ --cache-dir string Directory rclone will use for caching.
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)

If run with -vv rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@@ -4968,14 +6190,14 @@
and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.

-If using --vfs-cache-max-size note that the cache may exceed this size
-for two reasons. Firstly because it is only checked every
---vfs-cache-poll-interval. Secondly because open files cannot be evicted
-from the cache. When --vfs-cache-max-size is exceeded, rclone will
-attempt to evict the least accessed files from the cache first. rclone
-will start with files that haven't been accessed for the longest. This
-cache flushing strategy is efficient and more relevant files are likely
-to remain cached.
+If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
+the cache may exceed these quotas for two reasons. Firstly because it is
+only checked every --vfs-cache-poll-interval. Secondly because open
+files cannot be evicted from the cache. When --vfs-cache-max-size or
+--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
+least accessed files from the cache first. rclone will start with files
+that haven't been accessed for the longest. This cache flushing strategy
+is efficient and more relevant files are likely to remain cached.
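+
+As an illustrative sketch (not a recommendation, since the right
+numbers depend on your disk), the following caps the cache at 10 GiB
+while also trying to keep at least 5 GiB of disk free:
+
+    rclone serve dlna remote:media --vfs-cache-mode full \
+        --vfs-cache-max-size 10G --vfs-cache-min-free-space 5G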
The --vfs-cache-max-age will evict files from the cache after the set time since last access has passed. The default value of 1 hour will @@ -5227,6 +6449,7 @@ Options --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match @@ -5240,6 +6463,33 @@ Options --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) +Filter Options + +Flags for filtering directory listings. + + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + See the global flags page for global options not listed here. SEE ALSO @@ -5363,12 +6613,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. 
- --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
- --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
- --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
- --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
- --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+ --cache-dir string Directory rclone will use for caching.
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)

If run with -vv rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@@ -5384,14 +6635,14 @@
and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.

-If using --vfs-cache-max-size note that the cache may exceed this size
-for two reasons. Firstly because it is only checked every
---vfs-cache-poll-interval. Secondly because open files cannot be evicted
-from the cache. When --vfs-cache-max-size is exceeded, rclone will
-attempt to evict the least accessed files from the cache first. rclone
-will start with files that haven't been accessed for the longest. This
-cache flushing strategy is efficient and more relevant files are likely
-to remain cached.
+If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
+the cache may exceed these quotas for two reasons. Firstly because it is
+only checked every --vfs-cache-poll-interval. Secondly because open
+files cannot be evicted from the cache. When --vfs-cache-max-size or
+--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
+least accessed files from the cache first. rclone will start with files
+that haven't been accessed for the longest. This cache flushing strategy
+is efficient and more relevant files are likely to remain cached.

The --vfs-cache-max-age will evict files from the cache after the set
time since last access has passed.
The default value of 1 hour will @@ -5661,6 +6912,7 @@ Options --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match @@ -5676,6 +6928,33 @@ Options --volname string Set the volume name (supported on Windows and OSX only) --write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows) +Filter Options + +Flags for filtering directory listings. + + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + See the global flags page for global options not listed here. SEE ALSO @@ -5782,12 +7061,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. 
- --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
- --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
- --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
- --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
- --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+ --cache-dir string Directory rclone will use for caching.
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)

If run with -vv rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@@ -5803,14 +7083,14 @@
and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.

-If using --vfs-cache-max-size note that the cache may exceed this size
-for two reasons. Firstly because it is only checked every
---vfs-cache-poll-interval. Secondly because open files cannot be evicted
-from the cache. When --vfs-cache-max-size is exceeded, rclone will
-attempt to evict the least accessed files from the cache first. rclone
-will start with files that haven't been accessed for the longest. This
-cache flushing strategy is efficient and more relevant files are likely
-to remain cached.
+If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
+the cache may exceed these quotas for two reasons. Firstly because it is
+only checked every --vfs-cache-poll-interval. Secondly because open
+files cannot be evicted from the cache. When --vfs-cache-max-size or
+--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
+least accessed files from the cache first. rclone will start with files
+that haven't been accessed for the longest. This cache flushing strategy
+is efficient and more relevant files are likely to remain cached.

The --vfs-cache-max-age will evict files from the cache after the set
time since last access has passed.
The default value of 1 hour will @@ -6136,6 +7416,7 @@ Options --user string User name for authentication (default "anonymous") --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match @@ -6149,6 +7430,33 @@ Options --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) +Filter Options + +Flags for filtering directory listings. + + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + See the global flags page for global options not listed here. SEE ALSO @@ -6372,12 +7680,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. 
- --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
- --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
- --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
- --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
- --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+ --cache-dir string Directory rclone will use for caching.
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)

If run with -vv rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@@ -6393,14 +7702,14 @@
and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.

-If using --vfs-cache-max-size note that the cache may exceed this size
-for two reasons. Firstly because it is only checked every
---vfs-cache-poll-interval. Secondly because open files cannot be evicted
-from the cache. When --vfs-cache-max-size is exceeded, rclone will
-attempt to evict the least accessed files from the cache first. rclone
-will start with files that haven't been accessed for the longest. This
-cache flushing strategy is efficient and more relevant files are likely
-to remain cached.
+If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
+the cache may exceed these quotas for two reasons. Firstly because it is
+only checked every --vfs-cache-poll-interval. Secondly because open
+files cannot be evicted from the cache. When --vfs-cache-max-size or
+--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
+least accessed files from the cache first. rclone will start with files
+that haven't been accessed for the longest. This cache flushing strategy
+is efficient and more relevant files are likely to remain cached.

The --vfs-cache-max-age will evict files from the cache after the set
time since last access has passed. The default value of 1 hour will
@@ -6705,6 +8014,7 @@ that rclone supports.
Options --addr stringArray IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080]) + --allow-origin string Origin which cross-domain request (CORS) can be executed from --auth-proxy string A program to use to create the backend from the auth --baseurl string Prefix for URLs - leave blank for root --cert string TLS PEM key (concatenation of certificate and CA certificate) @@ -6734,6 +8044,7 @@ Options --user string User name for authentication --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match @@ -6747,6 +8058,33 @@ Options --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) +Filter Options + +Flags for filtering directory listings. + + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + See the global flags page for global options not listed here. SEE ALSO @@ -6918,6 +8256,7 @@ Use --salt to change the password hashing salt from the default. 
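
As an illustrative sketch (the htpasswd path and the remote name are
placeholders), a typical authenticated server might be started with:

    rclone serve restic --addr :8080 --htpasswd ./htpasswd remote:backup

restic can then be pointed at it with a repository URL such as
rest:http://user:password@localhost:8080/.
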
Options

 --addr stringArray IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080])
+ --allow-origin string Origin which cross-domain request (CORS) can be executed from
 --append-only Disallow deletion of repository data
 --baseurl string Prefix for URLs - leave blank for root
 --cache-objects Cache listed objects (default true)
@@ -7074,12 +8413,13 @@
write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may
find that you need one or the other or both.

- --cache-dir string Directory rclone will use for caching.
- --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
- --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
- --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
- --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
- --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+ --cache-dir string Directory rclone will use for caching.
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)

If run with -vv rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@@ -7095,14 +8435,14 @@
and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.

-If using --vfs-cache-max-size note that the cache may exceed this size
-for two reasons. Firstly because it is only checked every
---vfs-cache-poll-interval. Secondly because open files cannot be evicted
-from the cache. When --vfs-cache-max-size is exceeded, rclone will
-attempt to evict the least accessed files from the cache first. rclone
-will start with files that haven't been accessed for the longest. This
-cache flushing strategy is efficient and more relevant files are likely
-to remain cached.
+If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
+the cache may exceed these quotas for two reasons. Firstly because it is
+only checked every --vfs-cache-poll-interval. Secondly because open
+files cannot be evicted from the cache. When --vfs-cache-max-size or
+--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
+least accessed files from the cache first. rclone will start with files
+that haven't been accessed for the longest. This cache flushing strategy
+is efficient and more relevant files are likely to remain cached.

The --vfs-cache-max-age will evict files from the cache after the set
time since last access has passed.
The default value of 1 hour will @@ -7428,6 +8768,7 @@ Options --user string User name for authentication --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match @@ -7441,6 +8782,33 @@ Options --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) +Filter Options + +Flags for filtering directory listings. + + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + See the global flags page for global options not listed here. SEE ALSO @@ -7694,12 +9062,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. 
- --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
- --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
- --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
- --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
- --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+ --cache-dir string Directory rclone will use for caching.
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)

If run with -vv rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@@ -7715,14 +9084,14 @@
and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.

-If using --vfs-cache-max-size note that the cache may exceed this size
-for two reasons. Firstly because it is only checked every
---vfs-cache-poll-interval. Secondly because open files cannot be evicted
-from the cache. When --vfs-cache-max-size is exceeded, rclone will
-attempt to evict the least accessed files from the cache first. rclone
-will start with files that haven't been accessed for the longest. This
-cache flushing strategy is efficient and more relevant files are likely
-to remain cached.
+If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
+the cache may exceed these quotas for two reasons. Firstly because it is
+only checked every --vfs-cache-poll-interval. Secondly because open
+files cannot be evicted from the cache. When --vfs-cache-max-size or
+--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
+least accessed files from the cache first. rclone will start with files
+that haven't been accessed for the longest. This cache flushing strategy
+is efficient and more relevant files are likely to remain cached.

The --vfs-cache-max-age will evict files from the cache after the set
time since last access has passed. The default value of 1 hour will
@@ -8027,6 +9396,7 @@ that rclone supports.
Options --addr stringArray IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080]) + --allow-origin string Origin which cross-domain request (CORS) can be executed from --auth-proxy string A program to use to create the backend from the auth --baseurl string Prefix for URLs - leave blank for root --cert string TLS PEM key (concatenation of certificate and CA certificate) @@ -8058,6 +9428,7 @@ Options --user string User name for authentication --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match @@ -8071,6 +9442,33 @@ Options --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) +Filter Options + +Flags for filtering directory listings. + + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + See the global flags page for global options not listed here. 
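+
+As a minimal illustrative sketch (assuming a configured remote named
+remote:; rclone serve http is shown here, but the same VFS flags apply
+to the other serve commands), the cache limits described in the VFS
+File Caching section above might be combined like this:
+
+    rclone serve http remote: --addr 127.0.0.1:8080 \
+        --vfs-cache-mode writes \
+        --vfs-cache-max-size 10G \
+        --vfs-cache-min-free-space 1G
+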
SEE ALSO @@ -8214,6 +9612,7 @@ NB this can create undeletable files and other hazards - use with care Options --all Run all tests + --check-base32768 Check can store all possible base32768 characters --check-control Check control characters --check-length Check max filename length --check-normalization Check UTF-8 Normalization @@ -8331,6 +9730,48 @@ Options -R, --recursive Recursively touch all files -t, --timestamp string Use specified time instead of the current time of day +Important Options + +Important flags useful for most commands. + + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) + +Filter Options + +Flags for filtering directory listings. + + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + +Listing Options + +Flags for listing directories. + + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions + See the global flags page for global options not listed here. SEE ALSO @@ -8393,6 +9834,40 @@ Options -U, --unsorted Leave files unsorted --version Sort files alphanumerically by version +Filter Options + +Flags for filtering directory listings. 
+
+ --delete-excluded Delete files on dest excluded from sync
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
+ --exclude-if-present stringArray Exclude directories if filename is present
+ --files-from stringArray Read list of source-file names from file (use - to read from stdin)
+ --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
+ -f, --filter stringArray Add a file filtering rule
+ --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --ignore-case Ignore case in filters (case insensitive)
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read file include patterns from file (use - to read from stdin)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-depth int If set limits the recursion depth to this (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
+ --metadata-exclude stringArray Exclude metadatas matching pattern
+ --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
+ --metadata-filter stringArray Add a metadata filtering rule
+ --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
+ --metadata-include stringArray Include metadatas matching pattern
+ --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
+
+Listing Options
+
+Flags for listing directories.
+
+ --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
+ --fast-list Use recursive list if available; uses more memory but fewer transactions
+
See the global flags page for global options not listed here.

SEE ALSO

@@ -8605,6 +10080,11 @@
them. This is mostly a problem on Windows, where the console
traditionally uses a non-Unicode character set - defined by the
so-called "code page".

+Do not use single character names on Windows as they create ambiguity
+with Windows drive names, e.g. a remote called C is indistinguishable
+from the C drive. Rclone will always assume that a single letter name
+refers to a drive.
+
Quoting and the shell

When you are typing commands to your computer you are using something

@@ -8909,6 +10389,9 @@
address (1.2.3.4), an IPv6 address (1234::789A) or host name. If the
host name doesn't resolve or resolves to more than one IP address it
will give an error.

+You can use --bind 0.0.0.0 to force rclone to use IPv4 addresses and
+--bind ::0 to force rclone to use IPv6 addresses.
+
--bwlimit=BANDWIDTH_SPEC

This option controls the bandwidth limit. For example

@@ -9716,24 +11199,38 @@
happen.

--max-duration=TIME

-Rclone will stop scheduling new transfers when it has run for the
-duration specified.
+Rclone will stop transferring when it has run for the duration
+specified. Defaults to off.

-Defaults to off.
+When the limit is reached all transfers will stop immediately. Use
+--cutoff-mode to modify this behaviour.

-When the limit is reached any existing transfers will complete.
-
-Rclone won't exit with an error if the transfer limit is reached.
+Rclone will exit with exit code 10 if the duration limit is reached.

--max-transfer=SIZE

Rclone will stop transferring when it has reached the size specified.
Defaults to off.

-When the limit is reached all transfers will stop immediately.
+When the limit is reached all transfers will stop immediately. Use
+--cutoff-mode to modify this behaviour.

Rclone will exit with exit code 8 if the transfer limit is reached.

+--cutoff-mode=hard|soft|cautious
+
+This modifies the behavior of --max-transfer and --max-duration.
+Defaults to --cutoff-mode=hard.
+
+Specifying --cutoff-mode=hard will stop transferring immediately when
+Rclone reaches the limit.
+
+Specifying --cutoff-mode=soft will stop starting new transfers when
+Rclone reaches the limit.
+
+Specifying --cutoff-mode=cautious will try to prevent Rclone from
+reaching the limit. Only applicable for --max-transfer.
+
-M, --metadata

Setting this flag enables rclone to copy the metadata from the source to

@@ -9745,20 +11242,6 @@
xattr etc. See the #metadata for more info.

Add metadata key = value when uploading. This can be repeated as many
times as required. See the #metadata for more info.

---cutoff-mode=hard|soft|cautious
-
-This modifies the behavior of --max-transfer Defaults to
---cutoff-mode=hard.
-
-Specifying --cutoff-mode=hard will stop transferring immediately when
-Rclone reaches the limit.
-
-Specifying --cutoff-mode=soft will stop starting new transfers when
-Rclone reaches the limit.
-
-Specifying --cutoff-mode=cautious will try to prevent Rclone from
-reaching the limit.
-
--modify-window=TIME

When checking whether a file has been modified, this is the maximum

@@ -9773,12 +11256,12 @@
This command line flag allows you to override that computed default.

--multi-thread-write-buffer-size=SIZE

-When downloading with multiple threads, rclone will buffer SIZE bytes in
-memory before writing to disk for each thread.
+When transferring with multiple threads, rclone will buffer SIZE bytes
+in memory before writing to disk for each thread.

This can improve performance if the underlying filesystem does not deal
well with a lot of small writes in different positions of the file, so
-if you see downloads being limited by disk write speed, you might want
+if you see transfers being limited by disk write speed, you might want
to experiment with different values. Especially for magnetic drives and
remote file systems a higher value can be useful.

@@ -9790,55 +11273,61 @@
As a final hint, size is not the only factor: block size (or similar
concept) can have an impact. In one case, we observed that exact
multiples of 16k performed much better than other values.

+--multi-thread-chunk-size=SizeSuffix
+
+Normally the chunk size for multi thread transfers is set by the
+backend. However, some backends such as local and smb (which implement
+OpenWriterAt but not OpenChunkWriter) don't have a natural chunk size.
+
+In this case the value of this option is used (default 64Mi).
+
--multi-thread-cutoff=SIZE

-When downloading files to the local backend above this size, rclone will
-use multiple threads to download the file (default 250M).
+When transferring files above SIZE to capable backends, rclone will use
+multiple threads to transfer the file (default 256M).

-Rclone preallocates the file (using fallocate(FALLOC_FL_KEEP_SIZE) on
-unix or NTSetInformationFile on Windows both of which takes no time)
-then each thread writes directly into the file at the correct place.
-This means that rclone won't create fragmented or sparse files and there
-won't be any assembly time at the end of the transfer.
+Capable backends are marked in the overview as MultithreadUpload. (They
+need to implement either the OpenWriterAt or OpenChunkWriter internal
+interfaces.) These include local, s3, azureblob, b2, oracleobjectstorage
+and smb at the time of writing.
-
-The number of threads used to download is controlled by
+On the local disk, rclone preallocates the file (using
+fallocate(FALLOC_FL_KEEP_SIZE) on unix or NTSetInformationFile on
+Windows, both of which take no time) then each thread writes directly
+into the file at the correct place. This means that rclone won't create
+fragmented or sparse files and there won't be any assembly time at the
+end of the transfer.
+
+The number of threads used to transfer is controlled by
--multi-thread-streams. Use -vv if you wish to see info about the
threads.

This will work with the sync/copy/move commands and friends
-copyto/moveto. Multi thread downloads will be used with rclone mount and
+copyto/moveto. Multi thread transfers will be used with rclone mount and
rclone serve if --vfs-cache-mode is set to writes or above.

-NB that this only works for a local destination but will work with any
-source.
+NB that this only works with supported backends as the destination but
+will work with any backend as the source.

-NB that multi thread copies are disabled for local to local copies as
+NB that multi-thread copies are disabled for local to local copies as
they are faster without unless --multi-thread-streams is set explicitly.

-NB on Windows using multi-thread downloads will cause the resulting
-files to be sparse. Use --local-no-sparse to disable sparse files (which
-may cause long delays at the start of downloads) or disable multi-thread
-downloads with --multi-thread-streams 0
+NB on Windows using multi-thread transfers to the local disk will cause
+the resulting files to be sparse. Use --local-no-sparse to disable
+sparse files (which may cause long delays at the start of transfers) or
+disable multi-thread transfers with --multi-thread-streams 0

--multi-thread-streams=N

-When using multi thread downloads (see above --multi-thread-cutoff) this
-sets the maximum number of streams to use. Set to 0 to disable multi
-thread downloads (Default 4).
+When using multi thread transfers (see above --multi-thread-cutoff) this
+sets the number of streams to use. Set to 0 to disable multi thread
+transfers (Default 4).

-Exactly how many streams rclone uses for the download depends on the
-size of the file. To calculate the number of download streams Rclone
-divides the size of the file by the --multi-thread-cutoff and rounds up,
-up to the maximum set with --multi-thread-streams.
-
-So if --multi-thread-cutoff 250M and --multi-thread-streams 4 are in
-effect (the defaults):
-
-- 0..250 MiB files will be downloaded with 1 stream
-- 250..500 MiB files will be downloaded with 2 streams
-- 500..750 MiB files will be downloaded with 3 streams
-- 750+ MiB files will be downloaded with 4 streams
+If the backend has a --backend-upload-concurrency setting (e.g.
+--s3-upload-concurrency) then this setting will be used as the number of
+transfers instead, if it is larger than the value of
+--multi-thread-streams or if --multi-thread-streams isn't set.
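+
+As a minimal illustrative sketch (the remote name remote:backups, the
+local path and the limit values are placeholders, not defaults), a
+large copy combining the multi-thread flags above with the transfer
+limits described earlier might look like:
+
+    rclone copy remote:backups /mnt/restore \
+        --multi-thread-cutoff 100M \
+        --multi-thread-streams 8 \
+        --max-transfer 50G --cutoff-mode soft -vv
+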
--no-check-dest

@@ -10803,6 +12292,7 @@
List of exit codes

suspended) (Fatal errors)
- 8 - Transfer exceeded - limit set by --max-transfer reached
- 9 - Operation successful, but no files transferred
+- 10 - Duration exceeded - limit set by --max-duration reached

Environment Variables

@@ -10841,7 +12331,9 @@
for each backend.

To find the name of the environment variable, you need to set, take
RCLONE_CONFIG_ + name of remote + _ + name of config file option and
-make it all uppercase.
+make it all uppercase. Note that one implication here is that the
+remote's name must be convertible into a valid environment variable
+name, so it can only contain letters, digits, or the _ (underscore)
+character.

For example, to configure an S3 remote named mys3: without a config file
(using unix ways of setting environment variables):

@@ -11055,8 +12547,8 @@
E.g. rclone copy "remote:dir*.jpg" /path/to/dir does not have a filter
effect. rclone copy remote:dir /path/to/dir --include "*.jpg" does.

Important Avoid mixing any two of --include..., --exclude... or
---filter... flags in an rclone command. The results may not be what you
-expect. Instead use a --filter... flag.
+--filter... flags in an rclone command. The results might not be what
+you expect. Instead use a --filter... flag.

Patterns for matching path/file names

@@ -11115,7 +12607,7 @@
beginning of the path/file.

- doesn't match "afile.jpg"
- doesn't match "directory/file.jpg"

-The top level of the remote may not be the top level of the drive.
+The top level of the remote might not be the top level of the drive.

E.g. for a Microsoft Windows local directory structure

@@ -11395,7 +12887,7 @@
all files on remote: excluding those in root directory dir and sub
directories.

E.g. on Microsoft Windows rclone ls remote: --exclude "*\[{JP,KR,HK}\]*"
-lists the files in remote: with [JP] or [KR] or [HK] in their name.
+lists the files in remote: without [JP] or [KR] or [HK] in their name.
Quotes prevent the shell from interpreting the \ characters. \ characters
escape the [ and ] so an rclone filter treats them literally rather than
as a character-range. The { and } define an rclone pattern list. For

@@ -12227,7 +13719,7 @@
you would pass this parameter in your JSON blob.

If using rclone rc this could be passed as

-    rclone rc operations/sync ... _config='{"CheckSum": true}'
+    rclone rc sync/sync ... _config='{"CheckSum": true}'

Any config parameters you don't set will inherit the global defaults
which were set with command line flags or environment variables.

@@ -12636,6 +14128,26 @@
Returns:

Authentication is required for this call.

+core/du: Returns disk usage of a locally attached disk.
+
+This returns the disk usage for the local directory passed in as dir.
+
+If the directory is not passed in, it defaults to the directory pointed
+to by --cache-dir.
+
+- dir - string (optional)
+
+Returns:
+
+    {
+        "dir": "/",
+        "info": {
+            "Available": 361769115648,
+            "Free": 361785892864,
+            "Total": 982141468672
+        }
+    }
+
core/gc: Runs a garbage collection.

This tells the go runtime to do a garbage collection run.
It isn't

@@ -12713,6 +14225,10 @@
Returns the following values:

    "lastError": last error string,
    "renames" : number of files renamed,
    "retryError": boolean showing whether there has been at least one non-NoRetryError,
+   "serverSideCopies": number of server side copies done,
+   "serverSideCopyBytes": number of bytes server side copied,
+   "serverSideMoves": number of server side moves done,
+   "serverSideMoveBytes": number of bytes server side moved,
    "speed": average speed in bytes per second since start of the group,
    "totalBytes": total number of bytes in the group,
    "totalChecks": total number of checks in the group,

@@ -12924,7 +14440,8 @@
Parameters: None.

Results:

-- jobids - array of integer job ids.
+- executeId - string id of rclone executing (changes after restart)
+- jobids - array of integer job ids (starting at 1 on each restart)

job/status: Reads the status of the job ID

@@ -13341,6 +14858,27 @@
See the rmdirs command for more information on the above.

Authentication is required for this call.

+operations/settier: Changes storage tier or class on all files in the path
+
+This takes the following parameters:
+
+- fs - a remote name string e.g. "drive:"
+
+See the settier command for more information on the above.
+
+Authentication is required for this call.
+
+operations/settierfile: Changes storage tier or class on the single file pointed to
+
+This takes the following parameters:
+
+- fs - a remote name string e.g. "drive:"
+- remote - a path within that remote e.g. "dir"
+
+See the settierfile command for more information on the above.
+
+Authentication is required for this call.
+
operations/size: Count the number of bytes and files in remote

This takes the following parameters:

@@ -13584,12 +15122,17 @@
This takes the following parameters

- checkFilename - file name for checkAccess (default: RCLONE_TEST)
- maxDelete - abort sync if percentage of deleted files is above this
  threshold (default: 50)
-- force - maxDelete safety check and run the sync
+- force - Bypass maxDelete safety check and run the sync
- checkSync - true by default, false disables comparison of final
  listings, only will skip sync, only compare listings from the last run
+- createEmptySrcDirs - Sync creation and deletion of empty
+  directories. (Not compatible with --remove-empty-dirs)
- removeEmptyDirs - remove empty directories at the final cleanup step
- filtersFile - read filtering patterns from a file
+- ignoreListingChecksum - Do not use checksums for listings
+- resilient - Allow future runs to retry after certain less-serious
+  errors, instead of requiring resync. Use at your own risk!
- workdir - server directory for history files (default:
  /home/ncw/.cache/rclone/bisync)
- noCleanup - retain working files

@@ -13986,7 +15529,9 @@
Here is an overview of the major features of each cloud storage system.

  PikPak MD5 R No No R -
  premiumize.me - - Yes No R -
  put.io CRC-32 R/W No Yes R -
+ Proton Drive SHA1 R/W No No R -
  QingStor MD5 - ⁹ No No R/W -
+ Quatrix by Maytech - R/W No No - -
  Seafile - - No No - -
  SFTP MD5, SHA1 ² R/W Depends No - -
  Sia - - No No - -

@@ -14007,7 +15552,7 @@
Notes

² SFTP supports checksums if the same login has shell access and md5sum
or sha1sum as well as echo are in the remote's PATH.

-³ WebDAV supports hashes when used with Fastmail Files. Owncloud and
+³ WebDAV supports hashes when used with Fastmail Files, Owncloud and
Nextcloud only.
⁴ WebDAV supports modtimes when used with Fastmail Files, Owncloud and @@ -14437,51 +15982,110 @@ Optional Features All rclone remotes support a base command set. Other features depend upon backend-specific capabilities. - Name Purge Copy Move DirMove CleanUp ListR StreamUpload LinkSharing About EmptyDir - ------------------------------ ------- ------ ------ --------- --------- ------- -------------- ------------- ------- ---------- - 1Fichier No Yes Yes No No No No Yes No Yes - Akamai Netstorage Yes No No No No Yes Yes No No Yes - Amazon Drive Yes No Yes Yes No No No No No Yes - Amazon S3 (or S3 compatible) No Yes No No Yes Yes Yes Yes No No - Backblaze B2 No Yes No No Yes Yes Yes Yes No No - Box Yes Yes Yes Yes Yes ‡‡ No Yes Yes Yes Yes - Citrix ShareFile Yes Yes Yes Yes No No No No No Yes - Dropbox Yes Yes Yes Yes No No Yes Yes Yes Yes - Enterprise File Fabric Yes Yes Yes Yes Yes No No No No Yes - FTP No No Yes Yes No No Yes No No Yes - Google Cloud Storage Yes Yes No No No Yes Yes No No No - Google Drive Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes - Google Photos No No No No No No No No No No - HDFS Yes No Yes Yes No No Yes No Yes Yes - HiDrive Yes Yes Yes Yes No No Yes No No Yes - HTTP No No No No No No No No No Yes - Internet Archive No Yes No No Yes Yes No Yes Yes No - Jottacloud Yes Yes Yes Yes Yes Yes No Yes Yes Yes - Koofr Yes Yes Yes Yes No No Yes Yes Yes Yes - Mail.ru Cloud Yes Yes Yes Yes Yes No No Yes Yes Yes - Mega Yes No Yes Yes Yes No No Yes Yes Yes - Memory No Yes No No No Yes Yes No No No - Microsoft Azure Blob Storage Yes Yes No No No Yes Yes No No No - Microsoft OneDrive Yes Yes Yes Yes Yes No No Yes Yes Yes - OpenDrive Yes Yes Yes Yes No No No No No Yes - OpenStack Swift Yes † Yes No No No Yes Yes No Yes No - Oracle Object Storage No Yes No No Yes Yes Yes No No No - pCloud Yes Yes Yes Yes Yes No No Yes Yes Yes - PikPak Yes Yes Yes Yes Yes No No Yes Yes Yes - premiumize.me Yes No Yes Yes No No No Yes Yes Yes - put.io Yes No Yes Yes Yes No Yes No Yes Yes - QingStor No Yes No No Yes Yes No No No No - Seafile Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes - SFTP No No Yes Yes No No Yes No Yes Yes - Sia No No No No No No Yes No No Yes - SMB No No Yes Yes No No Yes No No Yes - SugarSync Yes Yes Yes Yes No No Yes Yes No Yes - Storj Yes ☨ Yes Yes No No Yes Yes Yes No No - Uptobox No Yes Yes Yes No No No No No No - WebDAV Yes Yes Yes Yes No No Yes ‡ No Yes Yes - Yandex Disk Yes Yes Yes Yes Yes No Yes Yes Yes Yes - Zoho WorkDrive Yes Yes Yes Yes No No No No Yes Yes - The local filesystem Yes No Yes Yes No No Yes No Yes Yes + ------------------------------------------------------------------------------------------------------------------------------------- + Name Purge Copy Move DirMove CleanUp ListR StreamUpload MultithreadUpload LinkSharing About EmptyDir + --------------- ------- ------ ------ --------- --------- ------- -------------- ------------------- ------------- ------- ---------- + 1Fichier No Yes Yes No No No No No Yes No Yes + + Akamai Yes No No No No Yes Yes No No No Yes + Netstorage + + Amazon Drive Yes No Yes Yes No No No No No No Yes + + Amazon S3 (or No Yes No No Yes Yes Yes Yes Yes No No + S3 compatible) + + Backblaze B2 No Yes No No Yes Yes Yes Yes Yes No No + + Box Yes Yes Yes Yes Yes ‡‡ No Yes No Yes Yes Yes + + Citrix Yes Yes Yes Yes No No No No No No Yes + ShareFile + + Dropbox Yes Yes Yes Yes No No Yes No Yes Yes Yes + + Enterprise File Yes Yes Yes Yes Yes No No No No No Yes + Fabric + + FTP No No Yes Yes No No Yes No No No Yes + + Google Cloud Yes Yes No No No 
Yes Yes No No No No + Storage + + Google Drive Yes Yes Yes Yes Yes Yes Yes No Yes Yes Yes + + Google Photos No No No No No No No No No No No + + HDFS Yes No Yes Yes No No Yes No No Yes Yes + + HiDrive Yes Yes Yes Yes No No Yes No No No Yes + + HTTP No No No No No No No No No No Yes + + Internet No Yes No No Yes Yes No No Yes Yes No + Archive + + Jottacloud Yes Yes Yes Yes Yes Yes No No Yes Yes Yes + + Koofr Yes Yes Yes Yes No No Yes No Yes Yes Yes + + Mail.ru Cloud Yes Yes Yes Yes Yes No No No Yes Yes Yes + + Mega Yes No Yes Yes Yes No No No Yes Yes Yes + + Memory No Yes No No No Yes Yes No No No No + + Microsoft Azure Yes Yes No No No Yes Yes Yes No No No + Blob Storage + + Microsoft Yes Yes Yes Yes Yes No No No Yes Yes Yes + OneDrive + + OpenDrive Yes Yes Yes Yes No No No No No No Yes + + OpenStack Swift Yes † Yes No No No Yes Yes No No Yes No + + Oracle Object No Yes No No Yes Yes Yes No No No No + Storage + + pCloud Yes Yes Yes Yes Yes No No No Yes Yes Yes + + PikPak Yes Yes Yes Yes Yes No No No Yes Yes Yes + + premiumize.me Yes No Yes Yes No No No No Yes Yes Yes + + put.io Yes No Yes Yes Yes No Yes No No Yes Yes + + Proton Drive Yes No Yes Yes Yes No No No No Yes Yes + + QingStor No Yes No No Yes Yes No No No No No + + Quatrix by Yes Yes Yes Yes No No No No No Yes Yes + Maytech + + Seafile Yes Yes Yes Yes Yes Yes Yes No Yes Yes Yes + + SFTP No No Yes Yes No No Yes No No Yes Yes + + Sia No No No No No No Yes No No No Yes + + SMB No No Yes Yes No No Yes Yes No No Yes + + SugarSync Yes Yes Yes Yes No No Yes No Yes No Yes + + Storj Yes ☨ Yes Yes No No Yes Yes No Yes No No + + Uptobox No Yes Yes Yes No No No No No No No + + WebDAV Yes Yes Yes Yes No No Yes ‡ No No Yes Yes + + Yandex Disk Yes Yes Yes Yes Yes No Yes No Yes Yes Yes + + Zoho WorkDrive Yes Yes Yes Yes No No No No No Yes Yes + + The local Yes No Yes Yes No No Yes Yes No Yes Yes + filesystem + ------------------------------------------------------------------------------------------------------------------------------------- Purge @@ -14544,6 +16148,12 @@ Some remotes allow files to be uploaded without knowing the file size in advance. This allows certain operations to work without spooling the file to local disk first, e.g. rclone rcat. +MultithreadUpload + +Some remotes allow transfers to the remote to be sent as chunks in +parallel. If this is supported then rclone will use multi-thread copying +to transfer files much faster. + LinkSharing Sets the necessary permissions on a file or folder and prints a link @@ -14571,180 +16181,252 @@ Object/Bucket-based remotes do not support this. Global Flags This describes the global flags available to every rclone command split -into two groups, non backend and backend flags. +into groups. -Non Backend Flags +Copy -These flags are available for every command. +Flags for anything which can Copy a file. 
- --ask-password Allow prompt for password for encrypted configuration (default true) - --auto-confirm If enabled, do not request console confirmation - --backup-dir string Make backups into hierarchy based in DIR - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name - --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi) - --bwlimit BwTimetable Bandwidth limit in KiB/s, or use suffix B|K|M|G|T|P or a full timetable - --bwlimit-file BwTimetable Bandwidth limit per file in KiB/s, or use suffix B|K|M|G|T|P or a full timetable - --ca-cert stringArray CA certificate used to verify servers - --cache-dir string Directory rclone will use for caching (default "$HOME/.cache/rclone") --check-first Do all the checks before starting transfers - --checkers int Number of checkers to run in parallel (default 8) - -c, --checksum Skip based on checksum (if available) & size, not mod-time & size - --client-cert string Client SSL certificate (PEM) for mutual TLS auth - --client-key string Client SSL private key (PEM) for mutual TLS auth - --color string When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default "AUTO") + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). --compare-dest stringArray Include additional comma separated server-side paths during comparison - --config string Config file (default "$HOME/.config/rclone/rclone.conf") - --contimeout Duration Connect timeout (default 1m0s) --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination - --cpuprofile string Write cpu profile to file --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD") - --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features (use --disable help to see a list) - --disable-http-keep-alives Disable HTTP keep-alives and use each connection once. - --disable-http2 Disable HTTP/2 in the global transport - -n, --dry-run Do a trial run with no permanent changes - --dscp string Set DSCP value to connections, value or name, e.g. 
CS1, LE, DF, AF21 - --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP headers - may contain sensitive info - --error-on-no-transfer Sets exit code 9 if no files are transferred, useful in scripts - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) - --exclude-if-present stringArray Exclude directories if filename is present - --expect-continue-timeout Duration Timeout when using expect / 100-continue in HTTP (default 1s) - --fast-list Use recursive list if available; uses more memory but fewer transactions - --files-from stringArray Read list of source-file names from file (use - to read from stdin) - --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) - -f, --filter stringArray Add a file filtering rule - --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) - --fs-cache-expire-duration Duration Cache remotes for this long (0 to disable caching) (default 5m0s) - --fs-cache-expire-interval Duration Interval to check for expired remotes (default 1m0s) - --header stringArray Set HTTP header for all transactions - --header-download stringArray Set HTTP header for download transactions - --header-upload stringArray Set HTTP header for upload transactions - --human-readable Print numbers in a human-readable format, sizes with suffix Ki|Mi|Gi|Ti|Pi - --ignore-case Ignore case in filters (case insensitive) --ignore-case-sync Ignore case when synchronizing --ignore-checksum Skip post copy check of checksums - --ignore-errors Delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files, fail if existing files have been modified - --include stringArray Include files matching pattern - --include-from stringArray Read file include patterns from file (use - to read from stdin) --inplace Download directly to destination file instead of atomic download to temp/rename - -i, --interactive Enable interactive mode - --kv-lock-time Duration Maximum time to keep key-value database locked by process (default 1s) - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --log-systemd Activate systemd integration for the logger - --low-level-retries int Number of low level retries to do (default 10) - --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-backlog int Maximum number of objects in sync or check backlog (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off) - --max-depth int If set limits the recursion depth to this (default -1) --max-duration Duration Maximum duration rclone will transfer data for (default 0s) - --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) - --max-stats-groups int Maximum number of stats groups to keep in 
memory, on max oldest is discarded (default 1000) --max-transfer SizeSuffix Maximum size of data to transfer (default off) - --memprofile string Write memory profile to file -M, --metadata If set, preserve metadata when copying objects - --metadata-exclude stringArray Exclude metadatas matching pattern - --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) - --metadata-filter stringArray Add a metadata filtering rule - --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) - --metadata-include stringArray Include metadatas matching pattern - --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) - --metadata-set stringArray Add metadata key=value when uploading - --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) --modify-window Duration Max time diff to be considered the same (default 1ns) - --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 250Mi) - --multi-thread-streams int Max number of streams to use for multi-thread downloads (default 4) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) - --no-check-certificate Do not verify the server SSL certificate (insecure) --no-check-dest Don't check the destination, copy regardless - --no-console Hide console window (supported on Windows only) - --no-gzip-encoding Don't set Accept-Encoding: gzip --no-traverse Don't traverse destination file system on copy - --no-unicode-normalization Don't normalize unicode characters in filenames --no-update-modtime Don't update destination mod-time if files identical --order-by string Instructions on how to order the transfers, e.g. 
'size,descending' - --password-command SpaceSepList Command for supplying password for encrypted configuration - -P, --progress Show progress during transfer - --progress-terminal-title Show progress on the terminal title (requires -P/--progress) - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server - --rc-addr stringArray IPaddress:Port or :Port to bind server to (default [localhost:5572]) - --rc-allow-origin string Set the allowed origin for CORS - --rc-baseurl string Prefix for URLs - leave blank for root - --rc-cert string TLS PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-enable-metrics Enable prometheus metrics on /metrics - --rc-files string Path to local files to serve on the HTTP server - --rc-htpasswd string A htpasswd file - if not provided no authentication is done - --rc-job-expire-duration Duration Expire finished async jobs older than this value (default 1m0s) - --rc-job-expire-interval Duration Interval to check for expired async jobs (default 10s) - --rc-key string TLS PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-min-tls-version string Minimum TLS version that is acceptable (default "tls1.0") - --rc-no-auth Don't require auth for certain methods - --rc-pass string Password for authentication - --rc-realm string Realm for authentication - --rc-salt string Password hashing salt (default "dlPL2MqE") - --rc-serve Enable the serving of remote objects - --rc-server-read-timeout Duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s) - --rc-template string User-specified template - --rc-user string User name for authentication - --rc-web-fetch-url string URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest") - --rc-web-gui Launch WebGUI on localhost - --rc-web-gui-force-update Force update to latest version of web gui - --rc-web-gui-no-open-browser Don't open the browser automatically - --rc-web-gui-update Check and update to latest version of web gui --refresh-times Refresh the modtime of remote files - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep Duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable) (default 0s) --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs --size-only Skip based on size only, not mod-time or checksum - --stats Duration Interval between printing stats, e.g. 
500ms, 60s, 5m (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats (0 for no limit) (default 45) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line - --stats-one-line-date Enable --stats-one-line and add current date/time prefix - --stats-one-line-date-format string Enable --stats-one-line-date and use custom formatted date: Enclose date string in double quotes ("), see https://golang.org/pkg/time/#Time.Format - --stats-unit string Show data rate in stats as either 'bits' or 'bytes' per second (default "bytes") --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki) - --suffix string Suffix to add to changed files - --suffix-keep-extension Preserve the extension when using --suffix - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, e.g. KERN,USER,... (default "DAEMON") - --temp-dir string Directory rclone will use for temporary files (default "/tmp") - --timeout Duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this - --tpslimit-burst int Max burst of transactions for --tpslimit (default 1) - --track-renames When synchronizing, track file renames and do a server-side move if possible - --track-renames-strategy string Strategies to use when synchronizing using track-renames hash|modtime|leaf (default "hash") - --transfers int Number of file transfers to run in parallel (default 4) -u, --update Skip files that are newer on the destination - --use-cookies Enable session cookiejar - --use-json-log Use json log format - --use-mmap Use mmap allocator (see docs) - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string (default "rclone/v1.63.0") - -v, --verbose count Print lots more stuff (repeat for more) -Backend Flags +Sync -These flags are available for every command. They control the backends -and may be set in the config file. +Flags just used for rclone sync. + + --backup-dir string Make backups into hierarchy based in DIR + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --ignore-errors Delete even if there are I/O errors + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off) + --suffix string Suffix to add to changed files + --suffix-keep-extension Preserve the extension when using --suffix + --track-renames When synchronizing, track file renames and do a server-side move if possible + --track-renames-strategy string Strategies to use when synchronizing using track-renames hash|modtime|leaf (default "hash") + +Important + +Important flags useful for most commands. + + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) + +Check + +Flags used for rclone check. + + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) + +Networking + +General networking and HTTP stuff. 
+ + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name + --bwlimit BwTimetable Bandwidth limit in KiB/s, or use suffix B|K|M|G|T|P or a full timetable + --bwlimit-file BwTimetable Bandwidth limit per file in KiB/s, or use suffix B|K|M|G|T|P or a full timetable + --ca-cert stringArray CA certificate used to verify servers + --client-cert string Client SSL certificate (PEM) for mutual TLS auth + --client-key string Client SSL private key (PEM) for mutual TLS auth + --contimeout Duration Connect timeout (default 1m0s) + --disable-http-keep-alives Disable HTTP keep-alives and use each connection once. + --disable-http2 Disable HTTP/2 in the global transport + --dscp string Set DSCP value to connections, value or name, e.g. CS1, LE, DF, AF21 + --expect-continue-timeout Duration Timeout when using expect / 100-continue in HTTP (default 1s) + --header stringArray Set HTTP header for all transactions + --header-download stringArray Set HTTP header for download transactions + --header-upload stringArray Set HTTP header for upload transactions + --no-check-certificate Do not verify the server SSL certificate (insecure) + --no-gzip-encoding Don't set Accept-Encoding: gzip + --timeout Duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this + --tpslimit-burst int Max burst of transactions for --tpslimit (default 1) + --use-cookies Enable session cookiejar + --user-agent string Set the user-agent to a specified string (default "rclone/v1.64.0") + +Performance + +Flags helpful for increasing performance. + + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi) + --checkers int Number of checkers to run in parallel (default 8) + --transfers int Number of file transfers to run in parallel (default 4) + +Config + +General configuration of rclone. + + --ask-password Allow prompt for password for encrypted configuration (default true) + --auto-confirm If enabled, do not request console confirmation + --cache-dir string Directory rclone will use for caching (default "$HOME/.cache/rclone") + --color string When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default "AUTO") + --config string Config file (default "$HOME/.config/rclone/rclone.conf") + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --disable string Disable a comma separated list of features (use --disable help to see a list) + -n, --dry-run Do a trial run with no permanent changes + --error-on-no-transfer Sets exit code 9 if no files are transferred, useful in scripts + --fs-cache-expire-duration Duration Cache remotes for this long (0 to disable caching) (default 5m0s) + --fs-cache-expire-interval Duration Interval to check for expired remotes (default 1m0s) + --human-readable Print numbers in a human-readable format, sizes with suffix Ki|Mi|Gi|Ti|Pi + -i, --interactive Enable interactive mode + --kv-lock-time Duration Maximum time to keep key-value database locked by process (default 1s) + --low-level-retries int Number of low level retries to do (default 10) + --no-console Hide console window (supported on Windows only) + --no-unicode-normalization Don't normalize unicode characters in filenames + --password-command SpaceSepList Command for supplying password for encrypted configuration + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep Duration Interval between retrying operations if they fail, e.g. 
500ms, 60s, 5m (0 to disable) (default 0s) + --temp-dir string Directory rclone will use for temporary files (default "/tmp") + --use-mmap Use mmap allocator (see docs) + --use-server-modtime Use server modified time instead of object metadata + +Debugging + +Flags for developers. + + --cpuprofile string Write cpu profile to file + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --memprofile string Write memory profile to file + +Filter + +Flags for filtering directory listings. + + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) + +Listing + +Flags for listing directories. + + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions + +Logging + +Logging and statistics. + + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --log-systemd Activate systemd integration for the logger + --max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000) + -P, --progress Show progress during transfer + --progress-terminal-title Show progress on the terminal title (requires -P/--progress) + -q, --quiet Print as little stuff as possible + --stats Duration Interval between printing stats, e.g. 
500ms, 60s, 5m (0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats (0 for no limit) (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line + --stats-one-line-date Enable --stats-one-line and add current date/time prefix + --stats-one-line-date-format string Enable --stats-one-line-date and use custom formatted date: Enclose date string in double quotes ("), see https://golang.org/pkg/time/#Time.Format + --stats-unit string Show data rate in stats as either 'bits' or 'bytes' per second (default "bytes") + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, e.g. KERN,USER,... (default "DAEMON") + --use-json-log Use json log format + -v, --verbose count Print lots more stuff (repeat for more) + +Metadata + +Flags to control metadata. + + -M, --metadata If set, preserve metadata when copying objects + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --metadata-set stringArray Add metadata key=value when uploading + +RC + +Flags to control the Remote Control API. + + --rc Enable the remote control server + --rc-addr stringArray IPaddress:Port or :Port to bind server to (default [localhost:5572]) + --rc-allow-origin string Origin which cross-domain request (CORS) can be executed from + --rc-baseurl string Prefix for URLs - leave blank for root + --rc-cert string TLS PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-enable-metrics Enable prometheus metrics on /metrics + --rc-files string Path to local files to serve on the HTTP server + --rc-htpasswd string A htpasswd file - if not provided no authentication is done + --rc-job-expire-duration Duration Expire finished async jobs older than this value (default 1m0s) + --rc-job-expire-interval Duration Interval to check for expired async jobs (default 10s) + --rc-key string TLS PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-min-tls-version string Minimum TLS version that is acceptable (default "tls1.0") + --rc-no-auth Don't require auth for certain methods + --rc-pass string Password for authentication + --rc-realm string Realm for authentication + --rc-salt string Password hashing salt (default "dlPL2MqE") + --rc-serve Enable the serving of remote objects + --rc-server-read-timeout Duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s) + --rc-template string User-specified template + --rc-user string User name for authentication + --rc-web-fetch-url string URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest") + --rc-web-gui Launch WebGUI on localhost + --rc-web-gui-force-update Force update to latest version of web gui + --rc-web-gui-no-open-browser Don't open the browser automatically + --rc-web-gui-update Check and update 
to latest version of web gui + +Backend + +Backend only flags. These can be set in the config file also. --acd-auth-url string Auth server URL --acd-client-id string OAuth Client Id @@ -14771,8 +16453,6 @@ and may be set in the config file. --azureblob-env-auth Read credentials from runtime (environment variables, CLI or MSI) --azureblob-key string Storage Account Shared Key --azureblob-list-chunk int Size of blob list (default 5000) - --azureblob-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s) - --azureblob-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool --azureblob-msi-client-id string Object ID of the user-assigned MSI to use, if any --azureblob-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any --azureblob-msi-object-id string Object ID of the user-assigned MSI to use, if any @@ -14798,9 +16478,8 @@ and may be set in the config file. --b2-endpoint string Endpoint for the service --b2-hard-delete Permanently delete files on remote removal, otherwise hide files --b2-key string Application Key - --b2-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s) - --b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging + --b2-upload-concurrency int Concurrency for multipart uploads (default 16) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --b2-version-at Time Show file versions as they were at the specified time (default off) --b2-versions Include old versions in directory listings @@ -14812,6 +16491,7 @@ and may be set in the config file. --box-client-secret string OAuth Client Secret --box-commit-retries int Max number of times to try committing a multipart file (default 100) --box-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot) + --box-impersonate string Impersonate this user ID when using a service account --box-list-chunk int Size of listing chunk 1-1000 (default 1000) --box-owned-by string Only show items owned by the login (email address) passed in --box-root-folder-id string Fill in for rclone to use a non root folder as its starting point @@ -14871,6 +16551,7 @@ and may be set in the config file. --drive-encoding MultiEncoder The encoding for the backend (default InvalidUtf8) --drive-env-auth Get IAM credentials from runtime (environment variables or instance meta data if no env vars) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs (default "docx,xlsx,pptx,svg") + --drive-fast-list-bug-fix Work around a bug in Google Drive listing (default true) --drive-formats string Deprecated: See export_formats --drive-impersonate string Impersonate this user when using a service account --drive-import-formats string Comma separated list of preferred formats for uploading Google docs @@ -14946,6 +16627,7 @@ and may be set in the config file. 
--ftp-pass string FTP password (obscured) --ftp-port int FTP port number (default 21) --ftp-shut-timeout Duration Maximum time to wait for data connection closing status (default 1m0s) + --ftp-socks-proxy string Socks 5 proxy host --ftp-tls Use Implicit FTPS (FTP over TLS) --ftp-tls-cache-size int Size of TLS session cache for all control and data connections (default 32) --ftp-user string FTP username (default "$USER") @@ -15014,10 +16696,15 @@ and may be set in the config file. --internetarchive-front-endpoint string Host of InternetArchive Frontend (default "https://archive.org") --internetarchive-secret-access-key string IAS3 Secret Key (password) --internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s) + --jottacloud-auth-url string Auth server URL + --jottacloud-client-id string OAuth Client Id + --jottacloud-client-secret string OAuth Client Secret --jottacloud-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi) --jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them + --jottacloud-token string OAuth Access Token as a JSON blob + --jottacloud-token-url string Token server url --jottacloud-trashed-only Only show files that are in the trash --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's (default 10Mi) --koofr-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) @@ -15038,13 +16725,18 @@ and may be set in the config file. --local-nounc Disable UNC (long path names) conversion on Windows --local-unicode-normalization Apply unicode NFC normalization to paths and filenames --local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated) + --mailru-auth-url string Auth server URL --mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true) + --mailru-client-id string OAuth Client Id + --mailru-client-secret string OAuth Client Secret --mailru-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot) --mailru-pass string Password (obscured) --mailru-speedup-enable Skip full upload if there is another file with same data hash (default true) --mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash) (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf") --mailru-speedup-max-disk SizeSuffix This option allows you to disable speedup (put by hash) for large files (default 3Gi) --mailru-speedup-max-memory SizeSuffix Files larger than the size given below will always be hashed on disk (default 32Mi) + --mailru-token string OAuth Access Token as a JSON blob + --mailru-token-url string Token server url --mailru-user string User name (usually email) --mega-debug Output more debug from Mega --mega-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot) @@ -15078,6 +16770,7 @@ and may be set in the config file. 
--onedrive-server-side-across-configs Deprecated: use --server-side-across-configs instead --onedrive-token string OAuth Access Token as a JSON blob --onedrive-token-url string Token server url + --oos-attempt-resume-upload If true attempt to resume previously started multipart upload for the object --oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi) --oos-compartment string Object storage compartment OCID --oos-config-file string Path to OCI config file (default "~/.oci/config") @@ -15087,7 +16780,8 @@ and may be set in the config file. --oos-disable-checksum Don't store MD5 checksum with object metadata --oos-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot) --oos-endpoint string Endpoint for Object storage API - --oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery + --oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery + --oos-max-upload-parts int Maximum number of parts in a multipart upload (default 10000) --oos-namespace string Object storage namespace --oos-no-check-bucket If set, don't attempt to check the bucket exists or create it --oos-provider string Choose your Auth Provider (default "env_auth") @@ -15126,8 +16820,27 @@ and may be set in the config file. --pikpak-trashed-only Only show files that are in the trash --pikpak-use-trash Send files to the trash instead of deleting permanently (default true) --pikpak-user string Pikpak username + --premiumizeme-auth-url string Auth server URL + --premiumizeme-client-id string OAuth Client Id + --premiumizeme-client-secret string OAuth Client Secret --premiumizeme-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot) + --premiumizeme-token string OAuth Access Token as a JSON blob + --premiumizeme-token-url string Token server url + --protondrive-2fa string The 2FA code + --protondrive-app-version string The app version string (default "macos-drive@1.0.0-alpha.1+rclone") + --protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true) + --protondrive-encoding MultiEncoder The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot) + --protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured) + --protondrive-original-file-size Return the file size before encryption (default true) + --protondrive-password string The password of your proton account (obscured) + --protondrive-replace-existing-draft Create a new revision when filename conflict is detected + --protondrive-username string The username of your proton account + --putio-auth-url string Auth server URL + --putio-client-id string OAuth Client Id + --putio-client-secret string OAuth Client Secret --putio-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) + --putio-token string OAuth Access Token as a JSON blob + --putio-token-url string Token server url --qingstor-access-key-id string QingStor Access Key ID --qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi) --qingstor-connection-retries int Number of connection retries (default 3) @@ -15138,6 +16851,13 @@ and may be set in the config file. 
--qingstor-upload-concurrency int Concurrency for multipart uploads (default 1) --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --qingstor-zone string Zone to connect to + --quatrix-api-key string API key for accessing Quatrix account + --quatrix-effective-upload-time string Wanted upload time for one chunk (default "4s") + --quatrix-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) + --quatrix-hard-delete Delete files permanently rather than putting them into the trash + --quatrix-host string Host name of Quatrix account + --quatrix-maximal-summary-chunk-size SizeSuffix The maximal summary for all chunks. It should not be less than 'transfers'*'minimal_chunk_size' (default 95.367Mi) + --quatrix-minimal-chunk-size SizeSuffix The minimal size for one chunk (default 9.537Mi) --s3-access-key-id string AWS Access Key ID --s3-acl string Canned ACL used when creating buckets and storing or copying objects --s3-bucket-acl string Canned ACL used when creating buckets @@ -15158,8 +16878,6 @@ and may be set in the config file. --s3-list-version int Version of ListObjects to use: 1,2 or 0 for auto --s3-location-constraint string Location constraint - must be set to match the Region --s3-max-upload-parts int Maximum number of parts in a multipart upload (default 10000) - --s3-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s) - --s3-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool --s3-might-gzip Tristate Set this if the backend might gzip objects (default unset) --s3-no-check-bucket If set, don't attempt to check the bucket exists or create it --s3-no-head If set, don't HEAD uploaded objects to check integrity @@ -15225,14 +16943,21 @@ and may be set in the config file. --sftp-sha1sum-command string The command used to read sha1 hashes --sftp-shell-type string The type of SSH shell on remote server, if any --sftp-skip-links Set to skip any symlinks and any other non regular files + --sftp-socks-proxy string Socks 5 proxy host + --sftp-ssh SpaceSepList Path and arguments to external ssh binary --sftp-subsystem string Specifies the SSH2 subsystem on the remote host (default "sftp") --sftp-use-fstat If set use fstat instead of stat --sftp-use-insecure-cipher Enable the use of insecure ciphers and key exchange methods --sftp-user string SSH username (default "$USER") + --sharefile-auth-url string Auth server URL --sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi) + --sharefile-client-id string OAuth Client Id + --sharefile-client-secret string OAuth Client Secret --sharefile-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot) --sharefile-endpoint string Endpoint for API calls --sharefile-root-folder-id string ID of the root folder + --sharefile-token string OAuth Access Token as a JSON blob + --sharefile-token-url string Token server url --sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi) --sia-api-password string Sia Daemon API Password (obscured) --sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980") @@ -15908,10 +17633,16 @@ Command line syntax If exceeded, the bisync run will abort. (default: 50%) --force Bypass `--max-delete` safety check and run the sync. 
Consider using with `--verbose` + --create-empty-src-dirs Sync creation and deletion of empty directories. + (Not compatible with --remove-empty-dirs) --remove-empty-dirs Remove empty directories at the final cleanup step. -1, --resync Performs the resync run. Warning: Path1 files may overwrite Path2 versions. Consider using `--verbose` or `--dry-run` first. + --ignore-listing-checksum Do not use checksums for listings + (add --ignore-checksum to additionally skip post-copy checksum checks) + --resilient Allow future runs to retry after certain less-serious errors, + instead of requiring --resync. Use at your own risk! --localtime Use local time in listings (default: UTC) --no-cleanup Retain working files (useful for troubleshooting and testing). --workdir PATH Use custom working directory (useful for testing). @@ -15936,8 +17667,8 @@ with optional subdirectory paths. Cloud references are distinguished by having a : in the argument (see Windows support below). Path1 and Path2 are treated equally, in that neither has priority for -file changes, and access efficiency does not change whether a remote is -on Path1 or Path2. +file changes (except during --resync), and access efficiency does not +change whether a remote is on Path1 or Path2. The listings in bisync working directory (default: ~/.cache/rclone/bisync) are named based on the Path1 and Path2 arguments @@ -15945,9 +17676,10 @@ so that separate syncs to individual directories within the tree may be set up, e.g.: path_to_local_tree..dropbox_subdir.lst. Any empty directories after the sync on both the Path1 and Path2 -filesystems are not deleted by default. If the --remove-empty-dirs flag -is specified, then both paths will have any empty directories purged as -the last step in the process. +filesystems are not deleted by default, unless --create-empty-src-dirs +is specified. If the --remove-empty-dirs flag is specified, then both +paths will have ALL empty directories purged as the last step in the +process. Command-line flags @@ -15955,16 +17687,27 @@ Command-line flags This will effectively make both Path1 and Path2 filesystems contain a matching superset of all files. Path2 files that do not exist in Path1 -will be copied to Path1, and the process will then sync the Path1 tree +will be copied to Path1, and the process will then copy the Path1 tree to Path2. -The base directories on the both Path1 and Path2 filesystems must exist -or bisync will fail. This is required for safety - that bisync can -verify that both paths are valid. +The --resync sequence is roughly equivalent to: -When using --resync, a newer version of a file either on Path1 or Path2 -filesystem, will overwrite the file on the other path (only the last -version will be kept). Carefully evaluate deltas using --dry-run. + rclone copy Path2 Path1 --ignore-existing + rclone copy Path1 Path2 + +Or, if using --create-empty-src-dirs: + + rclone copy Path2 Path1 --ignore-existing + rclone copy Path1 Path2 --create-empty-src-dirs + rclone copy Path2 Path1 --create-empty-src-dirs + +The base directories on both Path1 and Path2 filesystems must exist or +bisync will fail. This is required for safety - that bisync can verify +that both paths are valid. + +When using --resync, a newer version of a file on the Path2 filesystem +will be overwritten by the Path1 filesystem version. (Note that this is +NOT entirely symmetrical.) Carefully evaluate deltas using --dry-run. For a resync run, one of the paths may be empty (no files in the path tree). 
The resync run should result in files on both paths, else a @@ -15981,17 +17724,31 @@ deleting everything in the other path. Access check files are an additional safety measure against data loss. bisync will ensure it can find matching RCLONE_TEST files in the same places in the Path1 and Path2 filesystems. RCLONE_TEST files are not -generated automatically. For --check-accessto succeed, you must first -either: A) Place one or more RCLONE_TEST files in the Path1 or Path2 -filesystem and then do either a run without --check-access or a --resync -to set matching files on both filesystems, or B) Set --check-filename to -a filename already in use in various locations throughout your sync'd -fileset. Time stamps and file contents are not important, just the names -and locations. If you have symbolic links in your sync tree it is -recommended to place RCLONE_TEST files in the linked-to directory tree -to protect against bisync assuming a bunch of deleted files if the -linked-to tree should not be accessible. See also the --check-filename -flag. +generated automatically. For --check-access to succeed, you must first +either: A) Place one or more RCLONE_TEST files in both systems, or B) +Set --check-filename to a filename already in use in various locations +throughout your sync'd fileset. Recommended methods for A) include: * +rclone touch Path1/RCLONE_TEST (create a new file) * +rclone copyto Path1/RCLONE_TEST Path2/RCLONE_TEST (copy an existing +file) * +rclone copy Path1/RCLONE_TEST Path2/RCLONE_TEST --include "RCLONE_TEST" +(copy multiple files at once, recursively) * create the files manually +(outside of rclone) * run bisync once without --check-access to set +matching files on both filesystems will also work, but is not preferred, +due to potential for user error (you are temporarily disabling the +safety feature). + +Note that --check-access is still enforced on --resync, so +bisync --resync --check-access will not work as a method of initially +setting the files (this is to ensure that bisync can't inadvertently +circumvent its own safety switch.) + +Time stamps and file contents for RCLONE_TEST files are not important, +just the names and locations. If you have symbolic links in your sync +tree it is recommended to place RCLONE_TEST files in the linked-to +directory tree to protect against bisync assuming a bunch of deleted +files if the linked-to tree should not be accessible. See also the +--check-filename flag. --check-filename @@ -16012,7 +17769,7 @@ to bisync as a bunch of deleted files and a bunch of new files. This safety check is intended to block bisync from deleting all of the files on both filesystems due to a temporary network access issue, or if the user had inadvertently deleted the files on one side or the other. To -force the sync either set a different delete percentage limit, e.g. +force the sync, either set a different delete percentage limit, e.g. --max-delete 75 (allows up to 75% deletion), or use --force to bypass the check. @@ -16026,19 +17783,19 @@ sub-trees from the sync. See the bisync filters section and generic for non-allowed files for synching with Dropbox. If you make changes to your filters file then bisync requires a run with ---resync. This is a safety feature, which avoids existing files on the +--resync. 
This is a safety feature, which prevents existing files on the Path1 and/or Path2 side from seeming to disappear from view (since they are excluded in the new listings), which would fool bisync into seeing them as deleted (as compared to the prior run listings), and then bisync would proceed to delete them for real. -To block this from happening bisync calculates an MD5 hash of the +To block this from happening, bisync calculates an MD5 hash of the filters file and stores the hash in a .md5 file in the same place as -your filters file. On the next runs with --filters-file set, bisync +your filters file. On the next run with --filters-file set, bisync re-calculates the MD5 hash of the current filters file and compares it -to the hash stored in .md5 file. If they don't match the run aborts with -a critical error and thus forces you to do a --resync, likely avoiding a -disaster. +to the hash stored in the .md5 file. If they don't match, the run aborts +with a critical error and thus forces you to do a --resync, likely +avoiding a disaster. --check-sync @@ -16058,6 +17815,68 @@ significantly reduce the sync run times for very large numbers of files. The check may be run manually with --check-sync=only. It runs only the integrity check and terminates without actually synching. +See also: Concurrent modifications + +--ignore-listing-checksum + +By default, bisync will retrieve (or generate) checksums (for backends +that support them) when creating the listings for both paths, and store +the checksums in the listing files. --ignore-listing-checksum will +disable this behavior, which may speed things up considerably, +especially on backends (such as local) where hashes must be computed on +the fly instead of retrieved. Please note the following: + +- While checksums are (by default) generated and stored in the listing + files, they are NOT currently used for determining diffs (deltas). + It is anticipated that full checksum support will be added in a + future version. +- --ignore-listing-checksum is NOT the same as --ignore-checksum, and + you may wish to use one or the other, or both. In a nutshell: + --ignore-listing-checksum controls whether checksums are considered + when scanning for diffs, while --ignore-checksum controls whether + checksums are considered during the copy/sync operations that + follow, if there ARE diffs. +- Unless --ignore-listing-checksum is passed, bisync currently + computes hashes for one path even when there's no common hash with + the other path (for example, a crypt remote.) +- If both paths support checksums and have a common hash, AND + --ignore-listing-checksum was not specified when creating the + listings, --check-sync=only can be used to compare Path1 vs. Path2 + checksums (as of the time the previous listings were created.) + However, --check-sync=only will NOT include checksums if the + previous listings were generated on a run using + --ignore-listing-checksum. For a more robust integrity check of the + current state, consider using check (or cryptcheck, if at least one + path is a crypt remote.) + +--resilient + +Caution: this is an experimental feature. Use at your own risk! + +By default, most errors or interruptions will cause bisync to abort and +require --resync to recover. This is a safety feature, to prevent bisync +from running again until a user checks things out. 
However, in some +cases, bisync can go too far and enforce a lockout when one isn't +actually necessary, like for certain less-serious errors that might +resolve themselves on the next run. When --resilient is specified, +bisync tries its best to recover and self-correct, and only requires +--resync as a last resort when a human's involvement is absolutely +necessary. The intended use case is for running bisync as a background +process (such as via scheduled cron). + +When using --resilient mode, bisync will still report the error and +abort, however it will not lock out future runs -- allowing the +possibility of retrying at the next normally scheduled time, without +requiring a --resync first. Examples of such retryable errors include +access test failures, missing listing files, and filter change +detections. These safety features will still prevent the current run +from proceeding -- the difference is that if conditions have improved by +the time of the next run, that next run will be allowed to proceed. +Certain more serious errors will still enforce a --resync lockout, even +in --resilient mode, to prevent data loss. + +Behavior of --resilient may change in a future version. + Operation Runtime flow details @@ -16118,10 +17937,17 @@ Unusual sync checks ---------------------------------------------------------------------------- Type Description Result Implementation ----------------- --------------------- ------------------- ---------------- + Path1 new/changed File is new/changed No change None + AND Path2 on Path1 AND + new/changed AND new/changed on Path2 + Path1 == Path2 AND Path1 version is + currently identical + to Path2 + Path1 new AND File is new on Path1 Files renamed to rclone copy - Path2 new AND new on Path2 _Path1 and _Path2 _Path2 file to - Path1, - rclone copy + Path2 new AND new on Path2 (and _Path1 and _Path2 _Path2 file to + Path1 version is NOT Path1, + identical to Path2) rclone copy _Path1 file to Path2 @@ -16129,8 +17955,9 @@ Unusual sync checks Path1 changed Path2 AND also _Path1 and _Path2 _Path2 file to changed Path1, (newer/older/size) on rclone copy - Path1 _Path1 file to - Path2 + Path1 (and Path1 _Path1 file to + version is NOT Path2 + identical to Path2) Path2 newer AND File is newer on Path2 version rclone copy Path1 deleted Path2 AND also survives Path2 to Path1 @@ -16147,9 +17974,20 @@ Unusual sync checks Path2 ---------------------------------------------------------------------------- +As of rclone v1.64, bisync is now better at detecting false positive +sync conflicts, which would previously have resulted in unnecessary +renames and duplicates. Now, when bisync comes to a file that it wants +to rename (because it is new/changed on both sides), it first checks +whether the Path1 and Path2 versions are currently identical (using the +same underlying function as check.) If bisync concludes that the files +are identical, it will skip them and move on. Otherwise, it will create +renamed ..Path1 and ..Path2 duplicates, as before. This behavior also +improves the experience of renaming directories, as a --resync is no +longer required, so long as the same change has been made on both sides. + All files changed check -if all prior existing files on either of the filesystems have changed +If all prior existing files on either of the filesystems have changed (e.g. timestamps have changed due to changing the system's timezone) then bisync will abort without making any changes. Any new files are not considered for this check. 
You could use --force to force the sync @@ -16186,7 +18024,7 @@ It is recommended to use --resync --dry-run --verbose initially and carefully review what changes will be made before running the --resync without --dry-run. -Most of these events come up due to a error status from an internal +Most of these events come up due to an error status from an internal call. On such a critical error the {...}.path1.lst and {...}.path2.lst listing files are renamed to extension .lst-err, which blocks any future bisync runs (since the normal .lst files are not found). Bisync keeps @@ -16196,6 +18034,8 @@ at ${HOME}/.cache/rclone/bisync/ on Linux. Some errors are considered temporary and re-running the bisync is not blocked. The critical return blocks further bisync runs. +See also: --resilient + Lock file When bisync is running, a lock file is created in the bisync working @@ -16230,10 +18070,9 @@ It has not been fully tested with other services yet. If it works, or sorta works, please let us know and we'll update the list. Run the test suite to check for proper operation as described below. -First release of rclone bisync requires that underlying backend -supported the modification time feature and will refuse to run -otherwise. This limitation will be lifted in a future rclone bisync -release. +First release of rclone bisync requires that underlying backend supports +the modification time feature and will refuse to run otherwise. This +limitation will be lifted in a future rclone bisync release. Concurrent modifications @@ -16245,36 +18084,104 @@ be solved in a future release, there is no workaround at the moment. Files that change during a bisync run may result in data loss. This has been seen in a highly dynamic environment, where the filesystem is -getting hammered by running processes during the sync. The solution is -to sync at quiet times or filter out unnecessary directories and files. +getting hammered by running processes during the sync. The currently +recommended solution is to sync at quiet times or filter out unnecessary +directories and files. + +As an alternative approach, consider using --check-sync=false (and +possibly --resilient) to make bisync more forgiving of filesystems that +change during the sync. Be advised that this may cause bisync to miss +events that occur during a bisync run, so it is a good idea to +supplement this with a periodic independent integrity check, and +corrective sync if diffs are found. For example, a possible sequence +could look like this: + +1. Normally scheduled bisync run: + + rclone bisync Path1 Path2 -MPc --check-access --max-delete 10 --filters-file /path/to/filters.txt -v --check-sync=false --no-cleanup --ignore-listing-checksum --disable ListR --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient + +2. Periodic independent integrity check (perhaps scheduled nightly or + weekly): + + rclone check -MvPc Path1 Path2 --filter-from /path/to/filters.txt + +3. If diffs are found, you have some choices to correct them. If one + side is more up-to-date and you want to make the other side match + it, you could run: + + rclone sync Path1 Path2 --filter-from /path/to/filters.txt --create-empty-src-dirs -MPc -v + +(or switch Path1 and Path2 to make Path2 the source-of-truth) + +Or, if neither side is totally up-to-date, you could run a --resync to +bring them back into agreement (but remember that this could cause +deleted files to re-appear.) 
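If you do opt for that corrective --resync, a minimal sketch (reusing
the illustrative filter file path from the examples above) might look
like:

    rclone bisync Path1 Path2 --resync --filters-file /path/to/filters.txt -MPc -v

As noted above, consider evaluating it with --dry-run first, since a
resync can cause deleted files to re-appear.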
+ +*Note also that rclone check does not currently include empty +directories, so if you want to know if any empty directories are out of +sync, consider alternatively running the above rclone sync command with +--dry-run added. Empty directories -New empty directories on one path are not propagated to the other side. -This is because bisync (and rclone) natively works on files not -directories. The following sequence is a workaround but will not -propagate the delete of an empty directory to the other side: - - rclone bisync PATH1 PATH2 - rclone copy PATH1 PATH2 --filter "+ */" --filter "- **" --create-empty-src-dirs - rclone copy PATH2 PATH2 --filter "+ */" --filter "- **" --create-empty-src-dirs +By default, new/deleted empty directories on one path are not propagated +to the other side. This is because bisync (and rclone) natively works on +files, not directories. However, this can be changed with the +--create-empty-src-dirs flag, which works in much the same way as in +sync and copy. When used, empty directories created or deleted on one +side will also be created or deleted on the other side. The following +should be noted: * --create-empty-src-dirs is not compatible with +--remove-empty-dirs. Use only one or the other (or neither). * It is not +recommended to switch back and forth between --create-empty-src-dirs and +the default (no --create-empty-src-dirs) without running --resync. This +is because it may appear as though all directories (not just the empty +ones) were created/deleted, when actually you've just toggled between +making them visible/invisible to bisync. It looks scarier than it is, +but it's still probably best to stick to one or the other, and use +--resync when you need to switch. Renamed directories -Renaming a folder on the Path1 side results is deleting all files on the +Renaming a folder on the Path1 side results in deleting all files on the Path2 side and then copying all files again from Path1 to Path2. Bisync sees this as all files in the old directory name as deleted and all -files in the new directory name as new. Similarly, renaming a directory -on both sides to the same name will result in creating ..path1 and -..path2 files on both sides. Currently the most effective and efficient -method of renaming a directory is to rename it on both sides, then do a ---resync. +files in the new directory name as new. Currently, the most effective +and efficient method of renaming a directory is to rename it to the same +name on both sides. (As of rclone v1.64, a --resync is no longer +required after doing so, as bisync will automatically detect that Path1 +and Path2 are in agreement.) + +--fast-list used by default + +Unlike most other rclone commands, bisync uses --fast-list by default, +for backends that support it. In many cases this is desirable, however, +there are some scenarios in which bisync could be faster without +--fast-list, and there is also a known issue concerning Google Drive +users with many empty directories. For now, the recommended way to avoid +using --fast-list is to add --disable ListR to all bisync commands. The +default behavior may change in a future version. + +Overridden Configs + +When rclone detects an overridden config, it adds a suffix like {ABCDE} +on the fly to the internal name of the remote. Bisync follows suit by +including this suffix in its listing filenames. However, this suffix +does not necessarily persist from run to run, especially if different +flags are provided. 
So if next time the suffix assigned is {FGHIJ}, +bisync will get confused, because it's looking for a listing file with +{FGHIJ}, when the file it wants has {ABCDE}. As a result, it throws +Bisync critical error: cannot find prior Path1 or Path2 listings, likely due to critical error on prior run +and refuses to run again until the user runs a --resync (unless using +--resilient). The best workaround at the moment is to set any +backend-specific flags in the config file instead of specifying them +with command flags. (You can still override them as needed for other +rclone commands.) Case sensitivity Synching with case-insensitive filesystems, such as Windows or Box, can result in file name conflicts. This will be fixed in a future release. -The near term workaround is to make sure that files on both sides don't +The near-term workaround is to make sure that files on both sides don't have spelling case differences (Smile.jpg vs. smile.jpg). Windows support @@ -16339,7 +18246,7 @@ Filters file writing guidelines faster. - Specific files may also be excluded, as with the Dropbox exclusions example below. -2. Decide if its easier (or cleaner) to: +2. Decide if it's easier (or cleaner) to: - Include select directories and therefore exclude everything else -- or -- - Exclude select directories and therefore include everything else @@ -16360,7 +18267,7 @@ Filters file writing guidelines -/Desktop/tempfiles/, or `- /testdir/. Again, a**` on the end is not necessary. - Do not add a `- **` in the file. Without this line, everything - will be included that has not be explicitly excluded. + will be included that has not been explicitly excluded. - Disregard step 3. A few rules for the syntax of a filter file expanding on filtering @@ -16502,7 +18409,7 @@ The --dry-run messages may indicate that it would try to delete some files. For example, if a file is new on Path2 and does not exist on Path1 then it would normally be copied to Path1, but with --dry-run enabled those copies don't happen, which leads to the attempted delete -on the Path2, blocked again by --dry-run: ... Not deleting as --dry-run. +on Path2, blocked again by --dry-run: ... Not deleting as --dry-run. This whole confusing situation is an artifact of the --dry-run flag. Scrutinize the proposed deletes carefully, and if the files would have @@ -16511,15 +18418,15 @@ disregarded. Retries -Rclone has built in retries. If you run with --verbose you'll see error +Rclone has built-in retries. If you run with --verbose you'll see error and retry messages such as shown below. This is usually not a bug. If at -the end of the run you see Bisync successful and not +the end of the run, you see Bisync successful and not Bisync critical error or Bisync aborted then the run was successful, and you can ignore the error messages. -The following run shows an intermittent fail. Lines 5 and _6- are low -level messages. Line 6 is a bubbled-up warning message, conveying the -error. Rclone normally retries failing commands, so there may be +The following run shows an intermittent fail. Lines 5 and _6- are +low-level messages. Line 6 is a bubbled-up warning message, conveying +the error. Rclone normally retries failing commands, so there may be numerous such messages in the log. 
Since there are no final error/warning messages on line 7, rclone has @@ -16591,8 +18498,7 @@ and an OwnCloud server, with output logged to a runlog file: # Command */5 * * * * /path/to/rclone bisync /local/files MyCloud: --check-access --filters-file /path/to/bysync-filters.txt --log-file /path/to//bisync.log -See crontab syntax). for the details of crontab time interval -expressions. +See crontab syntax for the details of crontab time interval expressions. If you run rclone bisync as a cron job, redirect stdout/stderr to a file. The 2nd example runs a sync to Dropbox every hour and logs all @@ -16777,9 +18683,9 @@ Notes about testing check file mismatches in the test tree. - Some Dropbox tests can fail, notably printing the following message: src and dst identical but can't set mod time without deleting and re-uploading - This is expected and happens due a way Dropbox handles modification - times. You should use the -refresh-times test flag to make up for - this. + This is expected and happens due to the way Dropbox handles + modification times. You should use the -refresh-times test flag to + make up for this. - If Dropbox tests hit request limit for you and print error message too_many_requests/...: Too many requests or write operations. then follow the Dropbox App ID instructions. @@ -16935,11 +18841,173 @@ rclone bisync is similar in nature to a range of other projects: Bisync adopts the differential synchronization technique, which is based on keeping history of changes performed by both synchronizing sides. See -the Dual Shadow Method section in the Neil Fraser's article. +the Dual Shadow Method section in Neil Fraser's article. Also note a number of academic publications by Benjamin Pierce about Unison and synchronization in general. +Changelog + +v1.64 + +- Fixed an issue causing dry runs to inadvertently commit filter + changes +- Fixed an issue causing --resync to erroneously delete empty folders + and duplicate files unique to Path2 +- --check-access is now enforced during --resync, preventing data loss + in certain user error scenarios +- Fixed an issue causing bisync to consider more files than necessary + due to overbroad filters during delete operations +- Improved detection of false positive change conflicts (identical + files are now left alone instead of renamed) +- Added support for --create-empty-src-dirs +- Added experimental --resilient mode to allow recovery from + self-correctable errors +- Added new --ignore-listing-checksum flag to distinguish from + --ignore-checksum +- Performance improvements for large remotes +- Documentation and testing improvements + +Release signing + +The hashes of the binary artefacts of the rclone release are signed with +a public PGP/GPG key. This can be verified manually as described below. + +The same mechanism is also used by rclone selfupdate to verify that the +release has not been tampered with before the new update is installed. +This checks the SHA256 hash and the signature with a public key compiled +into the rclone binary. + +Release signing key + +You may obtain the release signing key from: + +- From KEYS on this website - this file contains all past signing keys + also. 
- The git repository hosted on GitHub -
  https://github.com/rclone/rclone/blob/master/docs/content/KEYS
- gpg --keyserver hkps://keys.openpgp.org --search nick@craig-wood.com
- gpg --keyserver hkps://keyserver.ubuntu.com --search nick@craig-wood.com
- https://www.craig-wood.com/nick/pub/pgp-key.txt

After importing the key, verify that the fingerprint of one of the keys
matches: FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA as this key is used
for signing.

We recommend that you cross-check the fingerprint shown above through
the domains listed below. By cross-checking the integrity of the
fingerprint across multiple domains you can be confident that you
obtained the correct key.

- The source for this page on GitHub.
- Through DNS dig key.rclone.org txt

If you find anything that doesn't match, please contact the developers
at once.

How to verify the release

In the release directory you will see the release files and some files
called MD5SUMS, SHA1SUMS and SHA256SUMS.

    $ rclone lsf --http-url https://downloads.rclone.org/v1.63.1 :http:
    MD5SUMS
    SHA1SUMS
    SHA256SUMS
    rclone-v1.63.1-freebsd-386.zip
    rclone-v1.63.1-freebsd-amd64.zip
    ...
    rclone-v1.63.1-windows-arm64.zip
    rclone-v1.63.1.tar.gz
    version.txt

The MD5SUMS, SHA1SUMS and SHA256SUMS contain hashes of the binary files
in the release directory along with a signature.

For example:

    $ rclone cat --http-url https://downloads.rclone.org/v1.63.1 :http:SHA256SUMS
    -----BEGIN PGP SIGNED MESSAGE-----
    Hash: SHA1

    f6d1b2d7477475ce681bdce8cb56f7870f174cb6b2a9ac5d7b3764296ea4a113  rclone-v1.63.1-freebsd-386.zip
    7266febec1f01a25d6575de51c44ddf749071a4950a6384e4164954dff7ac37e  rclone-v1.63.1-freebsd-amd64.zip
    ...
    66ca083757fb22198309b73879831ed2b42309892394bf193ff95c75dff69c73  rclone-v1.63.1-windows-amd64.zip
    bbb47c16882b6c5f2e8c1b04229378e28f68734c613321ef0ea2263760f74cd0  rclone-v1.63.1-windows-arm64.zip
    -----BEGIN PGP SIGNATURE-----

    iF0EARECAB0WIQT79zfs6firGGBL0qyTk14C/ztU+gUCZLVKJQAKCRCTk14C/ztU
    +pZuAJ0XJ+QWLP/3jCtkmgcgc4KAwd/rrwCcCRZQ7E+oye1FPY46HOVzCFU3L7g=
    =8qrL
    -----END PGP SIGNATURE-----

Download the files

The first step is to download the binary and SUMs file and verify that
the SUMs you have downloaded match. Here we download
rclone-v1.63.1-windows-amd64.zip - choose the binary (or binaries)
appropriate to your architecture. We've also chosen the SHA256SUMS as
these are the most secure. You could verify the other types of hash also
for extra security. rclone selfupdate verifies just the SHA256SUMS.

    $ mkdir /tmp/check
    $ cd /tmp/check
    $ rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:SHA256SUMS .
    $ rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:rclone-v1.63.1-windows-amd64.zip .

Verify the signatures

First verify the signatures on the SHA256 file.

Import the key. See above for ways to verify this key is correct.

    $ gpg --keyserver keyserver.ubuntu.com --receive-keys FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA
    gpg: key 93935E02FF3B54FA: public key "Nick Craig-Wood <nick@craig-wood.com>" imported
    gpg: Total number processed: 1
    gpg: imported: 1

Then check the signature:

    $ gpg --verify SHA256SUMS
    gpg: Signature made Mon 17 Jul 2023 15:03:17 BST
    gpg: using DSA key FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA
    gpg: Good signature from "Nick Craig-Wood <nick@craig-wood.com>" [ultimate]

Verify the signature was good and is using the fingerprint shown above.

Repeat for MD5SUMS and SHA1SUMS if desired.
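For example, repeating the signature check for SHA1SUMS follows exactly
the same pattern (this assumes the key has already been imported as
shown above):

    $ rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:SHA1SUMS .
    $ gpg --verify SHA1SUMS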
+ +Verify the hashes + +Now that we know the signatures on the hashes are OK we can verify the +binaries match the hashes, completing the verification. + + $ sha256sum -c SHA256SUMS 2>&1 | grep OK + rclone-v1.63.1-windows-amd64.zip: OK + +Or do the check with rclone + + $ rclone hashsum sha256 -C SHA256SUMS rclone-v1.63.1-windows-amd64.zip + 2023/09/11 10:53:58 NOTICE: SHA256SUMS: improperly formatted checksum line 0 + 2023/09/11 10:53:58 NOTICE: SHA256SUMS: improperly formatted checksum line 1 + 2023/09/11 10:53:58 NOTICE: SHA256SUMS: improperly formatted checksum line 49 + 2023/09/11 10:53:58 NOTICE: SHA256SUMS: 4 warning(s) suppressed... + = rclone-v1.63.1-windows-amd64.zip + 2023/09/11 10:53:58 NOTICE: Local file system at /tmp/check: 0 differences found + 2023/09/11 10:53:58 NOTICE: Local file system at /tmp/check: 1 matching files + +Verify signatures and hashes together + +You can verify the signatures and hashes in one command line like this: + + $ gpg --decrypt SHA256SUMS | sha256sum -c --ignore-missing + gpg: Signature made Mon 17 Jul 2023 15:03:17 BST + gpg: using DSA key FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA + gpg: Good signature from "Nick Craig-Wood " [ultimate] + gpg: aka "Nick Craig-Wood " [unknown] + rclone-v1.63.1-windows-amd64.zip: OK + 1Fichier This is a backend for the 1fichier cloud storage service. Note that a @@ -17591,6 +19659,7 @@ The S3 backend can be used with a number of different providers: - IBM COS S3 - IDrive e2 - IONOS Cloud +- Leviia Object Storage - Liara Object Storage - Minio - Petabox @@ -17601,6 +19670,7 @@ The S3 backend can be used with a number of different providers: - SeaweedFS - StackPath - Storj +- Synology C2 Object Storage - Tencent Cloud Object Storage (COS) - Wasabi @@ -18019,6 +20089,19 @@ Clean up all the old versions and show that they've gone. $ rclone -q --s3-versions ls s3:cleanup-test 9 one.txt +Versions naming caveat + +When using --s3-versions flag rclone is relying on the file name to work +out whether the objects are versions or not. Versions' names are created +by inserting timestamp between file name and its extension. + + 9 file.txt + 8 file-v2023-07-17-161032-000.txt + 16 file-v2023-06-15-141003-000.txt + +If there are real files present with the same names as versions, then +behaviour of --s3-versions can be unpredictable. + Cleanup If you run rclone cleanup s3:bucket then it will remove all pending @@ -18215,9 +20298,9 @@ Standard options Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, -IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, Petabox, -RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and -Wasabi). +IDrive e2, IONOS Cloud, Leviia, Liara, Lyve Cloud, Minio, Netease, +Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, +Tencent COS, Qiniu and Wasabi). --s3-provider @@ -18258,6 +20341,8 @@ Properties: - IONOS Cloud - "LyveCloud" - Seagate Lyve Cloud + - "Leviia" + - Leviia Object Storage - "Liara" - Liara Object Storage - "Minio" @@ -18276,6 +20361,8 @@ Properties: - StackPath Object Storage - "Storj" - Storj (S3 Compatible Gateway) + - "Synology" + - Synology C2 Object Storage - "TencentCOS" - Tencent Cloud Object Storage (COS) - "Wasabi" @@ -18629,6 +20716,29 @@ Properties: --s3-region +Region where your data stored. 
+ +Properties: + +- Config: region +- Env Var: RCLONE_S3_REGION +- Provider: Synology +- Type: string +- Required: false +- Examples: + - "eu-001" + - Europe Region 1 + - "eu-002" + - Europe Region 2 + - "us-001" + - US Region 1 + - "us-002" + - US Region 2 + - "tw-001" + - Asia (Taiwan) + +--s3-region + Region to connect to. Leave blank if you are using an S3 clone and you don't have a region. @@ -18638,7 +20748,7 @@ Properties: - Config: region - Env Var: RCLONE_S3_REGION - Provider: - !AWS,Alibaba,ArvanCloud,ChinaMobile,Cloudflare,IONOS,Petabox,Liara,Qiniu,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive + !AWS,Alibaba,ArvanCloud,ChinaMobile,Cloudflare,IONOS,Petabox,Liara,Qiniu,RackCorp,Scaleway,Storj,Synology,TencentCOS,HuaweiOBS,IDrive - Type: string - Required: false - Examples: @@ -18944,6 +21054,22 @@ Properties: --s3-endpoint +Endpoint for Leviia Object Storage API. + +Properties: + +- Config: endpoint +- Env Var: RCLONE_S3_ENDPOINT +- Provider: Leviia +- Type: string +- Required: false +- Examples: + - "s3.leviia.com" + - The default endpoint + - Leviia + +--s3-endpoint + Endpoint for Liara Object Storage API. Properties: @@ -19134,6 +21260,29 @@ Properties: --s3-endpoint +Endpoint for Synology C2 Object Storage API. + +Properties: + +- Config: endpoint +- Env Var: RCLONE_S3_ENDPOINT +- Provider: Synology +- Type: string +- Required: false +- Examples: + - "eu-001.s3.synologyc2.net" + - EU Endpoint 1 + - "eu-002.s3.synologyc2.net" + - EU Endpoint 2 + - "us-001.s3.synologyc2.net" + - US Endpoint 1 + - "us-002.s3.synologyc2.net" + - US Endpoint 2 + - "tw-001.s3.synologyc2.net" + - TW Endpoint 1 + +--s3-endpoint + Endpoint for Tencent COS API. Properties: @@ -19272,7 +21421,7 @@ Properties: - Config: endpoint - Env Var: RCLONE_S3_ENDPOINT - Provider: - !AWS,ArvanCloud,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,GCS,Liara,Scaleway,StackPath,Storj,RackCorp,Qiniu,Petabox + !AWS,ArvanCloud,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,GCS,Liara,Scaleway,StackPath,Storj,Synology,RackCorp,Qiniu,Petabox - Type: string - Required: false - Examples: @@ -19661,7 +21810,7 @@ Properties: - Config: location_constraint - Env Var: RCLONE_S3_LOCATION_CONSTRAINT - Provider: - !AWS,Alibaba,ArvanCloud,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,Liara,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS,Petabox + !AWS,Alibaba,ArvanCloud,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,Leviia,Liara,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS,Petabox - Type: string - Required: false @@ -19685,7 +21834,7 @@ Properties: - Config: acl - Env Var: RCLONE_S3_ACL -- Provider: !Storj,Cloudflare +- Provider: !Storj,Synology,Cloudflare - Type: string - Required: false - Examples: @@ -19953,9 +22102,9 @@ Advanced options Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, -IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, Petabox, -RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and -Wasabi). +IDrive e2, IONOS Cloud, Leviia, Liara, Lyve Cloud, Minio, Netease, +Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, +Tencent COS, Qiniu and Wasabi). --s3-bucket-acl @@ -20450,11 +22599,7 @@ Properties: --s3-memory-pool-flush-time -How often internal memory buffer pools will be flushed. 
- -Uploads which requires additional buffers (f.e multipart) will use -memory pool for allocations. This option controls how often unused -buffers will be removed from the pool. +How often internal memory buffer pools will be flushed. (no longer used) Properties: @@ -20465,7 +22610,7 @@ Properties: --s3-memory-pool-use-mmap -Whether to use mmap buffers in internal memory pool. +Whether to use mmap buffers in internal memory pool. (no longer used) Properties: @@ -20744,18 +22889,18 @@ normal storage. Usage Examples: - rclone backend restore s3:bucket/path/to/object [-o priority=PRIORITY] [-o lifetime=DAYS] - rclone backend restore s3:bucket/path/to/directory [-o priority=PRIORITY] [-o lifetime=DAYS] - rclone backend restore s3:bucket [-o priority=PRIORITY] [-o lifetime=DAYS] + rclone backend restore s3:bucket/path/to/object -o priority=PRIORITY -o lifetime=DAYS + rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS + rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS This flag also obeys the filters. Test first with --interactive/-i or --dry-run flags - rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard + rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1 All the objects shown will be marked for restore, then - rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard + rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1 It returns a list of status dictionaries with Remote and Status keys. The Status will be OK if it was successful or an error message if not. @@ -20763,11 +22908,11 @@ The Status will be OK if it was successful or an error message if not. [ { "Status": "OK", - "Path": "test.txt" + "Remote": "test.txt" }, { "Status": "OK", - "Path": "test/file4.txt" + "Remote": "test/file4.txt" } ] @@ -20777,6 +22922,52 @@ Options: - "lifetime": Lifetime of the active copy in days - "priority": Priority of restore: Standard|Expedited|Bulk +restore-status + +Show the restore status for objects being restored from GLACIER to +normal storage + + rclone backend restore-status remote: [options] [+] + +This command can be used to show the status for objects being restored +from GLACIER to normal storage. + +Usage Examples: + + rclone backend restore-status s3:bucket/path/to/object + rclone backend restore-status s3:bucket/path/to/directory + rclone backend restore-status -o all s3:bucket/path/to/directory + +This command does not obey the filters. + +It returns a list of status dictionaries. + + [ + { + "Remote": "file.txt", + "VersionID": null, + "RestoreStatus": { + "IsRestoreInProgress": true, + "RestoreExpiryDate": "2023-09-06T12:29:19+01:00" + }, + "StorageClass": "GLACIER" + }, + { + "Remote": "test.pdf", + "VersionID": null, + "RestoreStatus": { + "IsRestoreInProgress": false, + "RestoreExpiryDate": "2023-09-06T12:29:19+01:00" + }, + "StorageClass": "DEEP_ARCHIVE" + } + ] + +Options: + +- "all": if set then show all objects, not just ones with restore + status + list-multipart-uploads List the unfinished multipart uploads @@ -20866,6 +23057,29 @@ It may return "Enabled", "Suspended" or "Unversioned". Note that once versioning has been enabled the status can't be set back to "Unversioned". +set + +Set command for updating the config parameters. + + rclone backend set remote: [options] [+] + +This set command can be used to update the config parameters for a +running s3 backend. 
+ +Usage Examples: + + rclone backend set s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2] + rclone rc backend/command command=set fs=s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2] + rclone rc backend/command command=set fs=s3: -o session_token=X -o access_key_id=X -o secret_access_key=X + +The option keys are named as they are in the config file. + +This rebuilds the connection to the s3 backend when it is called with +the new parameters. Only new parameters need be passed as the values +will default to those currently in use. + +It doesn't return anything. + Anonymous access to public buckets If you want to use rclone to access a public bucket, configure with a @@ -21000,7 +23214,7 @@ of a bucket publicly. Type of storage to configure. Choose a number from below, or type in your own value. ... - XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi + XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi \ (s3) ... Storage> s3 @@ -21184,7 +23398,7 @@ Or you can also configure via the interactive command line: Type of storage to configure. Choose a number from below, or type in your own value. [snip] - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi + 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi \ (s3) [snip] Storage> 5 @@ -21475,7 +23689,7 @@ This will guide you through an interactive setup process. Type of storage to configure. Choose a number from below, or type in your own value. [snip] - XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi + XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi \ (s3) [snip] Storage> s3 @@ -21581,7 +23795,7 @@ Type s3 to choose the connection type: Type of storage to configure. Choose a number from below, or type in your own value. 
[snip] - XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi + XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi \ (s3) [snip] Storage> s3 @@ -21815,7 +24029,7 @@ To configure access to Qiniu Kodo, follow the steps below: \ (alias) 4 / Amazon Drive \ (amazon cloud drive) - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi + 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi \ (s3) [snip] Storage> s3 @@ -22663,6 +24877,119 @@ This will guide you through an interactive setup process. d) Delete this remote y/e/d> y +Leviia Cloud Object Storage + +Leviia Object Storage, backup and secure your data in a 100% French +cloud, independent of GAFAM.. + +To configure access to Leviia, follow the steps below: + +1. Run rclone config and select n for a new remote. + + rclone config + No remotes found, make a new one? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + +2. Give the name of the configuration. For example, name it 'leviia'. + + name> leviia + +3. Select s3 storage. + + Choose a number from below, or type in your own value + 1 / 1Fichier + \ (fichier) + 2 / Akamai NetStorage + \ (netstorage) + 3 / Alias for an existing remote + \ (alias) + 4 / Amazon Drive + \ (amazon cloud drive) + 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi + \ (s3) + [snip] + Storage> s3 + +4. Select Leviia provider. + + Choose a number from below, or type in your own value + 1 / Amazon Web Services (AWS) S3 + \ "AWS" + [snip] + 15 / Leviia Object Storage + \ (Leviia) + [snip] + provider> Leviia + +5. Enter your SecretId and SecretKey of Leviia. + + Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + Only applies if access_key_id and secret_access_key is blank. + Enter a boolean value (true or false). Press Enter for the default ("false"). + Choose a number from below, or type in your own value + 1 / Enter AWS credentials in the next step + \ "false" + 2 / Get AWS credentials from the environment (env vars or IAM) + \ "true" + env_auth> 1 + AWS Access Key ID. + Leave blank for anonymous access or runtime credentials. + Enter a string value. Press Enter for the default (""). + access_key_id> ZnIx.xxxxxxxxxxxxxxx + AWS Secret Access Key (password) + Leave blank for anonymous access or runtime credentials. + Enter a string value. Press Enter for the default (""). 
+ secret_access_key> xxxxxxxxxxx + +6. Select endpoint for Leviia. + + / The default endpoint + 1 | Leviia. + \ (s3.leviia.com) + [snip] + endpoint> 1 + +7. Choose acl. + + Note that this ACL is applied when server-side copying objects as S3 + doesn't copy the ACL from the source but rather writes a fresh one. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \ (private) + / Owner gets FULL_CONTROL. + 2 | The AllUsers group gets READ access. + \ (public-read) + [snip] + acl> 1 + Edit advanced config? (y/n) + y) Yes + n) No (default) + y/n> n + Remote config + -------------------- + [leviia] + - type: s3 + - provider: Leviia + - access_key_id: ZnIx.xxxxxxx + - secret_access_key: xxxxxxxx + - endpoint: s3.leviia.com + - acl: private + -------------------- + y) Yes this is OK (default) + e) Edit this remote + d) Delete this remote + y/e/d> y + Current remotes: + + Name Type + ==== ==== + leviia s3 + Liara Here is an example of making a Liara Object Storage configuration. First @@ -23255,4183 +25582,4033 @@ mfs (most free space) as a member of an rclone union remote. See List of backends that do not support rclone about and rclone about -Backblaze B2 +Synology C2 Object Storage -B2 is Backblaze's cloud storage system. +Synology C2 Object Storage provides a secure, S3-compatible, and +cost-effective cloud storage solution without API request, download +fees, and deletion penalty. -Paths are specified as remote:bucket (or remote: for the lsd command.) -You may put subdirectories in too, e.g. remote:bucket/path/to/dir. +The S3 compatible gateway is configured using rclone config with a type +of s3 and with a provider name of Synology. Here is an example run of +the configurator. -Configuration - -Here is an example of making a b2 configuration. First run +First run: rclone config -This will guide you through an interactive setup process. To -authenticate you will either need your Account ID (a short hex number) -and Master Application Key (a long hex number) OR an Application Key, -which is the recommended method. See below for further details on -generating and using an Application Key. - - No remotes found, make a new one? - n) New remote - q) Quit config - n/q> n - name> remote - Type of storage to configure. - Choose a number from below, or type in your own value - [snip] - XX / Backblaze B2 - \ "b2" - [snip] - Storage> b2 - Account ID or Application Key ID - account> 123456789abc - Application Key - key> 0123456789abcdef0123456789abcdef0123456789 - Endpoint for the service - leave blank normally. - endpoint> - Remote config - -------------------- - [remote] - account = 123456789abc - key = 0123456789abcdef0123456789abcdef0123456789 - endpoint = - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y - -This remote is called remote and can now be used like this - -See all buckets - - rclone lsd remote: - -Create a new bucket - - rclone mkdir remote:bucket - -List the contents of a bucket - - rclone ls remote:bucket - -Sync /home/local/directory to the remote bucket, deleting any excess -files in the bucket. - - rclone sync --interactive /home/local/directory remote:bucket - -Application Keys - -B2 supports multiple Application Keys for different access permission to -B2 Buckets. - -You can use these with rclone too; you will need to use rclone version -1.43 or later. 
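As a quick non-interactive alternative, the same application key can be
wired in with rclone config create, putting the applicationKeyId in
account and the Application Key in key. This is only a minimal sketch;
the remote name and credential values below are placeholders:

    rclone config create b2remote b2 \
        account=0123456789ab0000000000001 \
        key=K001xxxxxxxxxxxxxxxxxxxxxxxxxxx
    rclone lsd b2remote: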
-
-Follow Backblaze's docs to create an Application Key with the required
-permission and add the applicationKeyId as the account and the
-Application Key itself as the key.
-
-Note that you must put the applicationKeyId as the account – you can't
-use the master Account ID. If you try then B2 will return 401 errors.
-
---fast-list
-
-This remote supports --fast-list which allows you to use fewer
-transactions in exchange for more memory. See the rclone docs for more
-details.
-
-Modified time
-
-The modified time is stored as metadata on the object as
-X-Bz-Info-src_last_modified_millis as milliseconds since 1970-01-01 in
-the Backblaze standard. Other tools should be able to use this as a
-modified time.
-
-Modified times are used in syncing and are fully supported. Note that
-if a modification time needs to be updated on an object then it will
-create a new version of the object.
-
-Restricted filename characters
-
-In addition to the default restricted characters set the following
-characters are also replaced:
-
-  Character   Value   Replacement
-  ----------- ------- -------------
-  \           0x5C    ＼
-
-Invalid UTF-8 bytes will also be replaced, as they can't be used in
-JSON strings.
-
-Note that in 2020-05 Backblaze started allowing ＼ characters in file
-names. Rclone hasn't changed its encoding as this could cause syncs to
-re-transfer files. If you want rclone not to replace ＼ then see the
---b2-encoding flag below and remove the BackSlash from the string. This
-can be set in the config.
-
-SHA1 checksums
-
-The SHA1 checksums of the files are checked on upload and download and
-will be used in the syncing process.
-
-Large files (bigger than the limit in --b2-upload-cutoff) which are
-uploaded in chunks will store their SHA1 on the object as
-X-Bz-Info-large_file_sha1 as recommended by Backblaze.
-
-For a large file to be uploaded with an SHA1 checksum, the source
-needs to support SHA1 checksums. The local disk supports SHA1
-checksums so large file transfers from local disk will have an SHA1.
-See the overview for exactly which remotes support SHA1.
-
-Sources which don't support SHA1, in particular crypt, will upload
-large files without SHA1 checksums. This may be fixed in the future
-(see #1767).
-
-File sizes below --b2-upload-cutoff will always have an SHA1
-regardless of the source.
-
-Transfers
-
-Backblaze recommends that you do lots of transfers simultaneously for
-maximum speed. In tests from my SSD-equipped laptop the optimum
-setting is about --transfers 32 though higher numbers may be used for
-a slight speed improvement. The optimum number for you may vary
-depending on your hardware, how big the files are, how much you want
-to load your computer, etc. The default of --transfers 4 is definitely
-too low for Backblaze B2 though.
-
-Note that uploading big files (bigger than 200 MiB by default) will
-use a 96 MiB RAM buffer by default. There can be at most --transfers
-of these in use at any moment, so this sets the upper limit on the
-memory used.
-
-Versions
-
-When rclone uploads a new version of a file it creates a new version
-of it. Likewise when you delete a file, the old version will be marked
-hidden and still be available. Conversely, you may opt in to a "hard
-delete" of files with the --b2-hard-delete flag which would
-permanently remove the file instead of hiding it.
-
-Old versions of files, where available, are visible using the
---b2-versions flag.
-
-It is also possible to view a bucket as it was at a certain point in
-time, using the --b2-version-at flag.
This will show the file versions -as they were at that time, showing files that have been deleted -afterwards, and hiding files that were created since. - -If you wish to remove all the old versions then you can use the -rclone cleanup remote:bucket command which will delete all the old -versions of files, leaving the current ones intact. You can also supply -a path and only old versions under that path will be deleted, e.g. -rclone cleanup remote:bucket/path/to/stuff. - -Note that cleanup will remove partially uploaded files from the bucket -if they are more than a day old. - -When you purge a bucket, the current and the old versions will be -deleted then the bucket will be deleted. - -However delete will cause the current versions of the files to become -hidden old versions. - -Here is a session showing the listing and retrieval of an old version -followed by a cleanup of the old versions. - -Show current version and all the versions with --b2-versions flag. - - $ rclone -q ls b2:cleanup-test - 9 one.txt - - $ rclone -q --b2-versions ls b2:cleanup-test - 9 one.txt - 8 one-v2016-07-04-141032-000.txt - 16 one-v2016-07-04-141003-000.txt - 15 one-v2016-07-02-155621-000.txt - -Retrieve an old version - - $ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp - - $ ls -l /tmp/one-v2016-07-04-141003-000.txt - -rw-rw-r-- 1 ncw ncw 16 Jul 2 17:46 /tmp/one-v2016-07-04-141003-000.txt - -Clean up all the old versions and show that they've gone. - - $ rclone -q cleanup b2:cleanup-test - - $ rclone -q ls b2:cleanup-test - 9 one.txt - - $ rclone -q --b2-versions ls b2:cleanup-test - 9 one.txt - -Data usage - -It is useful to know how many requests are sent to the server in -different scenarios. - -All copy commands send the following 4 requests: - - /b2api/v1/b2_authorize_account - /b2api/v1/b2_create_bucket - /b2api/v1/b2_list_buckets - /b2api/v1/b2_list_file_names - -The b2_list_file_names request will be sent once for every 1k files in -the remote path, providing the checksum and modification time of the -listed files. As of version 1.33 issue #818 causes extra requests to be -sent when using B2 with Crypt. When a copy operation does not require -any files to be uploaded, no more requests will be sent. - -Uploading files that do not require chunking, will send 2 requests per -file upload: - - /b2api/v1/b2_get_upload_url - /b2api/v1/b2_upload_file/ - -Uploading files requiring chunking, will send 2 requests (one each to -start and finish the upload) and another 2 requests for each chunk: - - /b2api/v1/b2_start_large_file - /b2api/v1/b2_get_upload_part_url - /b2api/v1/b2_upload_part/ - /b2api/v1/b2_finish_large_file - -Versions - -Versions can be viewed with the --b2-versions flag. When it is set -rclone will show and act on older versions of files. For example - -Listing without --b2-versions - - $ rclone -q ls b2:cleanup-test - 9 one.txt - -And with - - $ rclone -q --b2-versions ls b2:cleanup-test - 9 one.txt - 8 one-v2016-07-04-141032-000.txt - 16 one-v2016-07-04-141003-000.txt - 15 one-v2016-07-02-155621-000.txt - -Showing that the current version is unchanged but older versions can be -seen. These have the UTC date that they were uploaded to the server to -the nearest millisecond appended to them. - -Note that when using --b2-versions no file write operations are -permitted, so you can't upload files or delete them. - -B2 and rclone link - -Rclone supports generating file share links for private B2 buckets. 
They
-can either be for a file, for example:
-
-    ./rclone link B2:bucket/path/to/file.txt
-    https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx
-
-or if run on a directory you will get:
-
-    ./rclone link B2:bucket/path
-    https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx
-
-You can then use the authorization token (the part of the url from the
-?Authorization= on) on any file path under that directory. For example:
-
-    https://f002.backblazeb2.com/file/bucket/path/to/file1?Authorization=xxxxxxxx
-    https://f002.backblazeb2.com/file/bucket/path/file2?Authorization=xxxxxxxx
-    https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxxx
-
-Standard options
-
-Here are the Standard options specific to b2 (Backblaze B2).
-
---b2-account
-
-Account ID or Application Key ID.
-
-Properties:
-
-- Config: account
-- Env Var: RCLONE_B2_ACCOUNT
-- Type: string
-- Required: true
-
---b2-key
-
-Application Key.
-
-Properties:
-
-- Config: key
-- Env Var: RCLONE_B2_KEY
-- Type: string
-- Required: true
-
---b2-hard-delete
-
-Permanently delete files on remote removal, otherwise hide files.
-
-Properties:
-
-- Config: hard_delete
-- Env Var: RCLONE_B2_HARD_DELETE
-- Type: bool
-- Default: false
-
-Advanced options
-
-Here are the Advanced options specific to b2 (Backblaze B2).
-
---b2-endpoint
-
-Endpoint for the service.
-
-Leave blank normally.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_B2_ENDPOINT
-- Type: string
-- Required: false
-
---b2-test-mode
-
-A flag string for X-Bz-Test-Mode header for debugging.
-
-This is for debugging purposes only. Setting it to one of the strings
-below will cause b2 to return specific errors:
-
-- "fail_some_uploads"
-- "expire_some_account_authorization_tokens"
-- "force_cap_exceeded"
-
-These will be set in the "X-Bz-Test-Mode" header which is documented
-in the b2 integrations checklist.
-
-Properties:
-
-- Config: test_mode
-- Env Var: RCLONE_B2_TEST_MODE
-- Type: string
-- Required: false
-
---b2-versions
-
-Include old versions in directory listings.
-
-Note that when using this no file write operations are permitted, so
-you can't upload files or delete them.
-
-Properties:
-
-- Config: versions
-- Env Var: RCLONE_B2_VERSIONS
-- Type: bool
-- Default: false
-
---b2-version-at
-
-Show file versions as they were at the specified time.
-
-Note that when using this no file write operations are permitted, so
-you can't upload files or delete them.
-
-Properties:
-
-- Config: version_at
-- Env Var: RCLONE_B2_VERSION_AT
-- Type: Time
-- Default: off
-
---b2-upload-cutoff
-
-Cutoff for switching to chunked upload.
-
-Files above this size will be uploaded in chunks of "--b2-chunk-size".
-
-This value should be set no larger than 4.657 GiB (== 5 GB).
-
-Properties:
-
-- Config: upload_cutoff
-- Env Var: RCLONE_B2_UPLOAD_CUTOFF
-- Type: SizeSuffix
-- Default: 200Mi
-
---b2-copy-cutoff
-
-Cutoff for switching to multipart copy.
-
-Any files larger than this that need to be server-side copied will be
-copied in chunks of this size.
-
-The minimum is 0 and the maximum is 4.6 GiB.
-
-Properties:
-
-- Config: copy_cutoff
-- Env Var: RCLONE_B2_COPY_CUTOFF
-- Type: SizeSuffix
-- Default: 4Gi
-
---b2-chunk-size
-
-Upload chunk size.
-
-When uploading large files, chunk the file into this size.
-
-Must fit in memory. These chunks are buffered in memory and there
-might be a maximum of "--transfers" chunks in progress at once.
-
-5,000,000 Bytes is the minimum size.
-
-Properties:
-
-- Config: chunk_size
-- Env Var: RCLONE_B2_CHUNK_SIZE
-- Type: SizeSuffix
-- Default: 96Mi
-
---b2-disable-checksum
-
-Disable checksums for large (> upload cutoff) files.
-
-Normally rclone will calculate the SHA1 checksum of the input before
-uploading it so it can add it to metadata on the object. This is great
-for data integrity checking but can cause long delays for large files
-to start uploading.
-
-Properties:
-
-- Config: disable_checksum
-- Env Var: RCLONE_B2_DISABLE_CHECKSUM
-- Type: bool
-- Default: false
-
---b2-download-url
-
-Custom endpoint for downloads.
-
-This is usually set to a Cloudflare CDN URL as Backblaze offers free
-egress for data downloaded through the Cloudflare network. Rclone
-works with private buckets by sending an "Authorization" header. If
-the custom endpoint rewrites the requests for authentication, e.g., in
-Cloudflare Workers, this header needs to be handled properly. Leave
-blank if you want to use the endpoint provided by Backblaze.
-
-The URL provided here SHOULD have the protocol and SHOULD NOT have a
-trailing slash or specify the /file/bucket subpath as rclone will
-request files with "{download_url}/file/{bucket_name}/{path}".
-
-Example: https://mysubdomain.mydomain.tld (no trailing "/", "file" or
-"bucket")
-
-Properties:
-
-- Config: download_url
-- Env Var: RCLONE_B2_DOWNLOAD_URL
-- Type: string
-- Required: false
-
---b2-download-auth-duration
-
-Time before the authorization token will expire in s or suffix
-ms|s|m|h|d.
-
-The duration before the download authorization token will expire. The
-minimum value is 1 second. The maximum value is one week.
-
-Properties:
-
-- Config: download_auth_duration
-- Env Var: RCLONE_B2_DOWNLOAD_AUTH_DURATION
-- Type: Duration
-- Default: 1w
-
---b2-memory-pool-flush-time
-
-How often internal memory buffer pools will be flushed. Uploads which
-require additional buffers (e.g. multipart) will use the memory pool
-for allocations. This option controls how often unused buffers will be
-removed from the pool.
-
-Properties:
-
-- Config: memory_pool_flush_time
-- Env Var: RCLONE_B2_MEMORY_POOL_FLUSH_TIME
-- Type: Duration
-- Default: 1m0s
-
---b2-memory-pool-use-mmap
-
-Whether to use mmap buffers in internal memory pool.
-
-Properties:
-
-- Config: memory_pool_use_mmap
-- Env Var: RCLONE_B2_MEMORY_POOL_USE_MMAP
-- Type: bool
-- Default: false
-
---b2-encoding
-
-The encoding for the backend.
-
-See the encoding section in the overview for more info.
-
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_B2_ENCODING
-- Type: MultiEncoder
-- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
-
-Limitations
-
-rclone about is not supported by the B2 backend. Backends without this
-capability cannot determine free space for an rclone mount or use
-policy mfs (most free space) as a member of an rclone union remote.
-
-See List of backends that do not support rclone about and rclone about
-
-Box
-
-Paths are specified as remote:path
-
-Paths may be as deep as required, e.g. remote:directory/subdirectory.
-
-The initial setup for Box involves getting a token from Box which you
-can do either in your browser, or with a config.json downloaded from
-Box to use JWT authentication. rclone config walks you through it.
-
-Configuration
-
-Here is an example of how to make a remote called remote. First run:
-
-    rclone config
-
-This will guide you through an interactive setup process:
+This will guide you through an interactive setup process.

    No remotes found, make a new one?
n) New remote s) Set configuration password q) Quit config + n/s/q> n - name> remote + + Enter name for new remote.1 + name> syno + Type of storage to configure. - Choose a number from below, or type in your own value - [snip] - XX / Box - \ "box" - [snip] - Storage> box - Box App Client Id - leave blank normally. - client_id> - Box App Client Secret - leave blank normally. - client_secret> - Box App config.json location - Leave blank normally. Enter a string value. Press Enter for the default (""). - box_config_file> - Box App Primary Access Token - Leave blank normally. + Choose a number from below, or type in your own value + + 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, GCS, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi + \ "s3" + + Storage> s3 + + Choose your S3 provider. Enter a string value. Press Enter for the default (""). - access_token> - - Enter a string value. Press Enter for the default ("user"). Choose a number from below, or type in your own value - 1 / Rclone should act on behalf of a user - \ "user" - 2 / Rclone should act on behalf of a service account - \ "enterprise" - box_sub_type> - Remote config - Use web browser to automatically authenticate rclone with remote? - * Say Y if the machine running rclone has a web browser you can use - * Say N if running rclone on a (remote) machine without web browser access - If not sure try Y. If Y failed, try N. - y) Yes - n) No - y/n> y - If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth - Log in and authorize rclone for access - Waiting for code... - Got code - -------------------- - [remote] - client_id = - client_secret = - token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"XXX"} - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y + 24 / Synology C2 Object Storage + \ (Synology) -See the remote setup docs for how to set it up on a machine with no -Internet browser available. + provider> Synology -Note that rclone runs a webserver on your local machine to collect the -token as returned from Box. This only runs from the moment it opens your -browser to the moment you get back the verification code. This is on -http://127.0.0.1:53682/ and this it may require you to unblock it -temporarily if you are running a host firewall. - -Once configured you can then use rclone like this, - -List directories in top level of your Box - - rclone lsd remote: - -List all the files in your Box - - rclone ls remote: - -To copy a local directory to an Box directory called backup - - rclone copy /home/source remote:backup - -Using rclone with an Enterprise account with SSO - -If you have an "Enterprise" account type with Box with single sign on -(SSO), you need to create a password to use Box with rclone. This can be -done at your Enterprise Box account by going to Settings, "Account" Tab, -and then set the password in the "Authentication" field. - -Once you have done this, you can setup your Enterprise Box account using -the same procedure detailed above in the, using the password you have -just set. - -Invalid refresh token - -According to the box docs: - - Each refresh_token is valid for one use in 60 days. 
-
-This means that if you
-
-- Don't use the box remote for 60 days
-- Copy the config file with a box refresh token in and use it in two
-  places
-- Get an error on a token refresh
-
-then rclone will return an error which includes the text
-Invalid refresh token.
-
-To fix this you will need to use oauth2 again to update the refresh
-token. You can use the methods in the remote setup docs, bearing in
-mind that if you use the copy the config file method, you should not
-use that remote on the computer you did the authentication on.
-
-Here is how to do it.
-
-    $ rclone config
-    Current remotes:
-
-    Name                 Type
-    ====                 ====
-    remote               box
-
-    e) Edit existing remote
-    n) New remote
-    d) Delete remote
-    r) Rename remote
-    c) Copy remote
-    s) Set configuration password
-    q) Quit config
-    e/n/d/r/c/s/q> e
-    Choose a number from below, or type in an existing value
-    1 > remote
-    remote> remote
-    --------------------
-    [remote]
-    type = box
-    token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2017-07-08T23:40:08.059167677+01:00"}
-    --------------------
-    Edit remote
-    Value "client_id" = ""
-    Edit? (y/n)>
-    y) Yes
-    n) No
-    y/n> n
-    Value "client_secret" = ""
-    Edit? (y/n)>
-    y) Yes
-    n) No
-    y/n> n
-    Remote config
-    Already have a token - refresh?
-    y) Yes
-    n) No
-    y/n> y
-    Use web browser to automatically authenticate rclone with remote?
-     * Say Y if the machine running rclone has a web browser you can use
-     * Say N if running rclone on a (remote) machine without web browser access
-    If not sure try Y. If Y failed, try N.
-    y) Yes
-    n) No
-    y/n> y
-    If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
-    Log in and authorize rclone for access
-    Waiting for code...
-    Got code
-    --------------------
-    [remote]
-    type = box
-    token = {"access_token":"YYY","token_type":"bearer","refresh_token":"YYY","expiry":"2017-07-23T12:22:29.259137901+01:00"}
-    --------------------
-    y) Yes this is OK
-    e) Edit this remote
-    d) Delete this remote
-    y/e/d> y
-
-Modified time and hashes
-
-Box allows modification times to be set on objects accurate to 1
-second. These will be used to detect whether objects need syncing or
-not.
-
-Box supports SHA1 type hashes, so you can use the --checksum flag.
-
-Restricted filename characters
-
-In addition to the default restricted characters set the following
-characters are also replaced:
-
-  Character   Value   Replacement
-  ----------- ------- -------------
-  \           0x5C    ＼
-
-File names can also not end with the following characters. These only
-get replaced if they are the last character in the name:
-
-  Character   Value   Replacement
-  ----------- ------- -------------
-  SP          0x20    ␠
-
-Invalid UTF-8 bytes will also be replaced, as they can't be used in
-JSON strings.
-
-Transfers
-
-For files above 50 MiB rclone will use a chunked transfer. Rclone will
-upload up to --transfers chunks at the same time (shared among all the
-multipart uploads). Chunks are buffered in memory and are normally 8
-MiB so increasing --transfers will increase memory use.
-
-Deleting files
-
-Depending on the enterprise settings for your user, the item will
-either be actually deleted from Box or moved to the trash.
-
-Emptying the trash is supported via the rclone cleanup command,
-however this deletes every trashed file and folder individually so it
-may take a very long time. Emptying the trash via the WebUI does not
-have this limitation so it is advised to empty the trash via the
-WebUI.
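For example, to empty the trash of the remote configured above you
could run the following. This is just an illustration; as noted, it
removes every trashed item individually and can take a long time on a
large trash:

    rclone cleanup remote: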
- -Root folder ID - -You can set the root_folder_id for rclone. This is the directory -(identified by its Folder ID) that rclone considers to be the root of -your Box drive. - -Normally you will leave this blank and rclone will determine the correct -root to use itself. - -However you can set this to restrict rclone to a specific folder -hierarchy. - -In order to do this you will have to find the Folder ID of the directory -you wish rclone to display. This will be the last segment of the URL -when you open the relevant folder in the Box web interface. - -So if the folder you want rclone to use has a URL which looks like -https://app.box.com/folder/11xxxxxxxxx8 in the browser, then you use -11xxxxxxxxx8 as the root_folder_id in the config. - -Standard options - -Here are the Standard options specific to box (Box). - ---box-client-id - -OAuth Client Id. - -Leave blank normally. - -Properties: - -- Config: client_id -- Env Var: RCLONE_BOX_CLIENT_ID -- Type: string -- Required: false - ---box-client-secret - -OAuth Client Secret. - -Leave blank normally. - -Properties: - -- Config: client_secret -- Env Var: RCLONE_BOX_CLIENT_SECRET -- Type: string -- Required: false - ---box-box-config-file - -Box App config.json location - -Leave blank normally. - -Leading ~ will be expanded in the file name as will environment -variables such as ${RCLONE_CONFIG_DIR}. - -Properties: - -- Config: box_config_file -- Env Var: RCLONE_BOX_BOX_CONFIG_FILE -- Type: string -- Required: false - ---box-access-token - -Box App Primary Access Token - -Leave blank normally. - -Properties: - -- Config: access_token -- Env Var: RCLONE_BOX_ACCESS_TOKEN -- Type: string -- Required: false - ---box-box-sub-type - -Properties: - -- Config: box_sub_type -- Env Var: RCLONE_BOX_BOX_SUB_TYPE -- Type: string -- Default: "user" -- Examples: - - "user" - - Rclone should act on behalf of a user. - - "enterprise" - - Rclone should act on behalf of a service account. - -Advanced options - -Here are the Advanced options specific to box (Box). - ---box-token - -OAuth Access Token as a JSON blob. - -Properties: - -- Config: token -- Env Var: RCLONE_BOX_TOKEN -- Type: string -- Required: false - ---box-auth-url - -Auth server URL. - -Leave blank to use the provider defaults. - -Properties: - -- Config: auth_url -- Env Var: RCLONE_BOX_AUTH_URL -- Type: string -- Required: false - ---box-token-url - -Token server url. - -Leave blank to use the provider defaults. - -Properties: - -- Config: token_url -- Env Var: RCLONE_BOX_TOKEN_URL -- Type: string -- Required: false - ---box-root-folder-id - -Fill in for rclone to use a non root folder as its starting point. - -Properties: - -- Config: root_folder_id -- Env Var: RCLONE_BOX_ROOT_FOLDER_ID -- Type: string -- Default: "0" - ---box-upload-cutoff - -Cutoff for switching to multipart upload (>= 50 MiB). - -Properties: - -- Config: upload_cutoff -- Env Var: RCLONE_BOX_UPLOAD_CUTOFF -- Type: SizeSuffix -- Default: 50Mi - ---box-commit-retries - -Max number of times to try committing a multipart file. - -Properties: - -- Config: commit_retries -- Env Var: RCLONE_BOX_COMMIT_RETRIES -- Type: int -- Default: 100 - ---box-list-chunk - -Size of listing chunk 1-1000. - -Properties: - -- Config: list_chunk -- Env Var: RCLONE_BOX_LIST_CHUNK -- Type: int -- Default: 1000 - ---box-owned-by - -Only show items owned by the login (email address) passed in. - -Properties: - -- Config: owned_by -- Env Var: RCLONE_BOX_OWNED_BY -- Type: string -- Required: false - ---box-encoding - -The encoding for the backend. 
- -See the encoding section in the overview for more info. - -Properties: - -- Config: encoding -- Env Var: RCLONE_BOX_ENCODING -- Type: MultiEncoder -- Default: Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot - -Limitations - -Note that Box is case insensitive so you can't have a file called -"Hello.doc" and one called "hello.doc". - -Box file names can't have the \ character in. rclone maps this to and -from an identical looking unicode equivalent \ (U+FF3C Fullwidth -Reverse Solidus). - -Box only supports filenames up to 255 characters in length. - -Box has API rate limits that sometimes reduce the speed of rclone. - -rclone about is not supported by the Box backend. Backends without this -capability cannot determine free space for an rclone mount or use policy -mfs (most free space) as a member of an rclone union remote. - -See List of backends that do not support rclone about and rclone about - -Cache - -The cache remote wraps another existing remote and stores file structure -and its data for long running tasks like rclone mount. - -Status - -The cache backend code is working but it currently doesn't have a -maintainer so there are outstanding bugs which aren't getting fixed. - -The cache backend is due to be phased out in favour of the VFS caching -layer eventually which is more tightly integrated into rclone. - -Until this happens we recommend only using the cache backend if you find -you can't work without it. There are many docs online describing the use -of the cache backend to minimize API hits and by-and-large these are out -of date and the cache backend isn't needed in those scenarios any more. - -Configuration - -To get started you just need to have an existing remote which can be -configured with cache. - -Here is an example of how to make a remote called test-cache. First run: - - rclone config - -This will guide you through an interactive setup process: - - No remotes found, make a new one? - n) New remote - r) Rename remote - c) Copy remote - s) Set configuration password - q) Quit config - n/r/c/s/q> n - name> test-cache - Type of storage to configure. + Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + Only applies if access_key_id and secret_access_key is blank. + Enter a boolean value (true or false). Press Enter for the default ("false"). Choose a number from below, or type in your own value - [snip] - XX / Cache a remote - \ "cache" - [snip] - Storage> cache - Remote to cache. - Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", - "myremote:bucket" or maybe "myremote:" (not recommended). - remote> local:/test - Optional: The URL of the Plex server - plex_url> http://127.0.0.1:32400 - Optional: The username of the Plex user - plex_username> dummyusername - Optional: The password of the Plex user - y) Yes type in my own password - g) Generate random password - n) No leave this optional password blank - y/g/n> y - Enter the password: - password: - Confirm the password: - password: - The size of a chunk. Lower value good for slow connections but can affect seamless reading. - Default: 5M - Choose a number from below, or type in your own value - 1 / 1 MiB - \ "1M" - 2 / 5 MiB - \ "5M" - 3 / 10 MiB - \ "10M" - chunk_size> 2 - How much time should object info (file size, file hashes, etc.) be stored in cache. Use a very high value if you don't plan on changing the source FS from outside the cache. - Accepted units are: "s", "m", "h". 
-
-    Default: 5m
-    Choose a number from below, or type in your own value
-    1 / 1 hour
-      \ "1h"
-    2 / 24 hours
-      \ "24h"
-    3 / 48 hours
-      \ "48h"
-    info_age> 2
-    The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.
-    Default: 10G
-    Choose a number from below, or type in your own value
-    1 / 500 MiB
-      \ "500M"
-    2 / 1 GiB
-      \ "1G"
-    3 / 10 GiB
-      \ "10G"
-    chunk_total_size> 3
-    Remote config
-    --------------------
-    [test-cache]
-    remote = local:/test
-    plex_url = http://127.0.0.1:32400
-    plex_username = dummyusername
-    plex_password = *** ENCRYPTED ***
-    chunk_size = 5M
-    info_age = 48h
-    chunk_total_size = 10G
-
-You can then use it like this,
-
-List directories in top level of your drive
-
-    rclone lsd test-cache:
-
-List all the files in your drive
-
-    rclone ls test-cache:
-
-To start a cached mount
-
-    rclone mount --allow-other test-cache: /var/tmp/test-cache
-
-Write Features
-
-Offline uploading
-
-In an effort to make writing through cache more reliable, the backend
-now supports this feature which can be activated by specifying a
-cache-tmp-upload-path.
-
-A file goes through these states when using this feature:
-
-1. An upload is started (usually by copying a file on the cache
-   remote)
-2. When the copy to the temporary location is complete the file is
-   part of the cached remote and looks and behaves like any other file
-   (reading included)
-3. After cache-tmp-wait-time passes and the file is next in line,
-   rclone move is used to move the file to the cloud provider
-4. Reading the file still works during the upload but most
-   modifications on it will be prohibited
-5. Once the move is complete the file is unlocked for modifications as
-   it becomes like any other regular file
-6. If the file is being read through cache when it's actually deleted
-   from the temporary path then cache will simply swap the source to
-   the cloud provider without interrupting the reading (a small blip
-   can happen, though)
-
-Files are uploaded in sequence and only one file is uploaded at a
-time. Uploads will be stored in a queue and be processed based on the
-order they were added. The queue and the temporary storage are
-persistent across restarts but can be cleared on startup with the
---cache-db-purge flag.
-
-Write Support
-
-Writes are supported through cache. One caveat is that a mounted cache
-remote does not add any retry or fallback mechanism to the upload
-operation. This will depend on the implementation of the wrapped
-remote. Consider using Offline uploading for reliable writes.
-
-One special case is covered with cache-writes which will cache the
-file data at the same time as the upload when it is enabled, making it
-available from the cache store immediately once the upload is
-finished.
-
-Read Features
-
-Multiple connections
-
-To counter the high latency between a local PC where rclone is running
-and cloud providers, the cache remote can split multiple requests to
-the cloud provider for smaller file chunks and combine them together
-locally where they can be available almost immediately before the
-reader usually needs them.
-
-This is similar to buffering when media files are played online.
-Rclone will stay around the current marker but always try its best to
-stay ahead and prepare the data before it is needed.
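Putting the write and read features above together, a cached mount
that enables offline uploading and raises the number of download
workers might look like the sketch below. The paths and values here
are illustrative only, not recommendations:

    rclone mount --allow-other \
        --cache-workers 8 \
        --cache-tmp-upload-path /var/tmp/cache-upload \
        --cache-tmp-wait-time 1m \
        test-cache: /var/tmp/test-cache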
- -Plex Integration - -There is a direct integration with Plex which allows cache to detect -during reading if the file is in playback or not. This helps cache to -adapt how it queries the cloud provider depending on what is needed for. - -Scans will have a minimum amount of workers (1) while in a confirmed -playback cache will deploy the configured number of workers. - -This integration opens the doorway to additional performance -improvements which will be explored in the near future. - -Note: If Plex options are not configured, cache will function with its -configured options without adapting any of its settings. - -How to enable? Run rclone config and add all the Plex options (endpoint, -username and password) in your remote and it will be automatically -enabled. - -Affected settings: - cache-workers: Configured value during confirmed -playback or 1 all the other times - -Certificate Validation - -When the Plex server is configured to only accept secure connections, it -is possible to use .plex.direct URLs to ensure certificate validation -succeeds. These URLs are used by Plex internally to connect to the Plex -server securely. - -The format for these URLs is the following: - -https://ip-with-dots-replaced.server-hash.plex.direct:32400/ - -The ip-with-dots-replaced part can be any IPv4 address, where the dots -have been replaced with dashes, e.g. 127.0.0.1 becomes 127-0-0-1. - -To get the server-hash part, the easiest way is to visit - -https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token - -This page will list all the available Plex servers for your account with -at least one .plex.direct link for each. Copy one URL and replace the IP -address with the desired address. This can be used as the plex_url -value. - -Known issues - -Mount and --dir-cache-time - ---dir-cache-time controls the first layer of directory caching which -works at the mount layer. Being an independent caching mechanism from -the cache backend, it will manage its own entries based on the -configured time. - -To avoid getting in a scenario where dir cache has obsolete data and -cache would have the correct one, try to set --dir-cache-time to a lower -time than --cache-info-age. Default values are already configured in -this way. - -Windows support - Experimental - -There are a couple of issues with Windows mount functionality that still -require some investigations. It should be considered as experimental -thus far as fixes come in for this OS. - -Most of the issues seem to be related to the difference between -filesystems on Linux flavors and Windows as cache is heavily dependent -on them. - -Any reports or feedback on how cache behaves on this OS is greatly -appreciated. - -- https://github.com/rclone/rclone/issues/1935 -- https://github.com/rclone/rclone/issues/1907 -- https://github.com/rclone/rclone/issues/1834 - -Risk of throttling - -Future iterations of the cache backend will make use of the pooling -functionality of the cloud provider to synchronize and at the same time -make writing through it more tolerant to failures. - -There are a couple of enhancements in track to add these but in the -meantime there is a valid concern that the expiring cache listings can -lead to cloud provider throttles or bans due to repeated queries on it -for very large mounts. 
- -Some recommendations: - don't use a very small interval for entry -information (--cache-info-age) - while writes aren't yet optimised, you -can still write through cache which gives you the advantage of adding -the file in the cache at the same time if configured to do so. - -Future enhancements: - -- https://github.com/rclone/rclone/issues/1937 -- https://github.com/rclone/rclone/issues/1936 - -cache and crypt - -One common scenario is to keep your data encrypted in the cloud provider -using the crypt remote. crypt uses a similar technique to wrap around an -existing remote and handles this translation in a seamless way. - -There is an issue with wrapping the remotes in this order: cloud remote --> crypt -> cache - -During testing, I experienced a lot of bans with the remotes in this -order. I suspect it might be related to how crypt opens files on the -cloud provider which makes it think we're downloading the full file -instead of small chunks. Organizing the remotes in this order yields -better results: cloud remote -> cache -> crypt - -absolute remote paths - -cache can not differentiate between relative and absolute paths for the -wrapped remote. Any path given in the remote config setting and on the -command line will be passed to the wrapped remote as is, but for storing -the chunks on disk the path will be made relative by removing any -leading / character. - -This behavior is irrelevant for most backend types, but there are -backends where a leading / changes the effective directory, e.g. in the -sftp backend paths starting with a / are relative to the root of the SSH -server and paths without are relative to the user home directory. As a -result sftp:bin and sftp:/bin will share the same cache folder, even if -they represent a different directory on the SSH server. - -Cache and Remote Control (--rc) - -Cache supports the new --rc mode in rclone and can be remote controlled -through the following end points: By default, the listener is disabled -if you do not add the flag. - -rc cache/expire - -Purge a remote from the cache backend. Supports either a directory or a -file. It supports both encrypted and unencrypted file names if cache is -wrapped by crypt. - -Params: - remote = path to remote (required) - withData = true/false to -delete cached data (chunks) as well (optional, false by default) - -Standard options - -Here are the Standard options specific to cache (Cache a remote). - ---cache-remote - -Remote to cache. - -Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", -"myremote:bucket" or maybe "myremote:" (not recommended). - -Properties: - -- Config: remote -- Env Var: RCLONE_CACHE_REMOTE -- Type: string -- Required: true - ---cache-plex-url - -The URL of the Plex server. - -Properties: - -- Config: plex_url -- Env Var: RCLONE_CACHE_PLEX_URL -- Type: string -- Required: false - ---cache-plex-username - -The username of the Plex user. - -Properties: - -- Config: plex_username -- Env Var: RCLONE_CACHE_PLEX_USERNAME -- Type: string -- Required: false - ---cache-plex-password - -The password of the Plex user. - -NB Input to this must be obscured - see rclone obscure. - -Properties: - -- Config: plex_password -- Env Var: RCLONE_CACHE_PLEX_PASSWORD -- Type: string -- Required: false - ---cache-chunk-size - -The size of a chunk (partial file data). - -Use lower numbers for slower connections. If the chunk size is changed, -any downloaded chunks will be invalid and cache-chunk-path will need to -be cleared or unexpected EOF errors will occur. 
- -Properties: - -- Config: chunk_size -- Env Var: RCLONE_CACHE_CHUNK_SIZE -- Type: SizeSuffix -- Default: 5Mi -- Examples: - - "1M" - - 1 MiB - - "5M" - - 5 MiB - - "10M" - - 10 MiB - ---cache-info-age - -How long to cache file structure information (directory listings, file -size, times, etc.). If all write operations are done through the cache -then you can safely make this value very large as the cache store will -also be updated in real time. - -Properties: - -- Config: info_age -- Env Var: RCLONE_CACHE_INFO_AGE -- Type: Duration -- Default: 6h0m0s -- Examples: - - "1h" - - 1 hour - - "24h" - - 24 hours - - "48h" - - 48 hours - ---cache-chunk-total-size - -The total size that the chunks can take up on the local disk. - -If the cache exceeds this value then it will start to delete the oldest -chunks until it goes under this value. - -Properties: - -- Config: chunk_total_size -- Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE -- Type: SizeSuffix -- Default: 10Gi -- Examples: - - "500M" - - 500 MiB - - "1G" - - 1 GiB - - "10G" - - 10 GiB - -Advanced options - -Here are the Advanced options specific to cache (Cache a remote). - ---cache-plex-token - -The plex token for authentication - auto set normally. - -Properties: - -- Config: plex_token -- Env Var: RCLONE_CACHE_PLEX_TOKEN -- Type: string -- Required: false - ---cache-plex-insecure - -Skip all certificate verification when connecting to the Plex server. - -Properties: - -- Config: plex_insecure -- Env Var: RCLONE_CACHE_PLEX_INSECURE -- Type: string -- Required: false - ---cache-db-path - -Directory to store file structure metadata DB. - -The remote name is used as the DB file name. - -Properties: - -- Config: db_path -- Env Var: RCLONE_CACHE_DB_PATH -- Type: string -- Default: "$HOME/.cache/rclone/cache-backend" - ---cache-chunk-path - -Directory to cache chunk files. - -Path to where partial file data (chunks) are stored locally. The remote -name is appended to the final path. - -This config follows the "--cache-db-path". If you specify a custom -location for "--cache-db-path" and don't specify one for -"--cache-chunk-path" then "--cache-chunk-path" will use the same path as -"--cache-db-path". - -Properties: - -- Config: chunk_path -- Env Var: RCLONE_CACHE_CHUNK_PATH -- Type: string -- Default: "$HOME/.cache/rclone/cache-backend" - ---cache-db-purge - -Clear all the cached data for this remote on start. - -Properties: - -- Config: db_purge -- Env Var: RCLONE_CACHE_DB_PURGE -- Type: bool -- Default: false - ---cache-chunk-clean-interval - -How often should the cache perform cleanups of the chunk storage. - -The default value should be ok for most people. If you find that the -cache goes over "cache-chunk-total-size" too often then try to lower -this value to force it to perform cleanups more often. - -Properties: - -- Config: chunk_clean_interval -- Env Var: RCLONE_CACHE_CHUNK_CLEAN_INTERVAL -- Type: Duration -- Default: 1m0s - ---cache-read-retries - -How many times to retry a read from a cache storage. - -Since reading from a cache stream is independent from downloading file -data, readers can get to a point where there's no more data in the -cache. Most of the times this can indicate a connectivity issue if cache -isn't able to provide file data anymore. - -For really slow connections, increase this to a point where the stream -is able to provide data but your experience will be very stuttering. 
-
-Properties:
-
-- Config: read_retries
-- Env Var: RCLONE_CACHE_READ_RETRIES
-- Type: int
-- Default: 10
-
---cache-workers
-
-How many workers should run in parallel to download chunks.
-
-Higher values will mean more parallel processing (more CPU needed) and
-more concurrent requests on the cloud provider. This impacts several
-aspects like the cloud provider API limits, more stress on the
-hardware that rclone runs on, but it also means that streams will be
-more fluid and data will be available much faster to readers.
-
-Note: If the optional Plex integration is enabled then this setting
-will adapt to the type of reading performed and the value specified
-here will be used as a maximum number of workers to use.
-
-Properties:
-
-- Config: workers
-- Env Var: RCLONE_CACHE_WORKERS
-- Type: int
-- Default: 4
-
---cache-chunk-no-memory
-
-Disable the in-memory cache for storing chunks during streaming.
-
-By default, cache will keep file data during streaming in RAM as well
-to provide it to readers as fast as possible.
-
-This transient data is evicted as soon as it is read and the number of
-chunks stored doesn't exceed the number of workers. However, depending
-on other settings like "cache-chunk-size" and "cache-workers" this
-footprint can increase if there are parallel streams too (multiple
-files being read at the same time).
-
-If the hardware permits it, use this feature to provide an overall
-better performance during streaming but it can also be disabled if RAM
-is not available on the local machine.
-
-Properties:
-
-- Config: chunk_no_memory
-- Env Var: RCLONE_CACHE_CHUNK_NO_MEMORY
-- Type: bool
-- Default: false
-
---cache-rps
-
-Limits the number of requests per second to the source FS (-1 to
-disable).
-
-This setting places a hard limit on the number of requests per second
-that cache will be doing to the cloud provider remote and try to
-respect that value by setting waits between reads.
-
-If you find that you're getting banned or limited on the cloud
-provider through cache and know that a smaller number of requests per
-second will allow you to work with it then you can use this setting
-for that.
-
-A good balance of all the other settings should make this setting
-useless but it is available to set for more special cases.
-
-NOTE: This will limit the number of requests during streams but other
-API calls to the cloud provider like directory listings will still
-pass.
-
-Properties:
-
-- Config: rps
-- Env Var: RCLONE_CACHE_RPS
-- Type: int
-- Default: -1
-
---cache-writes
-
-Cache file data on writes through the FS.
-
-If you need to read files immediately after you upload them through
-cache you can enable this flag to have their data stored in the cache
-store at the same time during upload.
-
-Properties:
-
-- Config: writes
-- Env Var: RCLONE_CACHE_WRITES
-- Type: bool
-- Default: false
-
---cache-tmp-upload-path
-
-Directory to keep temporary files until they are uploaded.
-
-This is the path that cache will use as temporary storage for new
-files that need to be uploaded to the cloud provider.
-
-Specifying a value will enable this feature. Without it, it is
-completely disabled and files will be uploaded directly to the cloud
-provider.
-
-Properties:
-
-- Config: tmp_upload_path
-- Env Var: RCLONE_CACHE_TMP_UPLOAD_PATH
-- Type: string
-- Required: false
-
---cache-tmp-wait-time
-
-How long should files be stored in local cache before being uploaded.
-
-This is the duration that a file must wait in the temporary location
-cache-tmp-upload-path before it is selected for upload.
-
-Note that only one file is uploaded at a time and it can take longer
-to start the upload if a queue has formed for this purpose.
-
-Properties:
-
-- Config: tmp_wait_time
-- Env Var: RCLONE_CACHE_TMP_WAIT_TIME
-- Type: Duration
-- Default: 15s
-
---cache-db-wait-time
-
-How long to wait for the DB to be available - 0 is unlimited.
-
-Only one process can have the DB open at any one time, so rclone waits
-for this duration for the DB to become available before it gives an
-error.
-
-If you set it to 0 then it will wait forever.
-
-Properties:
-
-- Config: db_wait_time
-- Env Var: RCLONE_CACHE_DB_WAIT_TIME
-- Type: Duration
-- Default: 1s
-
-Backend commands
-
-Here are the commands specific to the cache backend.
-
-Run them with
-
-    rclone backend COMMAND remote:
-
-The help below will explain what arguments each command takes.
-
-See the backend command for more info on how to pass options and
-arguments.
-
-These can be run on a running backend using the rc command
-backend/command.
-
-stats
-
-Print stats on the cache backend in JSON format.
-
-    rclone backend stats remote: [options] [<arguments>+]
-
-Chunker
-
-The chunker overlay transparently splits large files into smaller
-chunks during upload to the wrapped remote and transparently assembles
-them back when the file is downloaded. This allows you to effectively
-overcome size limits imposed by storage providers.
-
-Configuration
-
-To use it, first set up the underlying remote following the
-configuration instructions for that remote. You can also use a local
-pathname instead of a remote.
-
-First check your chosen remote is working - we'll call it remote:path
-here. Note that anything inside remote:path will be chunked and
-anything outside won't. This means that if you are using a
-bucket-based remote (e.g. S3, B2, swift) then you should probably put
-the bucket in the remote s3:bucket.
-
-Now configure chunker using rclone config. We will call this one
-overlay to separate it from the remote itself.
-
-    No remotes found, make a new one?
-    n) New remote
-    s) Set configuration password
-    q) Quit config
-    n/s/q> n
-    name> overlay
-    Type of storage to configure.
-    Choose a number from below, or type in your own value
-    [snip]
-    XX / Transparently chunk/split large files
-       \ "chunker"
-    [snip]
-    Storage> chunker
-    Remote to chunk/unchunk.
-    Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
-    "myremote:bucket" or maybe "myremote:" (not recommended).
-    Enter a string value. Press Enter for the default ("").
-    remote> remote:path
-    Files larger than chunk size will be split in chunks.
-    Enter a size with suffix K,M,G,T. Press Enter for the default ("2G").
-    chunk_size> 100M
-    Choose how chunker handles hash sums. All modes but "none" require metadata.
-    Enter a string value. Press Enter for the default ("md5").
- Choose a number from below, or type in your own value - 1 / Pass any hash supported by wrapped remote for non-chunked files, return nothing otherwise - \ "none" - 2 / MD5 for composite files - \ "md5" - 3 / SHA1 for composite files - \ "sha1" - 4 / MD5 for all files - \ "md5all" - 5 / SHA1 for all files - \ "sha1all" - 6 / Copying a file to chunker will request MD5 from the source falling back to SHA1 if unsupported - \ "md5quick" - 7 / Similar to "md5quick" but prefers SHA1 over MD5 - \ "sha1quick" - hash_type> md5 + + access_key_id> accesskeyid + + AWS Secret Access Key (password) + Leave blank for anonymous access or runtime credentials. + Enter a string value. Press Enter for the default (""). + + secret_access_key> secretaccesskey + + Region where your data stored. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + 1 / Europe Region 1 + \ (eu-001) + 2 / Europe Region 2 + \ (eu-002) + 3 / US Region 1 + \ (us-001) + 4 / US Region 2 + \ (us-002) + 5 / Asia (Taiwan) + \ (tw-001) + + region > 1 + + Option endpoint. + Endpoint for Synology C2 Object Storage API. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + 1 / EU Endpoint 1 + \ (eu-001.s3.synologyc2.net) + 2 / US Endpoint 1 + \ (us-001.s3.synologyc2.net) + 3 / TW Endpoint 1 + \ (tw-001.s3.synologyc2.net) + + endpoint> 1 + + Option location_constraint. + Location constraint - must be set to match the Region. + Leave blank if not sure. Used when creating buckets only. + Enter a value. Press Enter to leave empty. + location_constraint> + Edit advanced config? (y/n) y) Yes n) No - y/n> n - Remote config - -------------------- - [overlay] - type = chunker - remote = remote:bucket - chunk_size = 100M - hash_type = md5 - -------------------- - y) Yes this is OK + y/n> y + + Option no_check_bucket. + If set, don't attempt to check the bucket exists or create it. + This can be useful when trying to minimise the number of transactions + rclone does if you know the bucket exists already. + It can also be needed if the user you are using does not have bucket + creation permissions. Before v1.52.0 this would have passed silently + due to a bug. + Enter a boolean value (true or false). Press Enter for the default (true). + + no_check_bucket> true + + Configuration complete. + Options: + - type: s3 + - provider: Synology + - region: eu-001 + - endpoint: eu-001.s3.synologyc2.net + - no_check_bucket: true + Keep this "syno" remote? + y) Yes this is OK (default) e) Edit this remote d) Delete this remote + y/e/d> y -Specifying the remote - -In normal use, make sure the remote has a : in. If you specify the -remote without a : then rclone will use a local directory of that name. -So if you use a remote of /path/to/secret/files then rclone will chunk -stuff in that directory. If you use a remote of name then rclone will -put files in a directory called name in the current directory. - -Chunking - -When rclone starts a file upload, chunker checks the file size. If it -doesn't exceed the configured chunk size, chunker will just pass the -file to the wrapped remote. If a file is large, chunker will -transparently cut data in pieces with temporary names and stream them -one by one, on the fly. Each data chunk will contain the specified -number of bytes, except for the last one which may have less data. If -file size is unknown in advance (this is called a streaming upload), -chunker will internally create a temporary copy, record its size and -repeat the above process. 
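As an illustration of this scheme, copying a 250 MiB file through the
overlay remote configured above (chunk size 100M) and then listing the
wrapped remote directly might show something like the following once
the upload has completed. The file name is hypothetical, and the small
entry is the metadata object described below:

    rclone copy /path/to/big_file.dat overlay:dir
    rclone ls remote:path/dir
           79 big_file.dat
    104857600 big_file.dat.rclone_chunk.001
    104857600 big_file.dat.rclone_chunk.002
     52428800 big_file.dat.rclone_chunk.003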
- -When upload completes, temporary chunk files are finally renamed. This -scheme guarantees that operations can be run in parallel and look from -outside as atomic. A similar method with hidden temporary chunks is used -for other operations (copy/move/rename, etc.). If an operation fails, -hidden chunks are normally destroyed, and the target composite file -stays intact. - -When a composite file download is requested, chunker transparently -assembles it by concatenating data chunks in order. As the split is -trivial one could even manually concatenate data chunks together to -obtain the original content. - -When the list rclone command scans a directory on wrapped remote, the -potential chunk files are accounted for, grouped and assembled into -composite directory entries. Any temporary chunks are hidden. - -List and other commands can sometimes come across composite files with -missing or invalid chunks, e.g. shadowed by like-named directory or -another file. This usually means that wrapped file system has been -directly tampered with or damaged. If chunker detects a missing chunk it -will by default print warning, skip the whole incomplete group of chunks -but proceed with current command. You can set the --chunker-fail-hard -flag to have commands abort with error message in such cases. - -Chunk names - -The default chunk name format is *.rclone_chunk.###, hence by default -chunk names are BIG_FILE_NAME.rclone_chunk.001, -BIG_FILE_NAME.rclone_chunk.002 etc. You can configure another name -format using the name_format configuration file option. The format uses -asterisk * as a placeholder for the base file name and one or more -consecutive hash characters # as a placeholder for sequential chunk -number. There must be one and only one asterisk. The number of -consecutive hash characters defines the minimum length of a string -representing a chunk number. If decimal chunk number has less digits -than the number of hashes, it is left-padded by zeros. If the decimal -string is longer, it is left intact. By default numbering starts from 1 -but there is another option that allows user to start from 0, e.g. for -compatibility with legacy software. - -For example, if name format is big_*-##.part and original file name is -data.txt and numbering starts from 0, then the first chunk will be named -big_data.txt-00.part, the 99th chunk will be big_data.txt-98.part and -the 302nd chunk will become big_data.txt-301.part. - -Note that list assembles composite directory entries only when chunk -names match the configured format and treats non-conforming file names -as normal non-chunked files. - -When using norename transactions, chunk names will additionally have a -unique file version suffix. For example, -BIG_FILE_NAME.rclone_chunk.001_bp562k. - -Metadata - -Besides data chunks chunker will by default create metadata object for a -composite file. The object is named after the original file. Chunker -allows user to disable metadata completely (the none format). Note that -metadata is normally not created for files smaller than the configured -chunk size. This may change in future rclone releases. - -Simple JSON metadata format - -This is the default format. It supports hash sums and chunk validation -for composite files. 
Meta objects carry the following fields: - -- ver - version of format, currently 1 -- size - total size of composite file -- nchunks - number of data chunks in file -- md5 - MD5 hashsum of composite file (if present) -- sha1 - SHA1 hashsum (if present) -- txn - identifies current version of the file - -There is no field for composite file name as it's simply equal to the -name of meta object on the wrapped remote. Please refer to respective -sections for details on hashsums and modified time handling. - -No metadata - -You can disable meta objects by setting the meta format option to none. -In this mode chunker will scan directory for all files that follow -configured chunk name format, group them by detecting chunks with the -same base name and show group names as virtual composite files. This -method is more prone to missing chunk errors (especially missing last -chunk) than format with metadata enabled. - -Hashsums - -Chunker supports hashsums only when a compatible metadata is present. -Hence, if you choose metadata format of none, chunker will report -hashsum as UNSUPPORTED. - -Please note that by default metadata is stored only for composite files. -If a file is smaller than configured chunk size, chunker will -transparently redirect hash requests to wrapped remote, so support -depends on that. You will see the empty string as a hashsum of requested -type for small files if the wrapped remote doesn't support it. - -Many storage backends support MD5 and SHA1 hash types, so does chunker. -With chunker you can choose one or another but not both. MD5 is set by -default as the most supported type. Since chunker keeps hashes for -composite files and falls back to the wrapped remote hash for -non-chunked ones, we advise you to choose the same hash type as -supported by wrapped remote so that your file listings look coherent. - -If your storage backend does not support MD5 or SHA1 but you need -consistent file hashing, configure chunker with md5all or sha1all. These -two modes guarantee given hash for all files. If wrapped remote doesn't -support it, chunker will then add metadata to all files, even small. -However, this can double the amount of small files in storage and incur -additional service charges. You can even use chunker to force md5/sha1 -support in any other remote at expense of sidecar meta objects by -setting e.g. hash_type=sha1all to force hashsums and chunk_size=1P to -effectively disable chunking. - -Normally, when a file is copied to chunker controlled remote, chunker -will ask the file source for compatible file hash and revert to -on-the-fly calculation if none is found. This involves some CPU overhead -but provides a guarantee that given hashsum is available. Also, chunker -will reject a server-side copy or move operation if source and -destination hashsum types are different resulting in the extra network -bandwidth, too. In some rare cases this may be undesired, so chunker -provides two optional choices: sha1quick and md5quick. If the source -does not support primary hash type and the quick mode is enabled, -chunker will try to fall back to the secondary type. This will save CPU -and bandwidth but can result in empty hashsums at destination. Beware of -consequences: the sync command will revert (sometimes silently) to -time/size comparison if compatible hashsums between source and target -are not found. - -Modified time - -Chunker stores modification times using the wrapped remote so support -depends on that. 
For a small non-chunked file the chunker overlay simply -manipulates modification time of the wrapped remote file. For a -composite file with metadata chunker will get and set modification time -of the metadata object on the wrapped remote. If file is chunked but -metadata format is none then chunker will use modification time of the -first data chunk. - -Migrations - -The idiomatic way to migrate to a different chunk size, hash type, -transaction style or chunk naming scheme is to: - -- Collect all your chunked files under a directory and have your - chunker remote point to it. -- Create another directory (most probably on the same cloud storage) - and configure a new remote with desired metadata format, hash type, - chunk naming etc. -- Now run rclone sync --interactive oldchunks: newchunks: and all your - data will be transparently converted in transfer. This may take some - time, yet chunker will try server-side copy if possible. -- After checking data integrity you may remove configuration section - of the old remote. - -If rclone gets killed during a long operation on a big composite file, -hidden temporary chunks may stay in the directory. They will not be -shown by the list command but will eat up your account quota. Please -note that the deletefile command deletes only active chunks of a file. -As a workaround, you can use remote of the wrapped file system to see -them. An easy way to get rid of hidden garbage is to copy littered -directory somewhere using the chunker remote and purge the original -directory. The copy command will copy only active chunks while the purge -will remove everything including garbage. - -Caveats and Limitations - -Chunker requires wrapped remote to support server-side move (or copy + -delete) operations, otherwise it will explicitly refuse to start. This -is because it internally renames temporary chunk files to their final -names when an operation completes successfully. - -Chunker encodes chunk number in file name, so with default name_format -setting it adds 17 characters. Also chunker adds 7 characters of -temporary suffix during operations. Many file systems limit base file -name without path by 255 characters. Using rclone's crypt remote as a -base file system limits file name by 143 characters. Thus, maximum name -length is 231 for most files and 119 for chunker-over-crypt. A user in -need can change name format to e.g. *.rcc## and save 10 characters -(provided at most 99 chunks per file). - -Note that a move implemented using the copy-and-delete method may incur -double charging with some cloud storage providers. - -Chunker will not automatically rename existing chunks when you run -rclone config on a live remote and change the chunk name format. Beware -that in result of this some files which have been treated as chunks -before the change can pop up in directory listings as normal files and -vice versa. The same warning holds for the chunk size. If you -desperately need to change critical chunking settings, you should run -data migration as described above. - -If wrapped remote is case insensitive, the chunker overlay will inherit -that property (so you can't have a file called "Hello.doc" and -"hello.doc" in the same directory). - -Chunker included in rclone releases up to v1.54 can sometimes fail to -detect metadata produced by recent versions of rclone. We recommend -users to keep rclone up-to-date to avoid data corruption. - -Changing transactions is dangerous and requires explicit migration. 
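+ To make the migration procedure above concrete, a minimal session might
+ look like this (remote names are illustrative; `rclone check` is used to
+ verify integrity before the old configuration is removed):
+ 
+     rclone config                     # set up newchunks: with the desired settings
+     rclone sync --interactive oldchunks: newchunks:
+     rclone check oldchunks: newchunks: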
- 
-Standard options
- 
-Here are the Standard options specific to chunker (Transparently
-chunk/split large files).
- 
---chunker-remote
- 
-Remote to chunk/unchunk.
- 
-Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
-"myremote:bucket" or maybe "myremote:" (not recommended).
- 
-Properties:
- 
-- Config: remote
-- Env Var: RCLONE_CHUNKER_REMOTE
-- Type: string
-- Required: true
- 
---chunker-chunk-size
- 
-Files larger than chunk size will be split in chunks.
+ # Backblaze B2
-Properties:
+ B2 is [Backblaze's cloud storage system](https://www.backblaze.com/b2/).
-- Config: chunk_size
-- Env Var: RCLONE_CHUNKER_CHUNK_SIZE
-- Type: SizeSuffix
-- Default: 2Gi
- 
---chunker-hash-type
- 
-Choose how chunker handles hash sums.
- 
-All modes but "none" require metadata.
+ Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
+ command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
-Properties:
+ ## Configuration
-- Config: hash_type
-- Env Var: RCLONE_CHUNKER_HASH_TYPE
-- Type: string
-- Default: "md5"
-- Examples:
-    - "none"
-        - Pass any hash supported by wrapped remote for non-chunked
-          files.
-        - Return nothing otherwise.
-    - "md5"
-        - MD5 for composite files.
-    - "sha1"
-        - SHA1 for composite files.
-    - "md5all"
-        - MD5 for all files.
-    - "sha1all"
-        - SHA1 for all files.
-    - "md5quick"
-        - Copying a file to chunker will request MD5 from the source.
-        - Falling back to SHA1 if unsupported.
-    - "sha1quick"
-        - Similar to "md5quick" but prefers SHA1 over MD5.
- 
-Advanced options
- 
-Here are the Advanced options specific to chunker (Transparently
-chunk/split large files).
- 
---chunker-name-format
- 
-String format of chunk file names.
- 
-The two placeholders are: base file name (*) and chunk number (#...).
-There must be one and only one asterisk and one or more consecutive hash
-characters. If chunk number has less digits than the number of hashes,
-it is left-padded by zeros. If there are more digits in the number, they
-are left as is. Possible chunk files are ignored if their name does not
-match given format.
+ Here is an example of making a b2 configuration. First run
-Properties:
+     rclone config
-- Config: name_format
-- Env Var: RCLONE_CHUNKER_NAME_FORMAT
-- Type: string
-- Default: "*.rclone_chunk.###"
+ This will guide you through an interactive setup process. To authenticate
+ you will either need your Account ID (a short hex number) and Master
+ Application Key (a long hex number) OR an Application Key, which is the
+ recommended method. See below for further details on generating and using
+ an Application Key.
---chunker-start-from
+No remotes found, make a new one?
+n) New remote
+q) Quit config
+n/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Backblaze B2
+   \ "b2"
+[snip]
+Storage> b2
+Account ID or Application Key ID
+account> 123456789abc
+Application Key
+key> 0123456789abcdef0123456789abcdef0123456789
+Endpoint for the service - leave blank normally.
+endpoint>
+Remote config
+--------------------
+[remote]
+account = 123456789abc
+key = 0123456789abcdef0123456789abcdef0123456789
+endpoint =
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
-Minimum valid chunk number. Usually 0 or 1.
-By default chunk numbers start from 1.
+ This remote is called `remote` and can now be used like this -Properties: + See all buckets -- Config: start_from -- Env Var: RCLONE_CHUNKER_START_FROM -- Type: int -- Default: 1 + rclone lsd remote: ---chunker-meta-format + Create a new bucket -Format of the metadata object or "none". - -By default "simplejson". Metadata is a small JSON file named after the -composite file. - -Properties: - -- Config: meta_format -- Env Var: RCLONE_CHUNKER_META_FORMAT -- Type: string -- Default: "simplejson" -- Examples: - - "none" - - Do not use metadata files at all. - - Requires hash type "none". - - "simplejson" - - Simple JSON supports hash sums and chunk validation. - - - - It has the following fields: ver, size, nchunks, md5, sha1. - ---chunker-fail-hard - -Choose how chunker should handle files with missing or invalid chunks. - -Properties: - -- Config: fail_hard -- Env Var: RCLONE_CHUNKER_FAIL_HARD -- Type: bool -- Default: false -- Examples: - - "true" - - Report errors and abort current command. - - "false" - - Warn user, skip incomplete file and proceed. - ---chunker-transactions - -Choose how chunker should handle temporary files during transactions. - -Properties: - -- Config: transactions -- Env Var: RCLONE_CHUNKER_TRANSACTIONS -- Type: string -- Default: "rename" -- Examples: - - "rename" - - Rename temporary files after a successful transaction. - - "norename" - - Leave temporary file names and write transaction ID to - metadata file. - - Metadata is required for no rename transactions (meta format - cannot be "none"). - - If you are using norename transactions you should be careful - not to downgrade Rclone - - as older versions of Rclone don't support this transaction - style and will misinterpret - - files manipulated by norename transactions. - - This method is EXPERIMENTAL, don't use on production - systems. - - "auto" - - Rename or norename will be used depending on capabilities of - the backend. - - If meta format is set to "none", rename transactions will - always be used. - - This method is EXPERIMENTAL, don't use on production - systems. - -Citrix ShareFile - -Citrix ShareFile is a secure file sharing and transfer service aimed as -business. - -Configuration - -The initial setup for Citrix ShareFile involves getting a token from -Citrix ShareFile which you can in your browser. rclone config walks you -through it. - -Here is an example of how to make a remote called remote. First run: - - rclone config - -This will guide you through an interactive setup process: - - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n - name> remote - Type of storage to configure. - Enter a string value. Press Enter for the default (""). - Choose a number from below, or type in your own value - XX / Citrix Sharefile - \ "sharefile" - Storage> sharefile - ** See help for sharefile backend at: https://rclone.org/sharefile/ ** - - ID of the root folder + rclone mkdir remote:bucket + + List the contents of a bucket + + rclone ls remote:bucket + + Sync `/home/local/directory` to the remote bucket, deleting any + excess files in the bucket. + + rclone sync --interactive /home/local/directory remote:bucket + + ### Application Keys + + B2 supports multiple [Application Keys for different access permission + to B2 Buckets](https://www.backblaze.com/b2/docs/application_keys.html). + + You can use these with rclone too; you will need to use rclone version 1.43 + or later. 
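+ Once you have generated a key (the next paragraphs explain how), the
+ same remote can also be created non-interactively. A sketch using
+ `rclone config create`, with placeholder credentials:
+ 
+     rclone config create remote b2 account=<applicationKeyId> key=<applicationKey>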
+ + Follow Backblaze's docs to create an Application Key with the required + permission and add the `applicationKeyId` as the `account` and the + `Application Key` itself as the `key`. + + Note that you must put the _applicationKeyId_ as the `account` – you + can't use the master Account ID. If you try then B2 will return 401 + errors. + + ### --fast-list + + This remote supports `--fast-list` which allows you to use fewer + transactions in exchange for more memory. See the [rclone + docs](https://rclone.org/docs/#fast-list) for more details. + + ### Modified time + + The modified time is stored as metadata on the object as + `X-Bz-Info-src_last_modified_millis` as milliseconds since 1970-01-01 + in the Backblaze standard. Other tools should be able to use this as + a modified time. + + Modified times are used in syncing and are fully supported. Note that + if a modification time needs to be updated on an object then it will + create a new version of the object. + + ### Restricted filename characters + + In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) + the following characters are also replaced: + + | Character | Value | Replacement | + | --------- |:-----:|:-----------:| + | \ | 0x5C | \ | + + Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), + as they can't be used in JSON strings. + + Note that in 2020-05 Backblaze started allowing \ characters in file + names. Rclone hasn't changed its encoding as this could cause syncs to + re-transfer files. If you want rclone not to replace \ then see the + `--b2-encoding` flag below and remove the `BackSlash` from the + string. This can be set in the config. + + ### SHA1 checksums + + The SHA1 checksums of the files are checked on upload and download and + will be used in the syncing process. + + Large files (bigger than the limit in `--b2-upload-cutoff`) which are + uploaded in chunks will store their SHA1 on the object as + `X-Bz-Info-large_file_sha1` as recommended by Backblaze. + + For a large file to be uploaded with an SHA1 checksum, the source + needs to support SHA1 checksums. The local disk supports SHA1 + checksums so large file transfers from local disk will have an SHA1. + See [the overview](https://rclone.org/overview/#features) for exactly which remotes + support SHA1. + + Sources which don't support SHA1, in particular `crypt` will upload + large files without SHA1 checksums. This may be fixed in the future + (see [#1767](https://github.com/rclone/rclone/issues/1767)). + + Files sizes below `--b2-upload-cutoff` will always have an SHA1 + regardless of the source. + + ### Transfers + + Backblaze recommends that you do lots of transfers simultaneously for + maximum speed. In tests from my SSD equipped laptop the optimum + setting is about `--transfers 32` though higher numbers may be used + for a slight speed improvement. The optimum number for you may vary + depending on your hardware, how big the files are, how much you want + to load your computer, etc. The default of `--transfers 4` is + definitely too low for Backblaze B2 though. + + Note that uploading big files (bigger than 200 MiB by default) will use + a 96 MiB RAM buffer by default. There can be at most `--transfers` of + these in use at any moment, so this sets the upper limit on the memory + used. + + ### Versions + + When rclone uploads a new version of a file it creates a [new version + of it](https://www.backblaze.com/b2/docs/file_versions.html). 
+ 
+ Likewise when you delete a file, the old version will be marked hidden
+ and still be available. Conversely, you may opt in to a "hard delete"
+ of files with the `--b2-hard-delete` flag which would permanently remove
+ the file instead of hiding it.
+ 
+ Old versions of files, where available, are visible using the
+ `--b2-versions` flag.
+ 
+ It is also possible to view a bucket as it was at a certain point in time,
+ using the `--b2-version-at` flag. This will show the file versions as they
+ were at that time, showing files that have been deleted afterwards, and
+ hiding files that were created since.
+ 
+ If you wish to remove all the old versions then you can use the
+ `rclone cleanup remote:bucket` command which will delete all the old
+ versions of files, leaving the current ones intact. You can also
+ supply a path and only old versions under that path will be deleted,
+ e.g. `rclone cleanup remote:bucket/path/to/stuff`.
+ 
+ Note that `cleanup` will remove partially uploaded files from the bucket
+ if they are more than a day old.
+ 
+ When you `purge` a bucket, the current and the old versions will be
+ deleted then the bucket will be deleted.
+ 
+ However `delete` will cause the current versions of the files to
+ become hidden old versions.
+ 
+ Here is a session showing the listing and retrieval of an old
+ version followed by a `cleanup` of the old versions.
+ 
+ Show current version and all the versions with `--b2-versions` flag.
+ 
+$ rclone -q ls b2:cleanup-test
+        9 one.txt
+
+$ rclone -q --b2-versions ls b2:cleanup-test
+        9 one.txt
+        8 one-v2016-07-04-141032-000.txt
+       16 one-v2016-07-04-141003-000.txt
+       15 one-v2016-07-02-155621-000.txt
+ 
+ Retrieve an old version
+ 
+$ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
+
+$ ls -l /tmp/one-v2016-07-04-141003-000.txt
+-rw-rw-r-- 1 ncw ncw 16 Jul  2 17:46 /tmp/one-v2016-07-04-141003-000.txt
+ 
+ Clean up all the old versions and show that they've gone.
+ 
+$ rclone -q cleanup b2:cleanup-test
+
+$ rclone -q ls b2:cleanup-test
+        9 one.txt
+
+$ rclone -q --b2-versions ls b2:cleanup-test
+        9 one.txt
+ 
+ #### Versions naming caveat
+ 
+ When using the `--b2-versions` flag rclone relies on the file name
+ to work out whether the objects are versions or not. Versions' names
+ are created by inserting a timestamp between the file name and its
+ extension.
+ 
+     9 file.txt
+     8 file-v2023-07-17-161032-000.txt
+    16 file-v2023-06-15-141003-000.txt
+ 
+ If there are real files present with the same names as versions, then
+ the behaviour of `--b2-versions` can be unpredictable.
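+ For example, to list the bucket from the session above as it stood at
+ a particular moment, the `--b2-version-at` flag described earlier can
+ be combined with `ls`; a sketch, with an illustrative date:
+ 
+     rclone -q --b2-version-at 2016-07-03 ls b2:cleanup-test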
+ 
+ ### Data usage
+ 
+ It is useful to know how many requests are sent to the server in different scenarios.
+ 
+ All copy commands send the following 4 requests:
+ 
+/b2api/v1/b2_authorize_account
+/b2api/v1/b2_create_bucket
+/b2api/v1/b2_list_buckets
+/b2api/v1/b2_list_file_names
+ 
+ The `b2_list_file_names` request will be sent once for every 1k files
+ in the remote path, providing the checksum and modification time of
+ the listed files. As of version 1.33 issue
+ [#818](https://github.com/rclone/rclone/issues/818) causes extra requests
+ to be sent when using B2 with Crypt. When a copy operation does not
+ require any files to be uploaded, no more requests will be sent.
+ 
+ Uploading files that do not require chunking will send 2 requests per
+ file upload:
+ 
+/b2api/v1/b2_get_upload_url
+/b2api/v1/b2_upload_file/
+ 
+ Uploading files requiring chunking will send 2 requests (one each to
+ start and finish the upload) and another 2 requests for each chunk:
+ 
+/b2api/v1/b2_start_large_file
+/b2api/v1/b2_get_upload_part_url
+/b2api/v1/b2_upload_part/
+/b2api/v1/b2_finish_large_file
+ 
+ #### Versions
+ 
+ Versions can be viewed with the `--b2-versions` flag. When it is set
+ rclone will show and act on older versions of files. For example
+ 
+ Listing without `--b2-versions`
+ 
+$ rclone -q ls b2:cleanup-test
+        9 one.txt
+ 
+ And with
+ 
+$ rclone -q --b2-versions ls b2:cleanup-test
+        9 one.txt
+        8 one-v2016-07-04-141032-000.txt
+       16 one-v2016-07-04-141003-000.txt
+       15 one-v2016-07-02-155621-000.txt
+ 
+ Showing that the current version is unchanged but older versions can
+ be seen. These have the UTC date that they were uploaded to the
+ server to the nearest millisecond appended to them.
+ 
+ Note that when using `--b2-versions` no file write operations are
+ permitted, so you can't upload files or delete them.
+ 
+ ### B2 and rclone link
+ 
+ Rclone supports generating file share links for private B2 buckets.
+ They can either be for a file for example:
+ 
+./rclone link B2:bucket/path/to/file.txt
+https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx
+ 
+ or if run on a directory you will get:
+ 
+./rclone link B2:bucket/path
+https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx
+ 
+ you can then use the authorization token (the part of the url from the
+ `?Authorization=` on) on any file path under that directory. For example:
+ 
+https://f002.backblazeb2.com/file/bucket/path/to/file1?Authorization=xxxxxxxx
+https://f002.backblazeb2.com/file/bucket/path/file2?Authorization=xxxxxxxx
+https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxxx
+ 
+ 
+ ### Standard options
+ 
+ Here are the Standard options specific to b2 (Backblaze B2).
+ 
+ #### --b2-account
+ 
+ Account ID or Application Key ID.
+ 
+ Properties:
+ 
+ - Config: account
+ - Env Var: RCLONE_B2_ACCOUNT
+ - Type: string
+ - Required: true
+ 
+ #### --b2-key
+ 
+ Application Key.
+ 
+ Properties:
+ 
+ - Config: key
+ - Env Var: RCLONE_B2_KEY
+ - Type: string
+ - Required: true
+ 
+ #### --b2-hard-delete
+ 
+ Permanently delete files on remote removal, otherwise hide files.
+ 
+ Properties:
+ 
+ - Config: hard_delete
+ - Env Var: RCLONE_B2_HARD_DELETE
+ - Type: bool
+ - Default: false
+ 
+ ### Advanced options
+ 
+ Here are the Advanced options specific to b2 (Backblaze B2).
+ 
+ #### --b2-endpoint
+ 
+ Endpoint for the service.
+ 
+ Leave blank normally.
+ 
+ Properties:
+ 
+ - Config: endpoint
+ - Env Var: RCLONE_B2_ENDPOINT
+ - Type: string
+ - Required: false
+ 
+ #### --b2-test-mode
+ 
+ A flag string for X-Bz-Test-Mode header for debugging.
+ 
+ This is for debugging purposes only. Setting it to one of the strings
+ below will cause b2 to return specific errors:
+ 
+   * "fail_some_uploads"
+   * "expire_some_account_authorization_tokens"
+   * "force_cap_exceeded"
+ 
+ These will be set in the "X-Bz-Test-Mode" header which is documented
+ in the [b2 integrations checklist](https://www.backblaze.com/b2/docs/integration_checklist.html).
+ 
+ Properties:
+ 
+ - Config: test_mode
+ - Env Var: RCLONE_B2_TEST_MODE
+ - Type: string
+ - Required: false
+ 
+ #### --b2-versions
+ 
+ Include old versions in directory listings.
+ + Note that when using this no file write operations are permitted, + so you can't upload files or delete them. + + Properties: + + - Config: versions + - Env Var: RCLONE_B2_VERSIONS + - Type: bool + - Default: false + + #### --b2-version-at + + Show file versions as they were at the specified time. + + Note that when using this no file write operations are permitted, + so you can't upload files or delete them. + + Properties: + + - Config: version_at + - Env Var: RCLONE_B2_VERSION_AT + - Type: Time + - Default: off + + #### --b2-upload-cutoff + + Cutoff for switching to chunked upload. + + Files above this size will be uploaded in chunks of "--b2-chunk-size". + + This value should be set no larger than 4.657 GiB (== 5 GB). + + Properties: + + - Config: upload_cutoff + - Env Var: RCLONE_B2_UPLOAD_CUTOFF + - Type: SizeSuffix + - Default: 200Mi + + #### --b2-copy-cutoff + + Cutoff for switching to multipart copy. + + Any files larger than this that need to be server-side copied will be + copied in chunks of this size. + + The minimum is 0 and the maximum is 4.6 GiB. + + Properties: + + - Config: copy_cutoff + - Env Var: RCLONE_B2_COPY_CUTOFF + - Type: SizeSuffix + - Default: 4Gi + + #### --b2-chunk-size + + Upload chunk size. + + When uploading large files, chunk the file into this size. + + Must fit in memory. These chunks are buffered in memory and there + might a maximum of "--transfers" chunks in progress at once. + + 5,000,000 Bytes is the minimum size. + + Properties: + + - Config: chunk_size + - Env Var: RCLONE_B2_CHUNK_SIZE + - Type: SizeSuffix + - Default: 96Mi + + #### --b2-upload-concurrency + + Concurrency for multipart uploads. + + This is the number of chunks of the same file that are uploaded + concurrently. + + Note that chunks are stored in memory and there may be up to + "--transfers" * "--b2-upload-concurrency" chunks stored at once + in memory. + + Properties: + + - Config: upload_concurrency + - Env Var: RCLONE_B2_UPLOAD_CONCURRENCY + - Type: int + - Default: 16 + + #### --b2-disable-checksum + + Disable checksums for large (> upload cutoff) files. + + Normally rclone will calculate the SHA1 checksum of the input before + uploading it so it can add it to metadata on the object. This is great + for data integrity checking but can cause long delays for large files + to start uploading. + + Properties: + + - Config: disable_checksum + - Env Var: RCLONE_B2_DISABLE_CHECKSUM + - Type: bool + - Default: false + + #### --b2-download-url + + Custom endpoint for downloads. + + This is usually set to a Cloudflare CDN URL as Backblaze offers + free egress for data downloaded through the Cloudflare network. + Rclone works with private buckets by sending an "Authorization" header. + If the custom endpoint rewrites the requests for authentication, + e.g., in Cloudflare Workers, this header needs to be handled properly. + Leave blank if you want to use the endpoint provided by Backblaze. + + The URL provided here SHOULD have the protocol and SHOULD NOT have + a trailing slash or specify the /file/bucket subpath as rclone will + request files with "{download_url}/file/{bucket_name}/{path}". + + Example: + > https://mysubdomain.mydomain.tld + (No trailing "/", "file" or "bucket") + + Properties: + + - Config: download_url + - Env Var: RCLONE_B2_DOWNLOAD_URL + - Type: string + - Required: false + + #### --b2-download-auth-duration + + Time before the authorization token will expire in s or suffix ms|s|m|h|d. + + The duration before the download authorization token will expire. 
+ 
+ The minimum value is 1 second. The maximum value is one week.
+ 
+ Properties:
+ 
+ - Config: download_auth_duration
+ - Env Var: RCLONE_B2_DOWNLOAD_AUTH_DURATION
+ - Type: Duration
+ - Default: 1w
+ 
+ #### --b2-memory-pool-flush-time
+ 
+ How often internal memory buffer pools will be flushed. (no longer used)
+ 
+ Properties:
+ 
+ - Config: memory_pool_flush_time
+ - Env Var: RCLONE_B2_MEMORY_POOL_FLUSH_TIME
+ - Type: Duration
+ - Default: 1m0s
+ 
+ #### --b2-memory-pool-use-mmap
+ 
+ Whether to use mmap buffers in internal memory pool. (no longer used)
+ 
+ Properties:
+ 
+ - Config: memory_pool_use_mmap
+ - Env Var: RCLONE_B2_MEMORY_POOL_USE_MMAP
+ - Type: bool
+ - Default: false
+ 
+ #### --b2-encoding
+ 
+ The encoding for the backend.
+ 
+ See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+ 
+ Properties:
+ 
+ - Config: encoding
+ - Env Var: RCLONE_B2_ENCODING
+ - Type: MultiEncoder
+ - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+ 
+ 
+ 
+ ## Limitations
+ 
+ `rclone about` is not supported by the B2 backend. Backends without
+ this capability cannot determine free space for an rclone mount or
+ use policy `mfs` (most free space) as a member of an rclone union
+ remote.
+ 
+ See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
+ 
+ # Box
+ 
+ Paths are specified as `remote:path`
+ 
+ Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
+ 
+ The initial setup for Box involves getting a token from Box which you
+ can do either in your browser, or with a config.json downloaded from Box
+ to use JWT authentication. `rclone config` walks you through it.
+ 
+ ## Configuration
+ 
+ Here is an example of how to make a remote called `remote`. First run:
+ 
+     rclone config
+ 
+ This will guide you through an interactive setup process:
+ 
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Box
+   \ "box"
+[snip]
+Storage> box
+Box App Client Id - leave blank normally.
+client_id>
+Box App Client Secret - leave blank normally.
+client_secret>
+Box App config.json location
+Leave blank normally.
+Enter a string value. Press Enter for the default ("").
+box_config_file>
+Box App Primary Access Token
+Leave blank normally.
+Enter a string value. Press Enter for the default ("").
+access_token>
+
+Enter a string value. Press Enter for the default ("user").
+Choose a number from below, or type in your own value
+ 1 / Rclone should act on behalf of a user
+   \ "user"
+ 2 / Rclone should act on behalf of a service account
+   \ "enterprise"
+box_sub_type>
+Remote config
+Use web browser to automatically authenticate rclone with remote?
+ * Say Y if the machine running rclone has a web browser you can use
+ * Say N if running rclone on a (remote) machine without web browser access
+If not sure try Y. If Y failed, try N.
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[remote]
+client_id =
+client_secret =
+token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"XXX"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+ 
+ See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
+ machine with no Internet browser available.
+ 
+ Note that rclone runs a webserver on your local machine to collect the
+ token as returned from Box. This only runs from the moment it opens
+ your browser to the moment you get back the verification code. This
+ is on `http://127.0.0.1:53682/` and it may require you to unblock
+ it temporarily if you are running a host firewall.
+ 
+ Once configured you can then use `rclone` like this,
+ 
+ List directories in top level of your Box
+ 
+     rclone lsd remote:
+ 
+ List all the files in your Box
+ 
+     rclone ls remote:
+ 
+ To copy a local directory to a Box directory called backup
+ 
+     rclone copy /home/source remote:backup
+ 
+ ### Using rclone with an Enterprise account with SSO
+ 
+ If you have an "Enterprise" account type with Box with single sign on
+ (SSO), you need to create a password to use Box with rclone. This can
+ be done at your Enterprise Box account by going to Settings, "Account"
+ Tab, and then setting the password in the "Authentication" field.
+ 
+ Once you have done this, you can set up your Enterprise Box account
+ using the same procedure detailed above, using the password you have
+ just set.
+ 
+ ### Invalid refresh token
+ 
+ According to the [box docs](https://developer.box.com/v2.0/docs/oauth-20#section-6-using-the-access-and-refresh-tokens):
+ 
+ > Each refresh_token is valid for one use in 60 days.
+ 
+ This means that if you
+ 
+ * Don't use the box remote for 60 days
+ * Copy the config file with a box refresh token in and use it in two places
+ * Get an error on a token refresh
+ 
+ then rclone will return an error which includes the text `Invalid
+ refresh token`.
+ 
+ To fix this you will need to use oauth2 again to update the refresh
+ token. You can use the methods in [the remote setup
+ docs](https://rclone.org/remote_setup/), bearing in mind that if you use
+ the copy-the-config-file method, you should not use that remote on the
+ computer you did the authentication on.
+ 
+ Here is how to do it.
+ 
+$ rclone config
+Current remotes:
+
+Name                 Type
+====                 ====
+remote               box
+
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> e
+Choose a number from below, or type in an existing value
+ 1 > remote
+remote> remote
+--------------------
+[remote]
+type = box
+token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2017-07-08T23:40:08.059167677+01:00"}
+--------------------
+Edit remote
+Value "client_id" = ""
+Edit? (y/n)>
+y) Yes
+n) No
+y/n> n
+Value "client_secret" = ""
+Edit? (y/n)>
+y) Yes
+n) No
+y/n> n
+Remote config
+Already have a token - refresh?
+y) Yes
+n) No
+y/n> y
+Use web browser to automatically authenticate rclone with remote?
+ * Say Y if the machine running rclone has a web browser you can use
+ * Say N if running rclone on a (remote) machine without web browser access
+If not sure try Y. If Y failed, try N.
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[remote]
+type = box
+token = {"access_token":"YYY","token_type":"bearer","refresh_token":"YYY","expiry":"2017-07-23T12:22:29.259137901+01:00"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
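+ On recent versions of rclone the token can also be refreshed without
+ walking through the edit dialogue above; a sketch, assuming the remote
+ is called `remote`:
+ 
+     rclone config reconnect remote: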
+ 
+ 
+ ### Modified time and hashes
+ 
+ Box allows modification times to be set on objects accurate to 1
+ second. These will be used to detect whether objects need syncing or
+ not.
+ 
+ Box supports SHA1 type hashes, so you can use the `--checksum`
+ flag.
+ 
+ ### Restricted filename characters
+ 
+ In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
+ the following characters are also replaced:
+ 
+ | Character | Value | Replacement |
+ | --------- |:-----:|:-----------:|
+ | \         | 0x5C  | ＼          |
+ 
+ File names can also not end with the following characters.
+ These only get replaced if they are the last character in the name:
+ 
+ | Character | Value | Replacement |
+ | --------- |:-----:|:-----------:|
+ | SP        | 0x20  | ␠           |
+ 
+ Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+ as they can't be used in JSON strings.
+ 
+ ### Transfers
+ 
+ For files above 50 MiB rclone will use a chunked transfer. Rclone will
+ upload up to `--transfers` chunks at the same time (shared among all
+ the multipart uploads). Chunks are buffered in memory and are
+ normally 8 MiB so increasing `--transfers` will increase memory use.
+ 
+ ### Deleting files
+ 
+ Depending on the enterprise settings for your user, the item will
+ either be actually deleted from Box or moved to the trash.
+ 
+ Emptying the trash is supported via the rclone `cleanup` command,
+ however this deletes every trashed file and folder individually so it
+ may take a very long time.
+ Emptying the trash via the WebUI does not have this limitation
+ so it is advised to empty the trash via the WebUI.
+ 
+ ### Root folder ID
+ 
+ You can set the `root_folder_id` for rclone. This is the directory
+ (identified by its `Folder ID`) that rclone considers to be the root
+ of your Box drive.
+ 
+ Normally you will leave this blank and rclone will determine the
+ correct root to use itself.
+ 
+ However you can set this to restrict rclone to a specific folder
+ hierarchy.
+ 
+ In order to do this you will have to find the `Folder ID` of the
+ directory you wish rclone to display. This will be the last segment
+ of the URL when you open the relevant folder in the Box web
+ interface.
+ 
+ So if the folder you want rclone to use has a URL which looks like
+ `https://app.box.com/folder/11xxxxxxxxx8`
+ in the browser, then you use `11xxxxxxxxx8` as
+ the `root_folder_id` in the config.
+ 
+ 
+ ### Standard options
+ 
+ Here are the Standard options specific to box (Box).
+ 
+ #### --box-client-id
+ 
+ OAuth Client Id.
+ 
+ Leave blank normally.
+ 
+ Properties:
+ 
+ - Config: client_id
+ - Env Var: RCLONE_BOX_CLIENT_ID
+ - Type: string
+ - Required: false
+ 
+ #### --box-client-secret
+ 
+ OAuth Client Secret.
+ 
+ Leave blank normally.
+ 
+ Properties:
+ 
+ - Config: client_secret
+ - Env Var: RCLONE_BOX_CLIENT_SECRET
+ - Type: string
+ - Required: false
+ 
+ #### --box-box-config-file
+ 
+ Box App config.json location
+ 
+ Leave blank normally.
+ 
+ Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
+ + Properties: + + - Config: box_config_file + - Env Var: RCLONE_BOX_BOX_CONFIG_FILE + - Type: string + - Required: false + + #### --box-access-token + + Box App Primary Access Token + + Leave blank normally. + + Properties: + + - Config: access_token + - Env Var: RCLONE_BOX_ACCESS_TOKEN + - Type: string + - Required: false + + #### --box-box-sub-type + + + + Properties: + + - Config: box_sub_type + - Env Var: RCLONE_BOX_BOX_SUB_TYPE + - Type: string + - Default: "user" + - Examples: + - "user" + - Rclone should act on behalf of a user. + - "enterprise" + - Rclone should act on behalf of a service account. + + ### Advanced options + + Here are the Advanced options specific to box (Box). + + #### --box-token + + OAuth Access Token as a JSON blob. + + Properties: + + - Config: token + - Env Var: RCLONE_BOX_TOKEN + - Type: string + - Required: false + + #### --box-auth-url + + Auth server URL. + + Leave blank to use the provider defaults. + + Properties: + + - Config: auth_url + - Env Var: RCLONE_BOX_AUTH_URL + - Type: string + - Required: false + + #### --box-token-url + + Token server url. + + Leave blank to use the provider defaults. + + Properties: + + - Config: token_url + - Env Var: RCLONE_BOX_TOKEN_URL + - Type: string + - Required: false + + #### --box-root-folder-id + + Fill in for rclone to use a non root folder as its starting point. + + Properties: + + - Config: root_folder_id + - Env Var: RCLONE_BOX_ROOT_FOLDER_ID + - Type: string + - Default: "0" + + #### --box-upload-cutoff + + Cutoff for switching to multipart upload (>= 50 MiB). + + Properties: + + - Config: upload_cutoff + - Env Var: RCLONE_BOX_UPLOAD_CUTOFF + - Type: SizeSuffix + - Default: 50Mi + + #### --box-commit-retries + + Max number of times to try committing a multipart file. + + Properties: + + - Config: commit_retries + - Env Var: RCLONE_BOX_COMMIT_RETRIES + - Type: int + - Default: 100 + + #### --box-list-chunk + + Size of listing chunk 1-1000. + + Properties: + + - Config: list_chunk + - Env Var: RCLONE_BOX_LIST_CHUNK + - Type: int + - Default: 1000 + + #### --box-owned-by + + Only show items owned by the login (email address) passed in. + + Properties: + + - Config: owned_by + - Env Var: RCLONE_BOX_OWNED_BY + - Type: string + - Required: false + + #### --box-impersonate + + Impersonate this user ID when using a service account. + + Settng this flag allows rclone, when using a JWT service account, to + act on behalf of another user by setting the as-user header. + + The user ID is the Box identifier for a user. User IDs can found for + any user via the GET /users endpoint, which is only available to + admins, or by calling the GET /users/me endpoint with an authenticated + user session. + + See: https://developer.box.com/guides/authentication/jwt/as-user/ + + + Properties: + + - Config: impersonate + - Env Var: RCLONE_BOX_IMPERSONATE + - Type: string + - Required: false + + #### --box-encoding + + The encoding for the backend. + + See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + + Properties: + + - Config: encoding + - Env Var: RCLONE_BOX_ENCODING + - Type: MultiEncoder + - Default: Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot + + + + ## Limitations + + Note that Box is case insensitive so you can't have a file called + "Hello.doc" and one called "hello.doc". + + Box file names can't have the `\` character in. rclone maps this to + and from an identical looking unicode equivalent `\` (U+FF3C Fullwidth + Reverse Solidus). 
+ 
+ Box only supports filenames up to 255 characters in length.
+ 
+ Box has [API rate limits](https://developer.box.com/guides/api-calls/permissions-and-errors/rate-limits/) that sometimes reduce the speed of rclone.
+ 
+ `rclone about` is not supported by the Box backend. Backends without
+ this capability cannot determine free space for an rclone mount or
+ use policy `mfs` (most free space) as a member of an rclone union
+ remote.
+ 
+ See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
+ 
+ ## Get your own Box App ID
+ 
+ Here is how to create your own Box App ID for rclone:
+ 
+ 1. Go to the [Box Developer Console](https://app.box.com/developers/console)
+ and login, then click `My Apps` on the sidebar. Click `Create New App`
+ and select `Custom App`.
+ 
+ 2. In the first screen on the box that pops up, you can pretty much enter
+ whatever you want. The `App Name` can be whatever. For `Purpose` choose
+ automation to avoid having to fill out anything else. Click `Next`.
+ 
+ 3. In the second screen of the creation screen, select
+ `User Authentication (OAuth 2.0)`. Then click `Create App`.
+ 
+ 4. You should now be on the `Configuration` tab of your new app. If not,
+ click on it at the top of the webpage. Copy down `Client ID`
+ and `Client Secret`, you'll need those for rclone.
+ 
+ 5. Under "OAuth 2.0 Redirect URI", add `http://127.0.0.1:53682/`
+ 
+ 6. For `Application Scopes`, select `Read all files and folders stored in Box`
+ and `Write all files and folders stored in box` (assuming you want to do both).
+ Leave others unchecked. Click `Save Changes` at the top right.
+ 
+ # Cache
+ 
+ The `cache` remote wraps another existing remote and stores file structure
+ and its data for long running tasks like `rclone mount`.
+ 
+ ## Status
+ 
+ The cache backend code is working but it currently doesn't
+ have a maintainer so there are [outstanding bugs](https://github.com/rclone/rclone/issues?q=is%3Aopen+is%3Aissue+label%3Abug+label%3A%22Remote%3A+Cache%22) which aren't getting fixed.
+ 
+ The cache backend is due to be phased out in favour of the VFS caching
+ layer eventually which is more tightly integrated into rclone.
+ 
+ Until this happens we recommend only using the cache backend if you
+ find you can't work without it. There are many docs online describing
+ the use of the cache backend to minimize API hits and by-and-large
+ these are out of date and the cache backend isn't needed in those
+ scenarios any more.
+ 
+ ## Configuration
+ 
+ To get started you just need to have an existing remote which can be configured
+ with `cache`.
+ 
+ Here is an example of how to make a remote called `test-cache`. First run:
+ 
+     rclone config
+ 
+ This will guide you through an interactive setup process:
+ 
+No remotes found, make a new one?
+n) New remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+n/r/c/s/q> n
+name> test-cache
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Cache a remote
+   \ "cache"
+[snip]
+Storage> cache
+Remote to cache.
+Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
+"myremote:bucket" or maybe "myremote:" (not recommended).
+remote> local:/test
+Optional: The URL of the Plex server
+plex_url> http://127.0.0.1:32400
+Optional: The username of the Plex user
+plex_username> dummyusername
+Optional: The password of the Plex user
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank
+y/g/n> y
+Enter the password:
+password:
+Confirm the password:
+password:
+The size of a chunk. Lower value good for slow connections but can affect seamless reading.
+Default: 5M
+Choose a number from below, or type in your own value
+ 1 / 1 MiB
+   \ "1M"
+ 2 / 5 MiB
+   \ "5M"
+ 3 / 10 MiB
+   \ "10M"
+chunk_size> 2
+How much time should object info (file size, file hashes, etc.) be stored in cache.
+Use a very high value if you don't plan on changing the source FS from outside the cache.
+Accepted units are: "s", "m", "h".
+Default: 5m
+Choose a number from below, or type in your own value
+ 1 / 1 hour
+   \ "1h"
+ 2 / 24 hours
+   \ "24h"
+ 3 / 48 hours
+   \ "48h"
+info_age> 3
+The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.
+Default: 10G
+Choose a number from below, or type in your own value
+ 1 / 500 MiB
+   \ "500M"
+ 2 / 1 GiB
+   \ "1G"
+ 3 / 10 GiB
+   \ "10G"
+chunk_total_size> 3
+Remote config
+--------------------
+[test-cache]
+remote = local:/test
+plex_url = http://127.0.0.1:32400
+plex_username = dummyusername
+plex_password = *** ENCRYPTED ***
+chunk_size = 5M
+info_age = 48h
+chunk_total_size = 10G
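+ The same remote could also be created non-interactively; a sketch
+ using `rclone config create` with the values from the session above
+ (the optional Plex settings are omitted):
+ 
+     rclone config create test-cache cache remote=local:/test \
+         chunk_size=5M info_age=48h chunk_total_size=10G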
+ 
+ You can then use it like this,
+ 
+ List directories in top level of your drive
+ 
+     rclone lsd test-cache:
+ 
+ List all the files in your drive
+ 
+     rclone ls test-cache:
+ 
+ To start a cached mount
+ 
+     rclone mount --allow-other test-cache: /var/tmp/test-cache
+ 
+ ### Write Features ###
+ 
+ ### Offline uploading ###
+ 
+ In an effort to make writing through cache more reliable, the backend
+ now supports this feature which can be activated by specifying a
+ `cache-tmp-upload-path`.
+ 
+ A file goes through these states when using this feature:
+ 
+ 1. An upload is started (usually by copying a file on the cache remote)
+ 2. When the copy to the temporary location is complete the file is part
+ of the cached remote and looks and behaves like any other file (reading included)
+ 3. After `cache-tmp-wait-time` passes and the file is next in line, `rclone move`
+ is used to move the file to the cloud provider
+ 4. Reading the file still works during the upload but most modifications on it will be prohibited
+ 5. Once the move is complete the file is unlocked for modifications as it
+ becomes like any other regular file
+ 6. If the file is being read through `cache` when it's actually
+ deleted from the temporary path then `cache` will simply swap the source
+ to the cloud provider without interrupting the reading (a small blip can happen though)
+ 
+ Files are uploaded in sequence and only one file is uploaded at a time.
+ Uploads will be stored in a queue and be processed based on the order they were added.
+ The queue and the temporary storage are persistent across restarts but
+ can be cleared on startup with the `--cache-db-purge` flag.
+ 
+ ### Write Support ###
+ 
+ Writes are supported through `cache`.
+ One caveat is that a mounted cache remote does not add any retry or fallback
+ mechanism to the upload operation. This will depend on the implementation
+ of the wrapped remote. Consider using `Offline uploading` for reliable writes.
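+ For instance, offline uploading could be enabled on the mount from the
+ example above like this (the paths and wait time are illustrative):
+ 
+     rclone mount --allow-other test-cache: /var/tmp/test-cache \
+         --cache-tmp-upload-path /var/tmp/cache-upload \
+         --cache-tmp-wait-time 1m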
+ 
+ One special case is covered with `cache-writes` which, when enabled, will
+ cache the file data at the same time as the upload, making it available
+ from the cache store immediately once the upload is finished.
+ 
+ ### Read Features ###
+ 
+ #### Multiple connections ####
+ 
+ To counter the high latency between a local PC where rclone is running
+ and cloud providers, the cache remote can split a read into multiple
+ requests to the cloud provider for smaller file chunks and combine them
+ together locally, making them available almost immediately before the
+ reader usually needs them.
+ 
+ This is similar to buffering when media files are played online. Rclone
+ will stay around the current marker but always try its best to stay ahead
+ and prepare the data in advance.
+ 
+ #### Plex Integration ####
+ 
+ There is a direct integration with Plex which allows cache to detect during reading
+ if the file is in playback or not. This helps cache to adapt how it queries
+ the cloud provider depending on what the data is needed for.
+ 
+ Scans will use a minimum number of workers (1), while during a confirmed
+ playback cache will deploy the configured number of workers.
+ 
+ This integration opens the doorway to additional performance improvements
+ which will be explored in the near future.
+ 
+ **Note:** If Plex options are not configured, `cache` will function with its
+ configured options without adapting any of its settings.
+ 
+ How to enable? Run `rclone config` and add all the Plex options (endpoint, username
+ and password) in your remote and it will be automatically enabled.
+ 
+ Affected settings:
+ - `cache-workers`: _Configured value_ during confirmed playback or _1_ all the other times
+ 
+ ##### Certificate Validation #####
+ 
+ When the Plex server is configured to only accept secure connections, it is
+ possible to use `.plex.direct` URLs to ensure certificate validation succeeds.
+ These URLs are used by Plex internally to connect to the Plex server securely.
+ 
+ The format for these URLs is the following:
+ 
+ `https://ip-with-dots-replaced.server-hash.plex.direct:32400/`
+ 
+ The `ip-with-dots-replaced` part can be any IPv4 address, where the dots
+ have been replaced with dashes, e.g. `127.0.0.1` becomes `127-0-0-1`.
+ 
+ To get the `server-hash` part, the easiest way is to visit
+ 
+ https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token
+ 
+ This page will list all the available Plex servers for your account
+ with at least one `.plex.direct` link for each. Copy one URL and replace
+ the IP address with the desired address. This can be used as the
+ `plex_url` value.
+ 
+ ### Known issues ###
+ 
+ #### Mount and --dir-cache-time ####
+ 
+ --dir-cache-time controls the first layer of directory caching which works at the mount layer.
+ Being an independent caching mechanism from the `cache` backend, it will manage its own entries
+ based on the configured time.
+ 
+ To avoid getting in a scenario where dir cache has obsolete data and cache would have the correct
+ one, try to set `--dir-cache-time` to a lower time than `--cache-info-age`. Default values are
+ already configured in this way.
+ 
+ #### Windows support - Experimental ####
+ 
+ There are a couple of issues with Windows `mount` functionality that still require some
+ investigation. It should be considered experimental for now, as fixes arrive for this OS.
+ 
+ Most of the issues seem to be related to the difference between filesystems
+ on Linux flavors and Windows as cache is heavily dependent on them.
+ + Any reports or feedback on how cache behaves on this OS is greatly appreciated. + + - https://github.com/rclone/rclone/issues/1935 + - https://github.com/rclone/rclone/issues/1907 + - https://github.com/rclone/rclone/issues/1834 + + #### Risk of throttling #### + + Future iterations of the cache backend will make use of the pooling functionality + of the cloud provider to synchronize and at the same time make writing through it + more tolerant to failures. + + There are a couple of enhancements in track to add these but in the meantime + there is a valid concern that the expiring cache listings can lead to cloud provider + throttles or bans due to repeated queries on it for very large mounts. + + Some recommendations: + - don't use a very small interval for entry information (`--cache-info-age`) + - while writes aren't yet optimised, you can still write through `cache` which gives you the advantage + of adding the file in the cache at the same time if configured to do so. + + Future enhancements: + + - https://github.com/rclone/rclone/issues/1937 + - https://github.com/rclone/rclone/issues/1936 + + #### cache and crypt #### + + One common scenario is to keep your data encrypted in the cloud provider + using the `crypt` remote. `crypt` uses a similar technique to wrap around + an existing remote and handles this translation in a seamless way. + + There is an issue with wrapping the remotes in this order: + **cloud remote** -> **crypt** -> **cache** + + During testing, I experienced a lot of bans with the remotes in this order. + I suspect it might be related to how crypt opens files on the cloud provider + which makes it think we're downloading the full file instead of small chunks. + Organizing the remotes in this order yields better results: + **cloud remote** -> **cache** -> **crypt** + + #### absolute remote paths #### + + `cache` can not differentiate between relative and absolute paths for the wrapped remote. + Any path given in the `remote` config setting and on the command line will be passed to + the wrapped remote as is, but for storing the chunks on disk the path will be made + relative by removing any leading `/` character. + + This behavior is irrelevant for most backend types, but there are backends where a leading `/` + changes the effective directory, e.g. in the `sftp` backend paths starting with a `/` are + relative to the root of the SSH server and paths without are relative to the user home directory. + As a result `sftp:bin` and `sftp:/bin` will share the same cache folder, even if they represent + a different directory on the SSH server. + + ### Cache and Remote Control (--rc) ### + Cache supports the new `--rc` mode in rclone and can be remote controlled through the following end points: + By default, the listener is disabled if you do not add the flag. + + ### rc cache/expire + Purge a remote from the cache backend. Supports either a directory or a file. + It supports both encrypted and unencrypted file names if cache is wrapped by crypt. + + Params: + - **remote** = path to remote **(required)** + - **withData** = true/false to delete cached data (chunks) as well _(optional, false by default)_ + + + ### Standard options + + Here are the Standard options specific to cache (Cache a remote). + + #### --cache-remote + + Remote to cache. + + Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", + "myremote:bucket" or maybe "myremote:" (not recommended). 
+ + Properties: + + - Config: remote + - Env Var: RCLONE_CACHE_REMOTE + - Type: string + - Required: true + + #### --cache-plex-url + + The URL of the Plex server. + + Properties: + + - Config: plex_url + - Env Var: RCLONE_CACHE_PLEX_URL + - Type: string + - Required: false + + #### --cache-plex-username + + The username of the Plex user. + + Properties: + + - Config: plex_username + - Env Var: RCLONE_CACHE_PLEX_USERNAME + - Type: string + - Required: false + + #### --cache-plex-password + + The password of the Plex user. + + **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + + Properties: + + - Config: plex_password + - Env Var: RCLONE_CACHE_PLEX_PASSWORD + - Type: string + - Required: false + + #### --cache-chunk-size + + The size of a chunk (partial file data). + + Use lower numbers for slower connections. If the chunk size is + changed, any downloaded chunks will be invalid and cache-chunk-path + will need to be cleared or unexpected EOF errors will occur. + + Properties: + + - Config: chunk_size + - Env Var: RCLONE_CACHE_CHUNK_SIZE + - Type: SizeSuffix + - Default: 5Mi + - Examples: + - "1M" + - 1 MiB + - "5M" + - 5 MiB + - "10M" + - 10 MiB + + #### --cache-info-age + + How long to cache file structure information (directory listings, file size, times, etc.). + If all write operations are done through the cache then you can safely make + this value very large as the cache store will also be updated in real time. + + Properties: + + - Config: info_age + - Env Var: RCLONE_CACHE_INFO_AGE + - Type: Duration + - Default: 6h0m0s + - Examples: + - "1h" + - 1 hour + - "24h" + - 24 hours + - "48h" + - 48 hours + + #### --cache-chunk-total-size + + The total size that the chunks can take up on the local disk. + + If the cache exceeds this value then it will start to delete the + oldest chunks until it goes under this value. + + Properties: + + - Config: chunk_total_size + - Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE + - Type: SizeSuffix + - Default: 10Gi + - Examples: + - "500M" + - 500 MiB + - "1G" + - 1 GiB + - "10G" + - 10 GiB + + ### Advanced options + + Here are the Advanced options specific to cache (Cache a remote). + + #### --cache-plex-token + + The plex token for authentication - auto set normally. + + Properties: + + - Config: plex_token + - Env Var: RCLONE_CACHE_PLEX_TOKEN + - Type: string + - Required: false + + #### --cache-plex-insecure + + Skip all certificate verification when connecting to the Plex server. + + Properties: + + - Config: plex_insecure + - Env Var: RCLONE_CACHE_PLEX_INSECURE + - Type: string + - Required: false + + #### --cache-db-path + + Directory to store file structure metadata DB. + + The remote name is used as the DB file name. + + Properties: + + - Config: db_path + - Env Var: RCLONE_CACHE_DB_PATH + - Type: string + - Default: "$HOME/.cache/rclone/cache-backend" + + #### --cache-chunk-path + + Directory to cache chunk files. + + Path to where partial file data (chunks) are stored locally. The remote + name is appended to the final path. + + This config follows the "--cache-db-path". If you specify a custom + location for "--cache-db-path" and don't specify one for "--cache-chunk-path" + then "--cache-chunk-path" will use the same path as "--cache-db-path". + + Properties: + + - Config: chunk_path + - Env Var: RCLONE_CACHE_CHUNK_PATH + - Type: string + - Default: "$HOME/.cache/rclone/cache-backend" + + #### --cache-db-purge + + Clear all the cached data for this remote on start. 
+
+ Properties:
+ 
+ - Config: db_purge
+ - Env Var: RCLONE_CACHE_DB_PURGE
+ - Type: bool
+ - Default: false
+ 
+ #### --cache-chunk-clean-interval
+ 
+ How often should the cache perform cleanups of the chunk storage.
+ 
+ The default value should be ok for most people. If you find that the
+ cache goes over "cache-chunk-total-size" too often then try to lower
+ this value to force it to perform cleanups more often.
+ 
+ Properties:
+ 
+ - Config: chunk_clean_interval
+ - Env Var: RCLONE_CACHE_CHUNK_CLEAN_INTERVAL
+ - Type: Duration
+ - Default: 1m0s
+ 
+ #### --cache-read-retries
+ 
+ How many times to retry a read from the cache storage.
+ 
+ Since reading from a cache stream is independent of downloading file
+ data, readers can get to a point where there's no more data in the
+ cache. Most of the time this indicates a connectivity issue, if
+ cache isn't able to provide file data anymore.
+ 
+ For really slow connections, increase this to a point where the stream is
+ still able to provide data, but your experience will be very stuttery.
+ 
+ Properties:
+ 
+ - Config: read_retries
+ - Env Var: RCLONE_CACHE_READ_RETRIES
+ - Type: int
+ - Default: 10
+ 
+ #### --cache-workers
+ 
+ How many workers should run in parallel to download chunks.
+ 
+ Higher values will mean more parallel processing (more CPU needed)
+ and more concurrent requests on the cloud provider. This impacts
+ several aspects, like the cloud provider API limits and the stress on the
+ hardware that rclone runs on, but it also means that streams will be
+ more fluid and data will be available much faster to readers.
+ 
+ **Note**: If the optional Plex integration is enabled then this
+ setting will adapt to the type of reading performed and the value
+ specified here will be used as a maximum number of workers to use.
+ 
+ Properties:
+ 
+ - Config: workers
+ - Env Var: RCLONE_CACHE_WORKERS
+ - Type: int
+ - Default: 4
+ 
+ #### --cache-chunk-no-memory
+ 
+ Disable the in-memory cache for storing chunks during streaming.
+ 
+ By default, cache will keep file data during streaming in RAM as well
+ to provide it to readers as fast as possible.
+ 
+ This transient data is evicted as soon as it is read and the number of
+ chunks stored doesn't exceed the number of workers. However, depending
+ on other settings like "cache-chunk-size" and "cache-workers" this footprint
+ can increase if there are parallel streams too (multiple files being read
+ at the same time).
+ 
+ If the hardware permits it, keeping the in-memory cache enabled gives better
+ overall performance during streaming, but it can be disabled with this flag
+ if RAM is scarce on the local machine.
+ 
+ Properties:
+ 
+ - Config: chunk_no_memory
+ - Env Var: RCLONE_CACHE_CHUNK_NO_MEMORY
+ - Type: bool
+ - Default: false
+ 
+ #### --cache-rps
+ 
+ Limits the number of requests per second to the source FS (-1 to disable).
+ 
+ This setting places a hard limit on the number of requests per second
+ that cache will make to the cloud provider remote, and tries to
+ respect that value by inserting waits between reads.
+ 
+ If you find that you're getting banned or limited on the cloud
+ provider through cache, and know that a smaller number of requests per
+ second will allow you to work with it, then you can use this setting
+ for that.
+ 
+ A good balance of all the other settings should make this setting
+ unnecessary, but it is available for more special cases.
+ 
+ **NOTE**: This will limit the number of requests during streams but
+ other API calls to the cloud provider like directory listings will
+ still pass.
+ 
+ Properties:
+ 
+ - Config: rps
+ - Env Var: RCLONE_CACHE_RPS
+ - Type: int
+ - Default: -1
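+ 
+ As an illustration, a mount that caps the number of requests cache makes
+ to the provider might combine the flags above like this (the remote name
+ `gcache:`, the mount point and the chosen values are hypothetical, tune
+ them to your provider's limits):
+ 
+     rclone mount gcache: /mnt/media --cache-rps=5 --cache-workers=4 --cache-info-age=48h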
+ 
+ #### --cache-writes
+ 
+ Cache file data on writes through the FS.
+ 
+ If you need to read files immediately after you upload them through
+ cache you can enable this flag to have their data stored in the
+ cache store at the same time during upload.
+ 
+ Properties:
+ 
+ - Config: writes
+ - Env Var: RCLONE_CACHE_WRITES
+ - Type: bool
+ - Default: false
+ 
+ #### --cache-tmp-upload-path
+ 
+ Directory to keep temporary files until they are uploaded.
+ 
+ This is the path that cache will use as temporary storage for new
+ files that need to be uploaded to the cloud provider.
+ 
+ Specifying a value will enable this feature. Without it, it is
+ completely disabled and files will be uploaded directly to the cloud
+ provider.
+ 
+ Properties:
+ 
+ - Config: tmp_upload_path
+ - Env Var: RCLONE_CACHE_TMP_UPLOAD_PATH
+ - Type: string
+ - Required: false
+ 
+ #### --cache-tmp-wait-time
+ 
+ How long should files be stored in local cache before being uploaded.
+ 
+ This is the duration that a file must wait in the temporary location
+ _cache-tmp-upload-path_ before it is selected for upload.
+ 
+ Note that only one file is uploaded at a time, and it can take longer
+ to start the upload if a queue has formed for this purpose.
+ 
+ Properties:
+ 
+ - Config: tmp_wait_time
+ - Env Var: RCLONE_CACHE_TMP_WAIT_TIME
+ - Type: Duration
+ - Default: 15s
+ 
+ #### --cache-db-wait-time
+ 
+ How long to wait for the DB to be available - 0 is unlimited.
+ 
+ Only one process can have the DB open at any one time, so rclone waits
+ for this duration for the DB to become available before it gives an
+ error.
+ 
+ If you set it to 0 then it will wait forever.
+ 
+ Properties:
+ 
+ - Config: db_wait_time
+ - Env Var: RCLONE_CACHE_DB_WAIT_TIME
+ - Type: Duration
+ - Default: 1s
+ 
+ ## Backend commands
+ 
+ Here are the commands specific to the cache backend.
+ 
+ Run them with
+ 
+     rclone backend COMMAND remote:
+ 
+ The help below will explain what arguments each command takes.
+ 
+ See the [backend](https://rclone.org/commands/rclone_backend/) command for more
+ info on how to pass options and arguments.
+ 
+ These can be run on a running backend using the rc command
+ [backend/command](https://rclone.org/rc/#backend-command).
+ 
+ ### stats
+ 
+ Print stats on the cache backend in JSON format.
+ 
+     rclone backend stats remote: [options] [<arguments>+]
+ 
+ 
+ 
+ # Chunker
+ 
+ The `chunker` overlay transparently splits large files into smaller chunks
+ during upload to the wrapped remote and transparently assembles them back
+ when the file is downloaded. This allows you to effectively overcome size limits
+ imposed by storage providers.
+ 
+ ## Configuration
+ 
+ To use it, first set up the underlying remote following the configuration
+ instructions for that remote. You can also use a local pathname instead of
+ a remote.
+ 
+ First check your chosen remote is working - we'll call it `remote:path` here.
+ Note that anything inside `remote:path` will be chunked and anything outside
+ won't. This means that if you are using a bucket-based remote (e.g. S3, B2, swift)
+ then you should probably put the bucket in the remote `s3:bucket`.
+ 
+ Now configure `chunker` using `rclone config`. We will call this one `overlay`
+ to separate it from the `remote` itself.
+ 
+No remotes found, make a new one?
n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> overlay
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Transparently chunk/split large files
+   \ "chunker"
+[snip]
+Storage> chunker
+Remote to chunk/unchunk.
+Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
+"myremote:bucket" or maybe "myremote:" (not recommended).
+Enter a string value. Press Enter for the default ("").
+remote> remote:path
+Files larger than chunk size will be split in chunks.
+Enter a size with suffix K,M,G,T. Press Enter for the default ("2G").
+chunk_size> 100M
+Choose how chunker handles hash sums. All modes but "none" require metadata.
+Enter a string value. Press Enter for the default ("md5").
+Choose a number from below, or type in your own value
+ 1 / Pass any hash supported by wrapped remote for non-chunked files, return nothing otherwise
+   \ "none"
+ 2 / MD5 for composite files
+   \ "md5"
+ 3 / SHA1 for composite files
+   \ "sha1"
+ 4 / MD5 for all files
+   \ "md5all"
+ 5 / SHA1 for all files
+   \ "sha1all"
+ 6 / Copying a file to chunker will request MD5 from the source falling back to SHA1 if unsupported
+   \ "md5quick"
+ 7 / Similar to "md5quick" but prefers SHA1 over MD5
+   \ "sha1quick"
+hash_type> md5
+Edit advanced config? (y/n)
+y) Yes
+n) No
+y/n> n
+Remote config
+--------------------
+[overlay]
+type = chunker
+remote = remote:bucket
+chunk_size = 100M
+hash_type = md5
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+ 
+ 
+ ### Specifying the remote
+ 
+ In normal use, make sure the remote has a `:` in. If you specify the remote
+ without a `:` then rclone will use a local directory of that name.
+ So if you use a remote of `/path/to/secret/files` then rclone will
+ chunk stuff in that directory. If you use a remote of `name` then rclone
+ will put files in a directory called `name` in the current directory.
+ 
+ 
+ ### Chunking
+ 
+ When rclone starts a file upload, chunker checks the file size. If it
+ doesn't exceed the configured chunk size, chunker will just pass the file
+ to the wrapped remote (however, see the caveat below). If a file is large,
+ chunker will transparently cut the data in pieces with temporary names
+ and stream them one by one, on the fly.
+ Each data chunk will contain the specified number of bytes, except for the
+ last one which may have less data. If the file size is unknown in advance
+ (this is called a streaming upload), chunker will internally create
+ a temporary copy, record its size and repeat the above process.
+ 
+ When the upload completes, temporary chunk files are finally renamed.
+ This scheme guarantees that operations can be run in parallel and appear
+ atomic from the outside.
+ A similar method with hidden temporary chunks is used for other operations
+ (copy/move/rename, etc.). If an operation fails, hidden chunks are normally
+ destroyed, and the target composite file stays intact.
+ 
+ When a composite file download is requested, chunker transparently
+ assembles it by concatenating data chunks in order. As the split is trivial
+ one could even manually concatenate data chunks together to obtain the
+ original content.
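+ 
+ As an illustration of the naming scheme, copying a 250 MiB file
+ `video.mp4` through a chunker remote configured with `chunk_size = 100M`
+ and default metadata could leave something like this on the wrapped
+ remote (this listing is made up for the example; the sizes follow from
+ the chunk size):
+ 
+     $ rclone ls remote:path
+            79 video.mp4
+     104857600 video.mp4.rclone_chunk.001
+     104857600 video.mp4.rclone_chunk.002
+      52428800 video.mp4.rclone_chunk.003
+ 
+ The small `video.mp4` object is the metadata file described below; the
+ numbered files hold the actual data.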
+ When the `list` rclone command scans a directory on the wrapped remote,
+ the potential chunk files are accounted for, grouped and assembled into
+ composite directory entries. Any temporary chunks are hidden.
+ 
+ List and other commands can sometimes come across composite files with
+ missing or invalid chunks, e.g. shadowed by a like-named directory or
+ another file. This usually means that the wrapped file system has been directly
+ tampered with or damaged. If chunker detects a missing chunk it will
+ by default print a warning, skip the whole incomplete group of chunks but
+ proceed with the current command.
+ You can set the `--chunker-fail-hard` flag to have commands abort with
+ an error message in such cases.
+ 
+ **Caveat**: As it is now, chunker will always create a temporary file in the
+ backend and then rename it, even if the file is below the chunk threshold.
+ This will result in unnecessary API calls and can severely restrict throughput
+ when handling transfers primarily composed of small files on some backends (e.g. Box).
+ A workaround to this issue is to use chunker only for files above the chunk threshold
+ via `--min-size` and then perform a separate call without chunker on the remaining
+ files.
+ 
+ 
+ #### Chunk names
+ 
+ The default chunk name format is `*.rclone_chunk.###`, hence by default
+ chunk names are `BIG_FILE_NAME.rclone_chunk.001`,
+ `BIG_FILE_NAME.rclone_chunk.002` etc. You can configure another name format
+ using the `name_format` configuration file option. The format uses asterisk
+ `*` as a placeholder for the base file name and one or more consecutive
+ hash characters `#` as a placeholder for sequential chunk number.
+ There must be one and only one asterisk. The number of consecutive hash
+ characters defines the minimum length of a string representing a chunk number.
+ If the decimal chunk number has fewer digits than the number of hashes, it is
+ left-padded by zeros. If the decimal string is longer, it is left intact.
+ By default numbering starts from 1 but there is another option that allows
+ the user to start from 0, e.g. for compatibility with legacy software.
+ 
+ For example, if the name format is `big_*-##.part`, the original file name is
+ `data.txt` and numbering starts from 0, then the first chunk will be named
+ `big_data.txt-00.part`, the 99th chunk will be `big_data.txt-98.part`
+ and the 302nd chunk will become `big_data.txt-301.part`.
+ 
+ Note that `list` assembles composite directory entries only when chunk names
+ match the configured format and treats non-conforming file names as normal
+ non-chunked files.
+ 
+ When using `norename` transactions, chunk names will additionally have a unique
+ file version suffix. For example, `BIG_FILE_NAME.rclone_chunk.001_bp562k`.
+ 
+ 
+ ### Metadata
+ 
+ Besides data chunks, chunker will by default create a metadata object for
+ a composite file. The object is named after the original file.
+ Chunker allows the user to disable metadata completely (the `none` format).
+ Note that metadata is normally not created for files smaller than the
+ configured chunk size. This may change in future rclone releases.
+ 
+ #### Simple JSON metadata format
+ 
+ This is the default format. It supports hash sums and chunk validation
+ for composite files. Meta objects carry the following fields:
+ 
+ - `ver`     - version of format, currently `1`
+ - `size`    - total size of composite file
+ - `nchunks` - number of data chunks in file
+ - `md5`     - MD5 hashsum of composite file (if present)
+ - `sha1`    - SHA1 hashsum (if present)
+ - `txn`     - identifies current version of the file
+ 
+ There is no field for the composite file name as it's simply equal to the name
+ of the meta object on the wrapped remote. Please refer to the respective sections
+ for details on hashsums and modified time handling.
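+ 
+ For instance, a meta object for the hypothetical upload sketched earlier
+ would be a small JSON document along these lines (all field values are
+ invented for illustration):
+ 
+     {
+       "ver": 1,
+       "size": 262144000,
+       "nchunks": 3,
+       "md5": "9a0364b9e99bb480dd25e1f0284c8555",
+       "txn": "bp562k"
+     }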
+ 
+ #### No metadata
+ 
+ You can disable meta objects by setting the meta format option to `none`.
+ In this mode chunker will scan the directory for all files that follow the
+ configured chunk name format, group them by detecting chunks with the same
+ base name and show the group names as virtual composite files.
+ This method is more prone to missing chunk errors (especially a missing
+ last chunk) than the format with metadata enabled.
+ 
+ 
+ ### Hashsums
+ 
+ Chunker supports hashsums only when compatible metadata is present.
+ Hence, if you choose a metadata format of `none`, chunker will report the hashsum
+ as `UNSUPPORTED`.
+ 
+ Please note that by default metadata is stored only for composite files.
+ If a file is smaller than the configured chunk size, chunker will transparently
+ redirect hash requests to the wrapped remote, so support depends on that.
+ You will see the empty string as a hashsum of the requested type for small
+ files if the wrapped remote doesn't support it.
+ 
+ Many storage backends support MD5 and SHA1 hash types, and so does chunker.
+ With chunker you can choose one or the other but not both.
+ MD5 is set by default as the most supported type.
+ Since chunker keeps hashes for composite files and falls back to the
+ wrapped remote hash for non-chunked ones, we advise you to choose the same
+ hash type as supported by the wrapped remote so that your file listings
+ look coherent.
+ 
+ If your storage backend does not support MD5 or SHA1 but you need consistent
+ file hashing, configure chunker with `md5all` or `sha1all`. These two modes
+ guarantee the given hash for all files. If the wrapped remote doesn't support it,
+ chunker will then add metadata to all files, even small ones. However, this can
+ double the amount of small files in storage and incur additional service charges.
+ You can even use chunker to force md5/sha1 support in any other remote
+ at the expense of sidecar meta objects by setting e.g. `hash_type=sha1all`
+ to force hashsums and `chunk_size=1P` to effectively disable chunking.
+ 
+ Normally, when a file is copied to a chunker controlled remote, chunker
+ will ask the file source for a compatible file hash and revert to on-the-fly
+ calculation if none is found. This involves some CPU overhead but provides
+ a guarantee that the given hashsum is available. Also, chunker will reject
+ a server-side copy or move operation if the source and destination hashsum
+ types are different, which results in a normal copy that uses extra network
+ bandwidth, too.
+ In some rare cases this may be undesired, so chunker provides two optional
+ choices: `sha1quick` and `md5quick`. If the source does not support the primary
+ hash type and the quick mode is enabled, chunker will try to fall back to
+ the secondary type. This will save CPU and bandwidth but can result in empty
+ hashsums at the destination. Beware of the consequences: the `sync` command will
+ revert (sometimes silently) to time/size comparison if compatible hashsums
+ between source and target are not found.
+ 
+ 
+ ### Modified time
+ 
+ Chunker stores modification times using the wrapped remote so support
+ depends on that. For a small non-chunked file the chunker overlay simply
+ manipulates the modification time of the wrapped remote file.
+ For a composite file with metadata chunker will get and set the
+ modification time of the metadata object on the wrapped remote.
+ If a file is chunked but the metadata format is `none` then chunker will
+ use the modification time of the first data chunk.
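+ 
+ As a concrete sketch of the `sha1all` trick from the Hashsums section
+ above, a configuration stanza along these lines (the remote names are
+ hypothetical) forces SHA1 support while effectively disabling chunking:
+ 
+     [sha1overlay]
+     type = chunker
+     remote = remote:path
+     hash_type = sha1all
+     chunk_size = 1P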
+ 
+ 
+ ### Migrations
+ 
+ The idiomatic way to migrate to a different chunk size, hash type, transaction
+ style or chunk naming scheme is to:
+ 
+ - Collect all your chunked files under a directory and have your
+   chunker remote point to it.
+ - Create another directory (most probably on the same cloud storage)
+   and configure a new remote with the desired metadata format,
+   hash type, chunk naming etc.
+ - Now run `rclone sync --interactive oldchunks: newchunks:` and all your data
+   will be transparently converted during the transfer.
+   This may take some time, yet chunker will try server-side
+   copy if possible.
+ - After checking data integrity you may remove the configuration section
+   of the old remote.
+ 
+ If rclone gets killed during a long operation on a big composite file,
+ hidden temporary chunks may stay in the directory. They will not be
+ shown by the `list` command but will eat up your account quota.
+ Please note that the `deletefile` command deletes only the active
+ chunks of a file. As a workaround, you can use the remote of the wrapped
+ file system to see them.
+ An easy way to get rid of hidden garbage is to copy the littered directory
+ somewhere using the chunker remote and purge the original directory.
+ The `copy` command will copy only active chunks while the `purge` will
+ remove everything including garbage.
+ 
+ 
+ ### Caveats and Limitations
+ 
+ Chunker requires the wrapped remote to support server-side `move` (or `copy` +
+ `delete`) operations, otherwise it will explicitly refuse to start.
+ This is because it internally renames temporary chunk files to their final
+ names when an operation completes successfully.
+ 
+ Chunker encodes the chunk number in the file name, so with the default `name_format`
+ setting it adds 17 characters. Also chunker adds 7 characters of temporary
+ suffix during operations. Many file systems limit the base file name without path
+ to 255 characters. Using rclone's crypt remote as a base file system limits
+ the file name to 143 characters. Thus, the maximum name length is 231 for most files
+ and 119 for chunker-over-crypt. If needed, you can change the name format to
+ e.g. `*.rcc##` and save 10 characters (provided there are at most 99 chunks per file).
+ 
+ Note that a move implemented using the copy-and-delete method may incur
+ double charging with some cloud storage providers.
+ 
+ Chunker will not automatically rename existing chunks when you run
+ `rclone config` on a live remote and change the chunk name format.
+ Beware that, as a result of this, some files which were treated as chunks
+ before the change can pop up in directory listings as normal files
+ and vice versa. The same warning holds for the chunk size.
+ If you desperately need to change critical chunking settings, you should
+ run the data migration as described above.
+ 
+ If the wrapped remote is case insensitive, the chunker overlay will inherit
+ that property (so you can't have a file called "Hello.doc" and "hello.doc"
+ in the same directory).
+ 
+ Chunker included in rclone releases up to `v1.54` can sometimes fail to
+ detect metadata produced by recent versions of rclone. We recommend that users
+ keep rclone up-to-date to avoid data corruption.
+ 
+ Changing `transactions` is dangerous and requires explicit migration.
+ 
+ 
+ ### Standard options
+ 
+ Here are the Standard options specific to chunker (Transparently chunk/split large files).
+ 
+ #### --chunker-remote
+ 
+ Remote to chunk/unchunk.
+ 
+ Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
+ "myremote:bucket" or maybe "myremote:" (not recommended).
+ + Properties: + + - Config: remote + - Env Var: RCLONE_CHUNKER_REMOTE + - Type: string + - Required: true + + #### --chunker-chunk-size + + Files larger than chunk size will be split in chunks. + + Properties: + + - Config: chunk_size + - Env Var: RCLONE_CHUNKER_CHUNK_SIZE + - Type: SizeSuffix + - Default: 2Gi + + #### --chunker-hash-type + + Choose how chunker handles hash sums. + + All modes but "none" require metadata. + + Properties: + + - Config: hash_type + - Env Var: RCLONE_CHUNKER_HASH_TYPE + - Type: string + - Default: "md5" + - Examples: + - "none" + - Pass any hash supported by wrapped remote for non-chunked files. + - Return nothing otherwise. + - "md5" + - MD5 for composite files. + - "sha1" + - SHA1 for composite files. + - "md5all" + - MD5 for all files. + - "sha1all" + - SHA1 for all files. + - "md5quick" + - Copying a file to chunker will request MD5 from the source. + - Falling back to SHA1 if unsupported. + - "sha1quick" + - Similar to "md5quick" but prefers SHA1 over MD5. + + ### Advanced options + + Here are the Advanced options specific to chunker (Transparently chunk/split large files). + + #### --chunker-name-format + + String format of chunk file names. + + The two placeholders are: base file name (*) and chunk number (#...). + There must be one and only one asterisk and one or more consecutive hash characters. + If chunk number has less digits than the number of hashes, it is left-padded by zeros. + If there are more digits in the number, they are left as is. + Possible chunk files are ignored if their name does not match given format. + + Properties: + + - Config: name_format + - Env Var: RCLONE_CHUNKER_NAME_FORMAT + - Type: string + - Default: "*.rclone_chunk.###" + + #### --chunker-start-from + + Minimum valid chunk number. Usually 0 or 1. + + By default chunk numbers start from 1. + + Properties: + + - Config: start_from + - Env Var: RCLONE_CHUNKER_START_FROM + - Type: int + - Default: 1 + + #### --chunker-meta-format + + Format of the metadata object or "none". + + By default "simplejson". + Metadata is a small JSON file named after the composite file. + + Properties: + + - Config: meta_format + - Env Var: RCLONE_CHUNKER_META_FORMAT + - Type: string + - Default: "simplejson" + - Examples: + - "none" + - Do not use metadata files at all. + - Requires hash type "none". + - "simplejson" + - Simple JSON supports hash sums and chunk validation. + - + - It has the following fields: ver, size, nchunks, md5, sha1. + + #### --chunker-fail-hard + + Choose how chunker should handle files with missing or invalid chunks. + + Properties: + + - Config: fail_hard + - Env Var: RCLONE_CHUNKER_FAIL_HARD + - Type: bool + - Default: false + - Examples: + - "true" + - Report errors and abort current command. + - "false" + - Warn user, skip incomplete file and proceed. + + #### --chunker-transactions + + Choose how chunker should handle temporary files during transactions. + + Properties: + + - Config: transactions + - Env Var: RCLONE_CHUNKER_TRANSACTIONS + - Type: string + - Default: "rename" + - Examples: + - "rename" + - Rename temporary files after a successful transaction. + - "norename" + - Leave temporary file names and write transaction ID to metadata file. + - Metadata is required for no rename transactions (meta format cannot be "none"). + - If you are using norename transactions you should be careful not to downgrade Rclone + - as older versions of Rclone don't support this transaction style and will misinterpret + - files manipulated by norename transactions. 
+ - This method is EXPERIMENTAL, don't use on production systems.
+ - "auto"
+ - Rename or norename will be used depending on capabilities of the backend.
+ - If meta format is set to "none", rename transactions will always be used.
+ - This method is EXPERIMENTAL, don't use on production systems.
+ 
+ 
+ 
+ # Citrix ShareFile
+ 
+ [Citrix ShareFile](https://sharefile.com) is a secure file sharing and transfer service aimed at businesses.
+ 
+ ## Configuration
+ 
+ The initial setup for Citrix ShareFile involves getting a token from
+ Citrix ShareFile which you can do in your browser. `rclone config` walks you
+ through it.
+ 
+ Here is an example of how to make a remote called `remote`. First run:
+ 
+     rclone config
+ 
+ This will guide you through an interactive setup process:
+ 
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+XX / Citrix Sharefile
+   \ "sharefile"
+Storage> sharefile
+** See help for sharefile backend at: https://rclone.org/sharefile/ **
+
+ID of the root folder
+
+Leave blank to access "Personal Folders". You can use one of the
+standard values here or any folder ID (long hex number ID).
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+ 1 / Access the Personal Folders. (Default)
+   \ ""
+ 2 / Access the Favorites folder.
+   \ "favorites"
+ 3 / Access all the shared folders.
+   \ "allshared"
+ 4 / Access all the individual connectors.
+   \ "connectors"
+ 5 / Access the home, favorites, and shared folders as well as the connectors.
+   \ "top"
+root_folder_id>
+Edit advanced config? (y/n)
+y) Yes
+n) No
+y/n> n
+Remote config
+Use web browser to automatically authenticate rclone with remote?
+ * Say Y if the machine running rclone has a web browser you can use
+ * Say N if running rclone on a (remote) machine without web browser access
+If not sure try Y. If Y failed, try N.
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=XXX
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[remote]
+type = sharefile
+endpoint = https://XXX.sharefile.com
+token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2019-09-30T19:41:45.878561877+01:00"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+ 
+ See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
+ machine with no Internet browser available.
+ 
+ Note that rclone runs a webserver on your local machine to collect the
+ token as returned from Citrix ShareFile. This only runs from the moment it opens
+ your browser to the moment you get back the verification code. This
+ is on `http://127.0.0.1:53682/` and it may require you to unblock
+ it temporarily if you are running a host firewall.
+ 
+ Once configured you can then use `rclone` like this,
+ 
+ List directories in top level of your ShareFile
+ 
+     rclone lsd remote:
+ 
+ List all the files in your ShareFile
+ 
+     rclone ls remote:
+ 
+ To copy a local directory to a ShareFile directory called backup
+ 
+     rclone copy /home/source remote:backup
+ 
+ Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
+ 
+ ### Modified time and hashes
+ 
+ ShareFile allows modification times to be set on objects accurate to 1
+ second.
These will be used to detect whether objects need syncing or + not. + + ShareFile supports MD5 type hashes, so you can use the `--checksum` + flag. + + ### Transfers + + For files above 128 MiB rclone will use a chunked transfer. Rclone will + upload up to `--transfers` chunks at the same time (shared among all + the multipart uploads). Chunks are buffered in memory and are + normally 64 MiB so increasing `--transfers` will increase memory use. + + ### Restricted filename characters + + In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) + the following characters are also replaced: + + | Character | Value | Replacement | + | --------- |:-----:|:-----------:| + | \\ | 0x5C | \ | + | * | 0x2A | * | + | < | 0x3C | < | + | > | 0x3E | > | + | ? | 0x3F | ? | + | : | 0x3A | : | + | \| | 0x7C | | | + | " | 0x22 | " | + + File names can also not start or end with the following characters. + These only get replaced if they are the first or last character in the + name: + + | Character | Value | Replacement | + | --------- |:-----:|:-----------:| + | SP | 0x20 | ␠ | + | . | 0x2E | . | + + Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), + as they can't be used in JSON strings. + + + ### Standard options + + Here are the Standard options specific to sharefile (Citrix Sharefile). + + #### --sharefile-client-id + + OAuth Client Id. + + Leave blank normally. + + Properties: + + - Config: client_id + - Env Var: RCLONE_SHAREFILE_CLIENT_ID + - Type: string + - Required: false + + #### --sharefile-client-secret + + OAuth Client Secret. + + Leave blank normally. + + Properties: + + - Config: client_secret + - Env Var: RCLONE_SHAREFILE_CLIENT_SECRET + - Type: string + - Required: false + + #### --sharefile-root-folder-id + + ID of the root folder. Leave blank to access "Personal Folders". You can use one of the standard values here or any folder ID (long hex number ID). - Enter a string value. Press Enter for the default (""). - Choose a number from below, or type in your own value - 1 / Access the Personal Folders. (Default) - \ "" - 2 / Access the Favorites folder. - \ "favorites" - 3 / Access all the shared folders. - \ "allshared" - 4 / Access all the individual connectors. - \ "connectors" - 5 / Access the home, favorites, and shared folders as well as the connectors. - \ "top" - root_folder_id> - Edit advanced config? (y/n) - y) Yes - n) No - y/n> n - Remote config - Use web browser to automatically authenticate rclone with remote? - * Say Y if the machine running rclone has a web browser you can use - * Say N if running rclone on a (remote) machine without web browser access - If not sure try Y. If Y failed, try N. - y) Yes - n) No - y/n> y - If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=XXX - Log in and authorize rclone for access - Waiting for code... - Got code - -------------------- - [remote] - type = sharefile - endpoint = https://XXX.sharefile.com - token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2019-09-30T19:41:45.878561877+01:00"} - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y - -See the remote setup docs for how to set it up on a machine with no -Internet browser available. - -Note that rclone runs a webserver on your local machine to collect the -token as returned from Citrix ShareFile. 
This only runs from the moment -it opens your browser to the moment you get back the verification code. -This is on http://127.0.0.1:53682/ and this it may require you to -unblock it temporarily if you are running a host firewall. - -Once configured you can then use rclone like this, - -List directories in top level of your ShareFile - - rclone lsd remote: - -List all the files in your ShareFile - - rclone ls remote: - -To copy a local directory to an ShareFile directory called backup - - rclone copy /home/source remote:backup - -Paths may be as deep as required, e.g. remote:directory/subdirectory. - -Modified time and hashes -ShareFile allows modification times to be set on objects accurate to 1 -second. These will be used to detect whether objects need syncing or -not. + Properties: -ShareFile supports MD5 type hashes, so you can use the --checksum flag. + - Config: root_folder_id + - Env Var: RCLONE_SHAREFILE_ROOT_FOLDER_ID + - Type: string + - Required: false + - Examples: + - "" + - Access the Personal Folders (default). + - "favorites" + - Access the Favorites folder. + - "allshared" + - Access all the shared folders. + - "connectors" + - Access all the individual connectors. + - "top" + - Access the home, favorites, and shared folders as well as the connectors. -Transfers + ### Advanced options -For files above 128 MiB rclone will use a chunked transfer. Rclone will -upload up to --transfers chunks at the same time (shared among all the -multipart uploads). Chunks are buffered in memory and are normally 64 -MiB so increasing --transfers will increase memory use. + Here are the Advanced options specific to sharefile (Citrix Sharefile). -Restricted filename characters + #### --sharefile-token -In addition to the default restricted characters set the following -characters are also replaced: + OAuth Access Token as a JSON blob. - Character Value Replacement - ----------- ------- ------------- - \ 0x5C \ - * 0x2A * - < 0x3C < - > 0x3E > - ? 0x3F ? - : 0x3A : - | 0x7C | - " 0x22 " + Properties: -File names can also not start or end with the following characters. -These only get replaced if they are the first or last character in the -name: + - Config: token + - Env Var: RCLONE_SHAREFILE_TOKEN + - Type: string + - Required: false - Character Value Replacement - ----------- ------- ------------- - SP 0x20 ␠ - . 0x2E . + #### --sharefile-auth-url -Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON -strings. + Auth server URL. -Standard options + Leave blank to use the provider defaults. -Here are the Standard options specific to sharefile (Citrix Sharefile). + Properties: ---sharefile-root-folder-id + - Config: auth_url + - Env Var: RCLONE_SHAREFILE_AUTH_URL + - Type: string + - Required: false -ID of the root folder. + #### --sharefile-token-url -Leave blank to access "Personal Folders". You can use one of the -standard values here or any folder ID (long hex number ID). + Token server url. -Properties: + Leave blank to use the provider defaults. -- Config: root_folder_id -- Env Var: RCLONE_SHAREFILE_ROOT_FOLDER_ID -- Type: string -- Required: false -- Examples: - - "" - - Access the Personal Folders (default). - - "favorites" - - Access the Favorites folder. - - "allshared" - - Access all the shared folders. - - "connectors" - - Access all the individual connectors. - - "top" - - Access the home, favorites, and shared folders as well as - the connectors. 
+ Properties: -Advanced options + - Config: token_url + - Env Var: RCLONE_SHAREFILE_TOKEN_URL + - Type: string + - Required: false -Here are the Advanced options specific to sharefile (Citrix Sharefile). + #### --sharefile-upload-cutoff ---sharefile-upload-cutoff + Cutoff for switching to multipart upload. -Cutoff for switching to multipart upload. + Properties: -Properties: + - Config: upload_cutoff + - Env Var: RCLONE_SHAREFILE_UPLOAD_CUTOFF + - Type: SizeSuffix + - Default: 128Mi -- Config: upload_cutoff -- Env Var: RCLONE_SHAREFILE_UPLOAD_CUTOFF -- Type: SizeSuffix -- Default: 128Mi + #### --sharefile-chunk-size ---sharefile-chunk-size + Upload chunk size. -Upload chunk size. + Must a power of 2 >= 256k. -Must a power of 2 >= 256k. + Making this larger will improve performance, but note that each chunk + is buffered in memory one per transfer. -Making this larger will improve performance, but note that each chunk is -buffered in memory one per transfer. + Reducing this will reduce memory usage but decrease performance. -Reducing this will reduce memory usage but decrease performance. + Properties: -Properties: + - Config: chunk_size + - Env Var: RCLONE_SHAREFILE_CHUNK_SIZE + - Type: SizeSuffix + - Default: 64Mi -- Config: chunk_size -- Env Var: RCLONE_SHAREFILE_CHUNK_SIZE -- Type: SizeSuffix -- Default: 64Mi + #### --sharefile-endpoint ---sharefile-endpoint + Endpoint for API calls. -Endpoint for API calls. + This is usually auto discovered as part of the oauth process, but can + be set manually to something like: https://XXX.sharefile.com -This is usually auto discovered as part of the oauth process, but can be -set manually to something like: https://XXX.sharefile.com -Properties: + Properties: -- Config: endpoint -- Env Var: RCLONE_SHAREFILE_ENDPOINT -- Type: string -- Required: false + - Config: endpoint + - Env Var: RCLONE_SHAREFILE_ENDPOINT + - Type: string + - Required: false ---sharefile-encoding + #### --sharefile-encoding -The encoding for the backend. + The encoding for the backend. -See the encoding section in the overview for more info. + See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. -Properties: + Properties: -- Config: encoding -- Env Var: RCLONE_SHAREFILE_ENCODING -- Type: MultiEncoder -- Default: - Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot + - Config: encoding + - Env Var: RCLONE_SHAREFILE_ENCODING + - Type: MultiEncoder + - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot -Limitations -Note that ShareFile is case insensitive so you can't have a file called -"Hello.doc" and one called "hello.doc". + ## Limitations -ShareFile only supports filenames up to 256 characters in length. - -rclone about is not supported by the Citrix ShareFile backend. Backends -without this capability cannot determine free space for an rclone mount -or use policy mfs (most free space) as a member of an rclone union -remote. - -See List of backends that do not support rclone about and rclone about - -Crypt - -Rclone crypt remotes encrypt and decrypt other remotes. - -A remote of type crypt does not access a storage system directly, but -instead wraps another remote, which in turn accesses the storage system. -This is similar to how alias, union, chunker and a few others work. 
It -makes the usage very flexible, as you can add a layer, in this case an -encryption layer, on top of any other backend, even in multiple layers. -Rclone's functionality can be used as with any other remote, for example -you can mount a crypt remote. - -Accessing a storage system through a crypt remote realizes client-side -encryption, which makes it safe to keep your data in a location you do -not trust will not get compromised. When working against the crypt -remote, rclone will automatically encrypt (before uploading) and decrypt -(after downloading) on your local system as needed on the fly, leaving -the data encrypted at rest in the wrapped remote. If you access the -storage system using an application other than rclone, or access the -wrapped remote directly using rclone, there will not be any -encryption/decryption: Downloading existing content will just give you -the encrypted (scrambled) format, and anything you upload will not -become encrypted. + Note that ShareFile is case insensitive so you can't have a file called + "Hello.doc" and one called "hello.doc". -The encryption is a secret-key encryption (also called symmetric key -encryption) algorithm, where a password (or pass phrase) is used to -generate real encryption key. The password can be supplied by user, or -you may chose to let rclone generate one. It will be stored in the -configuration file, in a lightly obscured form. If you are in an -environment where you are not able to keep your configuration secured, -you should add configuration encryption as protection. As long as you -have this configuration file, you will be able to decrypt your data. -Without the configuration file, as long as you remember the password (or -keep it in a safe place), you can re-create the configuration and gain -access to the existing data. You may also configure a corresponding -remote in a different installation to access the same data. See below -for guidance to changing password. - -Encryption uses cryptographic salt, to permute the encryption key so -that the same string may be encrypted in different ways. When -configuring the crypt remote it is optional to enter a salt, or to let -rclone generate a unique salt. If omitted, rclone uses a built-in unique -string. Normally in cryptography, the salt is stored together with the -encrypted content, and do not have to be memorized by the user. This is -not the case in rclone, because rclone does not store any additional -information on the remotes. Use of custom salt is effectively a second -password that must be memorized. - -File content encryption is performed using NaCl SecretBox, based on -XSalsa20 cipher and Poly1305 for integrity. Names (file- and directory -names) are also encrypted by default, but this has some implications and -is therefore possible to be turned off. - -Configuration - -Here is an example of how to make a remote called secret. - -To use crypt, first set up the underlying remote. Follow the -rclone config instructions for the specific backend. - -Before configuring the crypt remote, check the underlying remote is -working. In this example the underlying remote is called remote. We will -configure a path path within this remote to contain the encrypted -content. Anything inside remote:path will be encrypted and anything -outside will not. - -Configure crypt using rclone config. In this example the crypt remote is -called secret, to differentiate it from the underlying remote. 
- -When you are done you can use the crypt remote named secret just as you -would with any other remote, e.g. rclone copy D:\docs secret:\docs, and -rclone will encrypt and decrypt as needed on the fly. If you access the -wrapped remote remote:path directly you will bypass the encryption, and -anything you read will be in encrypted form, and anything you write will -be unencrypted. To avoid issues it is best to configure a dedicated path -for encrypted content, and access it exclusively through a crypt remote. - - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n - name> secret - Type of storage to configure. - Enter a string value. Press Enter for the default (""). - Choose a number from below, or type in your own value - [snip] - XX / Encrypt/Decrypt a remote - \ "crypt" - [snip] - Storage> crypt - ** See help for crypt backend at: https://rclone.org/crypt/ ** + ShareFile only supports filenames up to 256 characters in length. + + `rclone about` is not supported by the Citrix ShareFile backend. Backends without + this capability cannot determine free space for an rclone mount or + use policy `mfs` (most free space) as a member of an rclone union + remote. + + See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) + + # Crypt + + Rclone `crypt` remotes encrypt and decrypt other remotes. + + A remote of type `crypt` does not access a [storage system](https://rclone.org/overview/) + directly, but instead wraps another remote, which in turn accesses + the storage system. This is similar to how [alias](https://rclone.org/alias/), + [union](https://rclone.org/union/), [chunker](https://rclone.org/chunker/) + and a few others work. It makes the usage very flexible, as you can + add a layer, in this case an encryption layer, on top of any other + backend, even in multiple layers. Rclone's functionality + can be used as with any other remote, for example you can + [mount](https://rclone.org/commands/rclone_mount/) a crypt remote. + + Accessing a storage system through a crypt remote realizes client-side + encryption, which makes it safe to keep your data in a location you do + not trust will not get compromised. + When working against the `crypt` remote, rclone will automatically + encrypt (before uploading) and decrypt (after downloading) on your local + system as needed on the fly, leaving the data encrypted at rest in the + wrapped remote. If you access the storage system using an application + other than rclone, or access the wrapped remote directly using rclone, + there will not be any encryption/decryption: Downloading existing content + will just give you the encrypted (scrambled) format, and anything you + upload will *not* become encrypted. + + The encryption is a secret-key encryption (also called symmetric key encryption) + algorithm, where a password (or pass phrase) is used to generate real encryption key. + The password can be supplied by user, or you may chose to let rclone + generate one. It will be stored in the configuration file, in a lightly obscured form. + If you are in an environment where you are not able to keep your configuration + secured, you should add + [configuration encryption](https://rclone.org/docs/#configuration-encryption) + as protection. As long as you have this configuration file, you will be able to + decrypt your data. 
Without the configuration file, as long as you remember
+ the password (or keep it in a safe place), you can re-create the configuration
+ and gain access to the existing data. You may also configure a corresponding
+ remote in a different installation to access the same data.
+ See below for guidance on [changing password](#changing-password).
+ 
+ Encryption uses a [cryptographic salt](https://en.wikipedia.org/wiki/Salt_(cryptography))
+ to permute the encryption key so that the same string may be encrypted in
+ different ways. When configuring the crypt remote it is optional to enter a salt,
+ or to let rclone generate a unique salt. If omitted, rclone uses a built-in unique string.
+ Normally in cryptography, the salt is stored together with the encrypted content,
+ and does not have to be memorized by the user. This is not the case in rclone,
+ because rclone does not store any additional information on the remotes. Use of
+ a custom salt is effectively a second password that must be memorized.
+ 
+ [File content](#file-encryption) encryption is performed using
+ [NaCl SecretBox](https://godoc.org/golang.org/x/crypto/nacl/secretbox),
+ based on XSalsa20 cipher and Poly1305 for integrity.
+ [Names](#name-encryption) (file- and directory names) are also encrypted
+ by default, but this has some implications and can therefore
+ be turned off.
+ 
+ ## Configuration
+ 
+ Here is an example of how to make a remote called `secret`.
+ 
+ To use `crypt`, first set up the underlying remote. Follow the
+ `rclone config` instructions for the specific backend.
+ 
+ Before configuring the crypt remote, check the underlying remote is
+ working. In this example the underlying remote is called `remote`.
+ We will configure a path `path` within this remote to contain the
+ encrypted content. Anything inside `remote:path` will be encrypted
+ and anything outside will not.
+ 
+ Configure `crypt` using `rclone config`. In this example the `crypt`
+ remote is called `secret`, to differentiate it from the underlying
+ `remote`.
+ 
+ When you are done you can use the crypt remote named `secret` just
+ as you would with any other remote, e.g. `rclone copy D:\docs secret:\docs`,
+ and rclone will encrypt and decrypt as needed on the fly.
+ If you access the wrapped remote `remote:path` directly you will bypass
+ the encryption, and anything you read will be in encrypted form, and
+ anything you write will be unencrypted. To avoid issues it is best to
+ configure a dedicated path for encrypted content, and access it
+ exclusively through a crypt remote.
+ 
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> secret
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / Encrypt/Decrypt a remote
+   \ "crypt"
+[snip]
+Storage> crypt
+** See help for crypt backend at: https://rclone.org/crypt/ **
+
+Remote to encrypt/decrypt.
+Normally should contain a ':' and a path, eg "myremote:path/to/dir",
+"myremote:bucket" or maybe "myremote:" (not recommended).
+Enter a string value. Press Enter for the default ("").
+remote> remote:path
+How to encrypt the filenames.
+Enter a string value. Press Enter for the default ("standard").
+Choose a number from below, or type in your own value.
+  / Encrypt the filenames.
+1 | See the docs for the details.
+   \ "standard"
+2 / Very simple filename obfuscation.
+   \ "obfuscate"
+  / Don't encrypt the file names.
+3 | Adds a ".bin" extension only.
+   \ "off"
+filename_encryption>
+Option to either encrypt directory names or leave them intact.
+
+NB If filename_encryption is "off" then this option will do nothing.
+Enter a boolean value (true or false). Press Enter for the default ("true").
+Choose a number from below, or type in your own value
+ 1 / Encrypt directory names.
+   \ "true"
+ 2 / Don't encrypt directory names, leave them intact.
+   \ "false"
+directory_name_encryption>
+Password or pass phrase for encryption.
+y) Yes type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Password or pass phrase for salt. Optional but recommended.
+Should be different to the previous password.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank (default)
+y/g/n> g
+Password strength in bits.
+64 is just about memorable
+128 is secure
+1024 is the maximum
+Bits> 128
+Your password is: JAsJvRcgR-_veXNfy_sGmQ
+Use this password? Please note that an obscured version of this
+password (and not the password itself) will be stored under your
+configuration file, so keep this generated password in a safe place.
+y) Yes (default)
+n) No
+y/n>
+Edit advanced config? (y/n)
+y) Yes
+n) No (default)
+y/n>
+Remote config
+--------------------
+[secret]
+type = crypt
+remote = remote:path
+password = *** ENCRYPTED ***
+password2 = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d>
"off" filename_encryption> Option to either encrypt directory +names or leave them intact. + +NB If filename_encryption is "off" then this option will do nothing. +Enter a boolean value (true or false). Press Enter for the default +("true"). Choose a number from below, or type in your own value 1 / +Encrypt directory names.  "true" 2 / Don't encrypt directory names, +leave them intact.  "false" directory_name_encryption> Password or pass +phrase for encryption. y) Yes type in my own password g) Generate random +password y/g> y Enter the password: password: Confirm the password: +password: Password or pass phrase for salt. Optional but recommended. +Should be different to the previous password. y) Yes type in my own +password g) Generate random password n) No leave this optional password +blank (default) y/g/n> g Password strength in bits. 64 is just about +memorable 128 is secure 1024 is the maximum Bits> 128 Your password is: +JAsJvRcgR-_veXNfy_sGmQ Use this password? Please note that an obscured +version of this password (and not the password itself) will be stored +under your configuration file, so keep this generated password in a safe +place. y) Yes (default) n) No y/n> Edit advanced config? (y/n) y) Yes n) +No (default) y/n> Remote config -------------------- [secret] type = +crypt remote = remote:path password = *** ENCRYPTED password2 = +ENCRYPTED *** -------------------- y) Yes this is OK (default) e) Edit +this remote d) Delete this remote y/e/d> + + + **Important** The crypt password stored in `rclone.conf` is lightly + obscured. That only protects it from cursory inspection. It is not + secure unless [configuration encryption](https://rclone.org/docs/#configuration-encryption) of `rclone.conf` is specified. + + A long passphrase is recommended, or `rclone config` can generate a + random one. + + The obscured password is created using AES-CTR with a static key. The + salt is stored verbatim at the beginning of the obscured password. This + static key is shared between all versions of rclone. + + If you reconfigure rclone with the same passwords/passphrases + elsewhere it will be compatible, but the obscured version will be different + due to the different salt. + + Rclone does not encrypt + + * file length - this can be calculated within 16 bytes + * modification time - used for syncing + + ### Specifying the remote + + When configuring the remote to encrypt/decrypt, you may specify any + string that rclone accepts as a source/destination of other commands. + + The primary use case is to specify the path into an already configured + remote (e.g. `remote:path/to/dir` or `remote:bucket`), such that + data in a remote untrusted location can be stored encrypted. + + You may also specify a local filesystem path, such as + `/path/to/dir` on Linux, `C:\path\to\dir` on Windows. By creating + a crypt remote pointing to such a local filesystem path, you can + use rclone as a utility for pure local file encryption, for example + to keep encrypted files on a removable USB drive. + + **Note**: A string which do not contain a `:` will by rclone be treated + as a relative path in the local filesystem. For example, if you enter + the name `remote` without the trailing `:`, it will be treated as + a subdirectory of the current directory with name "remote". + + If a path `remote:path/to/dir` is specified, rclone stores encrypted + files in `path/to/dir` on the remote. 
+ 
+ ### Specifying the remote
+ 
+ When configuring the remote to encrypt/decrypt, you may specify any
+ string that rclone accepts as a source/destination of other commands.
+ 
+ The primary use case is to specify the path into an already configured
+ remote (e.g. `remote:path/to/dir` or `remote:bucket`), such that
+ data in a remote untrusted location can be stored encrypted.
+ 
+ You may also specify a local filesystem path, such as
+ `/path/to/dir` on Linux, `C:\path\to\dir` on Windows. By creating
+ a crypt remote pointing to such a local filesystem path, you can
+ use rclone as a utility for pure local file encryption, for example
+ to keep encrypted files on a removable USB drive.
+ 
+ **Note**: A string which does not contain a `:` will be treated by rclone
+ as a relative path in the local filesystem. For example, if you enter
+ the name `remote` without the trailing `:`, it will be treated as
+ a subdirectory of the current directory with the name "remote".
+ 
+ If a path `remote:path/to/dir` is specified, rclone stores encrypted
+ files in `path/to/dir` on the remote. With file name encryption, files
+ saved to `secret:subdir/subfile` are stored in the unencrypted path
+ `path/to/dir` but the `subdir/subfile` element is encrypted.
+ 
+ The path you specify does not have to exist, rclone will create
+ it when needed.
+ 
+ If you intend to use the wrapped remote both directly for keeping
+ unencrypted content, as well as through a crypt remote for encrypted
+ content, it is recommended to point the crypt remote to a separate
+ directory within the wrapped remote. If you use a bucket-based storage
+ system (e.g. Swift, S3, Google Cloud Storage, B2) it is generally
+ advisable to wrap the crypt remote around a specific bucket (`s3:bucket`).
+ If you wrap around the entire root of the storage (`s3:`) and use the
+ optional file name encryption, rclone will encrypt the bucket name.
+ 
+ ### Changing password
+ 
+ Should the password, or the configuration file containing a lightly obscured
+ form of the password, be compromised, you need to re-encrypt your data with
+ a new password. Since rclone uses secret-key encryption, where the encryption
+ key is generated directly from the password kept on the client, it is not
+ possible to change the password/key of already encrypted content. Just changing
+ the password configured for an existing crypt remote means you will no longer
+ be able to decrypt any of the previously encrypted content. The only possibility
+ is to re-upload everything via a crypt remote configured with your new password.
+ 
+ Depending on the size of your data, your bandwidth, storage quota etc, there are
+ different approaches you can take:
+ - If you have everything in a different location, for example on your local system,
+   you could remove all of the prior encrypted files, change the password for your
+   configured crypt remote (or delete and re-create the crypt configuration),
+   and then re-upload everything from the alternative location.
+ - If you have enough space on the storage system you can create a new crypt
+   remote pointing to a separate directory on the same backend, and then use
+   rclone to copy everything from the original crypt remote to the new one,
+   effectively decrypting everything on the fly using the old password and
+   re-encrypting using the new password. When done, delete the original crypt
+   remote directory and finally the rclone crypt configuration with the old password.
+   All data will be streamed from the storage system and back, so you will
+   get half the bandwidth and be charged twice if you have upload and download quota
+   on the storage system.
+ 
+ **Note**: A security problem related to the random password generator
+ was fixed in rclone version 1.53.3 (released 2020-11-19). Passwords generated
+ by rclone config in version 1.49.0 (released 2019-08-26) to 1.53.2
+ (released 2020-10-26) are not considered secure and should be changed.
+ If you made up your own password, or used rclone version older than 1.49.0 or
+ newer than 1.53.2 to generate it, you are *not* affected by this issue.
+ See [issue #4783](https://github.com/rclone/rclone/issues/4783) for more
+ details, and a tool you can use to check if you are affected.
+ 
+ ### Example
+ 
+ Create the following file structure using "standard" file name
+ encryption.
+ 
+plaintext/
+├── file0.txt
+├── file1.txt
+└── subdir
+    ├── file2.txt
+    ├── file3.txt
+    └── subsubdir
+        └── file4.txt
+ 
+ Copy these to the remote, and list them
+ 
+$ rclone -q copy plaintext secret:
+$ rclone -q ls secret:
+        7 file1.txt
+        6 file0.txt
+        8 subdir/file2.txt
+       10 subdir/subsubdir/file4.txt
+        9 subdir/file3.txt
+ 
+ The crypt remote looks like
+ 
+$ rclone -q ls remote:path
+       55 hagjclgavj2mbiqm6u6cnjjqcg
+       54 v05749mltvv1tf4onltun46gls
+       57 86vhrsv86mpbtd3a0akjuqslj8/dlj7fkq4kdq72emafg7a7s41uo
+       58 86vhrsv86mpbtd3a0akjuqslj8/7uu829995du6o42n32otfhjqp4/b9pausrfansjth5ob3jkdqd4lc
+       56 86vhrsv86mpbtd3a0akjuqslj8/8njh1sk437gttmep3p70g81aps
+ 
+ The directory structure is preserved
+ 
+$ rclone -q ls secret:subdir
+        8 file2.txt
+        9 file3.txt
+       10 subsubdir/file4.txt
+ 
+ Without file name encryption `.bin` extensions are added to underlying
+ names. This prevents the cloud provider attempting to interpret file
+ content.
+ 
+$ rclone -q ls remote:path
+       54 file0.txt.bin
+       57 subdir/file3.txt.bin
+       56 subdir/file2.txt.bin
+       58 subdir/subsubdir/file4.txt.bin
+       55 file1.txt.bin
+ 
+ ### File name encryption modes
+ 
+ Off
+ 
+ * doesn't hide file names or directory structure
+ * allows for longer file names (~246 characters)
+ * can use sub paths and copy single files
+ 
+ Standard
+ 
+ * file names encrypted
+ * file names can't be as long (~143 characters)
+ * can use sub paths and copy single files
+ * directory structure visible
+ * identical file names will have identical uploaded names
+ * can use shortcuts to shorten the directory recursion
+ 
+ Obfuscation
+ 
+ This is a simple "rotate" of the filename, with each file having a rot
+ distance based on the filename. Rclone stores the distance at the
+ beginning of the filename. A file called "hello" may become "53.jgnnq".
+ 
+ Obfuscation is not a strong encryption of filenames, but hinders
+ automated scanning tools from picking up on filename patterns. It is an
+ intermediate between "off" and "standard" which allows for longer path
+ segment names.
+ 
+ There is a possibility with some unicode based filenames that the
+ obfuscation is weak and may map lower case characters to upper case
+ equivalents.
+ 
+ Obfuscation cannot be relied upon for strong protection.
+ 
+ * file names very lightly obfuscated
+ * file names can be longer than standard encryption
+ * can use sub paths and copy single files
+ * directory structure visible
+ * identical file names will have identical uploaded names
+ 
+ Cloud storage systems have limits on file name length and
+ total path length which rclone is more likely to breach using
+ "Standard" file name encryption. Where file names are less than 156
+ characters in length issues should not be encountered, irrespective of
+ cloud storage provider.
+ 
+ An experimental advanced option `filename_encoding` is now provided to
+ address this problem to a certain degree.
+ For cloud storage systems with case sensitive file names (e.g. Google Drive),
+ `base64` can be used to reduce file name length.
+ For cloud storage systems using UTF-16 to store file names internally
+ (e.g. OneDrive, Dropbox, Box), `base32768` can be used to drastically reduce
+ file name length.
+ 
+ An alternative, future rclone file name encryption mode may tolerate
+ backend provider path length limits.
+ 
+ ### Directory name encryption
+ 
+ Crypt offers the option of encrypting dir names or leaving them intact.
+ There are two options: + + True + + Encrypts the whole file path including directory names + Example: + `1/12/123.txt` is encrypted to + `p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0` + + False + + Only encrypts file names, skips directory names + Example: + `1/12/123.txt` is encrypted to + `1/12/qgm4avr35m5loi1th53ato71v0` + + + ### Modified time and hashes + + Crypt stores modification times using the underlying remote so support + depends on that. + + Hashes are not stored for crypt. However the data integrity is + protected by an extremely strong crypto authenticator. + + Use the `rclone cryptcheck` command to check the + integrity of an encrypted remote instead of `rclone check` which can't + check the checksums properly. + + + ### Standard options + + Here are the Standard options specific to crypt (Encrypt/Decrypt a remote). + + #### --crypt-remote Remote to encrypt/decrypt. - Normally should contain a ':' and a path, eg "myremote:path/to/dir", + + Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). - Enter a string value. Press Enter for the default (""). - remote> remote:path + + Properties: + + - Config: remote + - Env Var: RCLONE_CRYPT_REMOTE + - Type: string + - Required: true + + #### --crypt-filename-encryption + How to encrypt the filenames. - Enter a string value. Press Enter for the default ("standard"). - Choose a number from below, or type in your own value. - / Encrypt the filenames. - 1 | See the docs for the details. - \ "standard" - 2 / Very simple filename obfuscation. - \ "obfuscate" - / Don't encrypt the file names. - 3 | Adds a ".bin" extension only. - \ "off" - filename_encryption> + + Properties: + + - Config: filename_encryption + - Env Var: RCLONE_CRYPT_FILENAME_ENCRYPTION + - Type: string + - Default: "standard" + - Examples: + - "standard" + - Encrypt the filenames. + - See the docs for the details. + - "obfuscate" + - Very simple filename obfuscation. + - "off" + - Don't encrypt the file names. + - Adds a ".bin", or "suffix" extension only. + + #### --crypt-directory-name-encryption + Option to either encrypt directory names or leave them intact. NB If filename_encryption is "off" then this option will do nothing. - Enter a boolean value (true or false). Press Enter for the default ("true"). - Choose a number from below, or type in your own value - 1 / Encrypt directory names. - \ "true" - 2 / Don't encrypt directory names, leave them intact. - \ "false" - directory_name_encryption> + + Properties: + + - Config: directory_name_encryption + - Env Var: RCLONE_CRYPT_DIRECTORY_NAME_ENCRYPTION + - Type: bool + - Default: true + - Examples: + - "true" + - Encrypt directory names. + - "false" + - Don't encrypt directory names, leave them intact. + + #### --crypt-password + Password or pass phrase for encryption. - y) Yes type in my own password - g) Generate random password - y/g> y - Enter the password: - password: - Confirm the password: - password: - Password or pass phrase for salt. Optional but recommended. + + **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + + Properties: + + - Config: password + - Env Var: RCLONE_CRYPT_PASSWORD + - Type: string + - Required: true + + #### --crypt-password2 + + Password or pass phrase for salt. + + Optional but recommended. Should be different to the previous password. 
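+
+ The obscured form kept in the config file is only light protection. As
+ described elsewhere in this manual, it is AES-CTR with a static key baked
+ into rclone, with the random IV stored verbatim at the front of the result.
+ Here is a minimal Go sketch of that shape - the zero-filled key is a
+ placeholder, *not* rclone's real static key, and the URL-safe base64 output
+ is an assumption of this sketch, so it is not compatible with rclone; use
+ `rclone obscure` for real configs.
+
+     package main
+
+     import (
+         "crypto/aes"
+         "crypto/cipher"
+         "crypto/rand"
+         "encoding/base64"
+         "fmt"
+     )
+
+     // placeholderKey stands in for the static key baked into rclone.
+     var placeholderKey = make([]byte, 32)
+
+     // obscure lightly obscures (does not securely encrypt) a password:
+     // AES-CTR with a static key, a random IV prepended, base64 encoded.
+     func obscure(password string) (string, error) {
+         block, err := aes.NewCipher(placeholderKey)
+         if err != nil {
+             return "", err
+         }
+         buf := make([]byte, aes.BlockSize+len(password))
+         iv := buf[:aes.BlockSize]
+         if _, err := rand.Read(iv); err != nil {
+             return "", err
+         }
+         cipher.NewCTR(block, iv).XORKeyStream(buf[aes.BlockSize:], []byte(password))
+         return base64.RawURLEncoding.EncodeToString(buf), nil
+     }
+
+     func main() {
+         out, err := obscure("correct horse battery staple")
+         if err != nil {
+             panic(err)
+         }
+         fmt.Println(out)
+     }
+
+ Because the IV is random, obscuring the same password twice gives different
+ output, which is why reconfiguring elsewhere produces a different obscured
+ string that is nevertheless compatible.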
- y) Yes type in my own password - g) Generate random password - n) No leave this optional password blank (default) - y/g/n> g - Password strength in bits. - 64 is just about memorable - 128 is secure - 1024 is the maximum - Bits> 128 - Your password is: JAsJvRcgR-_veXNfy_sGmQ - Use this password? Please note that an obscured version of this - password (and not the password itself) will be stored under your - configuration file, so keep this generated password in a safe place. - y) Yes (default) - n) No - y/n> - Edit advanced config? (y/n) - y) Yes - n) No (default) - y/n> - Remote config - -------------------- - [secret] - type = crypt - remote = remote:path - password = *** ENCRYPTED *** - password2 = *** ENCRYPTED *** - -------------------- - y) Yes this is OK (default) - e) Edit this remote - d) Delete this remote - y/e/d> - -Important The crypt password stored in rclone.conf is lightly obscured. -That only protects it from cursory inspection. It is not secure unless -configuration encryption of rclone.conf is specified. - -A long passphrase is recommended, or rclone config can generate a random -one. - -The obscured password is created using AES-CTR with a static key. The -salt is stored verbatim at the beginning of the obscured password. This -static key is shared between all versions of rclone. - -If you reconfigure rclone with the same passwords/passphrases elsewhere -it will be compatible, but the obscured version will be different due to -the different salt. - -Rclone does not encrypt - -- file length - this can be calculated within 16 bytes -- modification time - used for syncing - -Specifying the remote - -When configuring the remote to encrypt/decrypt, you may specify any -string that rclone accepts as a source/destination of other commands. - -The primary use case is to specify the path into an already configured -remote (e.g. remote:path/to/dir or remote:bucket), such that data in a -remote untrusted location can be stored encrypted. - -You may also specify a local filesystem path, such as /path/to/dir on -Linux, C:\path\to\dir on Windows. By creating a crypt remote pointing to -such a local filesystem path, you can use rclone as a utility for pure -local file encryption, for example to keep encrypted files on a -removable USB drive. - -Note: A string which do not contain a : will by rclone be treated as a -relative path in the local filesystem. For example, if you enter the -name remote without the trailing :, it will be treated as a subdirectory -of the current directory with name "remote". - -If a path remote:path/to/dir is specified, rclone stores encrypted files -in path/to/dir on the remote. With file name encryption, files saved to -secret:subdir/subfile are stored in the unencrypted path path/to/dir but -the subdir/subpath element is encrypted. - -The path you specify does not have to exist, rclone will create it when -needed. - -If you intend to use the wrapped remote both directly for keeping -unencrypted content, as well as through a crypt remote for encrypted -content, it is recommended to point the crypt remote to a separate -directory within the wrapped remote. If you use a bucket-based storage -system (e.g. Swift, S3, Google Compute Storage, B2) it is generally -advisable to wrap the crypt remote around a specific bucket (s3:bucket). -If wrapping around the entire root of the storage (s3:), and use the -optional file name encryption, rclone will encrypt the bucket name. 
- -Changing password - -Should the password, or the configuration file containing a lightly -obscured form of the password, be compromised, you need to re-encrypt -your data with a new password. Since rclone uses secret-key encryption, -where the encryption key is generated directly from the password kept on -the client, it is not possible to change the password/key of already -encrypted content. Just changing the password configured for an existing -crypt remote means you will no longer able to decrypt any of the -previously encrypted content. The only possibility is to re-upload -everything via a crypt remote configured with your new password. - -Depending on the size of your data, your bandwidth, storage quota etc, -there are different approaches you can take: - If you have everything in -a different location, for example on your local system, you could remove -all of the prior encrypted files, change the password for your -configured crypt remote (or delete and re-create the crypt -configuration), and then re-upload everything from the alternative -location. - If you have enough space on the storage system you can -create a new crypt remote pointing to a separate directory on the same -backend, and then use rclone to copy everything from the original crypt -remote to the new, effectively decrypting everything on the fly using -the old password and re-encrypting using the new password. When done, -delete the original crypt remote directory and finally the rclone crypt -configuration with the old password. All data will be streamed from the -storage system and back, so you will get half the bandwidth and be -charged twice if you have upload and download quota on the storage -system. - -Note: A security problem related to the random password generator was -fixed in rclone version 1.53.3 (released 2020-11-19). Passwords -generated by rclone config in version 1.49.0 (released 2019-08-26) to -1.53.2 (released 2020-10-26) are not considered secure and should be -changed. If you made up your own password, or used rclone version older -than 1.49.0 or newer than 1.53.2 to generate it, you are not affected by -this issue. See issue #4783 for more details, and a tool you can use to -check if you are affected. - -Example - -Create the following file structure using "standard" file name -encryption. - - plaintext/ - ├── file0.txt - ├── file1.txt - └── subdir - ├── file2.txt - ├── file3.txt - └── subsubdir - └── file4.txt - -Copy these to the remote, and list them - - $ rclone -q copy plaintext secret: - $ rclone -q ls secret: - 7 file1.txt - 6 file0.txt - 8 subdir/file2.txt - 10 subdir/subsubdir/file4.txt - 9 subdir/file3.txt - -The crypt remote looks like - - $ rclone -q ls remote:path - 55 hagjclgavj2mbiqm6u6cnjjqcg - 54 v05749mltvv1tf4onltun46gls - 57 86vhrsv86mpbtd3a0akjuqslj8/dlj7fkq4kdq72emafg7a7s41uo - 58 86vhrsv86mpbtd3a0akjuqslj8/7uu829995du6o42n32otfhjqp4/b9pausrfansjth5ob3jkdqd4lc - 56 86vhrsv86mpbtd3a0akjuqslj8/8njh1sk437gttmep3p70g81aps - -The directory structure is preserved - - $ rclone -q ls secret:subdir - 8 file2.txt - 9 file3.txt - 10 subsubdir/file4.txt - -Without file name encryption .bin extensions are added to underlying -names. This prevents the cloud provider attempting to interpret file -content. 
- - $ rclone -q ls remote:path - 54 file0.txt.bin - 57 subdir/file3.txt.bin - 56 subdir/file2.txt.bin - 58 subdir/subsubdir/file4.txt.bin - 55 file1.txt.bin - -File name encryption modes - -Off - -- doesn't hide file names or directory structure -- allows for longer file names (~246 characters) -- can use sub paths and copy single files -Standard + **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). -- file names encrypted -- file names can't be as long (~143 characters) -- can use sub paths and copy single files -- directory structure visible -- identical files names will have identical uploaded names -- can use shortcuts to shorten the directory recursion - -Obfuscation - -This is a simple "rotate" of the filename, with each file having a rot -distance based on the filename. Rclone stores the distance at the -beginning of the filename. A file called "hello" may become "53.jgnnq". - -Obfuscation is not a strong encryption of filenames, but hinders -automated scanning tools picking up on filename patterns. It is an -intermediate between "off" and "standard" which allows for longer path -segment names. + Properties: -There is a possibility with some unicode based filenames that the -obfuscation is weak and may map lower case characters to upper case -equivalents. + - Config: password2 + - Env Var: RCLONE_CRYPT_PASSWORD2 + - Type: string + - Required: false -Obfuscation cannot be relied upon for strong protection. + ### Advanced options -- file names very lightly obfuscated -- file names can be longer than standard encryption -- can use sub paths and copy single files -- directory structure visible -- identical files names will have identical uploaded names + Here are the Advanced options specific to crypt (Encrypt/Decrypt a remote). -Cloud storage systems have limits on file name length and total path -length which rclone is more likely to breach using "Standard" file name -encryption. Where file names are less than 156 characters in length -issues should not be encountered, irrespective of cloud storage -provider. + #### --crypt-server-side-across-configs -An experimental advanced option filename_encoding is now provided to -address this problem to a certain degree. For cloud storage systems with -case sensitive file names (e.g. Google Drive), base64 can be used to -reduce file name length. For cloud storage systems using UTF-16 to store -file names internally (e.g. OneDrive, Dropbox), base32768 can be used to -drastically reduce file name length. + Deprecated: use --server-side-across-configs instead. -An alternative, future rclone file name encryption mode may tolerate -backend provider path length limits. + Allow server-side operations (e.g. copy) to work across different crypt configs. -Directory name encryption + Normally this option is not what you want, but if you have two crypts + pointing to the same backend you can use it. -Crypt offers the option of encrypting dir names or leaving them intact. -There are two options: + This can be used, for example, to change file name encryption type + without re-uploading all the data. Just make two crypt backends + pointing to two different directories with the single changed + parameter and use rclone move to move the files between the crypt + remotes. 
-True + Properties: -Encrypts the whole file path including directory names Example: -1/12/123.txt is encrypted to -p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0 + - Config: server_side_across_configs + - Env Var: RCLONE_CRYPT_SERVER_SIDE_ACROSS_CONFIGS + - Type: bool + - Default: false -False + #### --crypt-show-mapping -Only encrypts file names, skips directory names Example: 1/12/123.txt is -encrypted to 1/12/qgm4avr35m5loi1th53ato71v0 + For all files listed show how the names encrypt. -Modified time and hashes + If this flag is set then for each file that the remote is asked to + list, it will log (at level INFO) a line stating the decrypted file + name and the encrypted file name. -Crypt stores modification times using the underlying remote so support -depends on that. + This is so you can work out which encrypted names are which decrypted + names just in case you need to do something with the encrypted file + names, or for debugging purposes. -Hashes are not stored for crypt. However the data integrity is protected -by an extremely strong crypto authenticator. + Properties: -Use the rclone cryptcheck command to check the integrity of an encrypted -remote instead of rclone check which can't check the checksums properly. + - Config: show_mapping + - Env Var: RCLONE_CRYPT_SHOW_MAPPING + - Type: bool + - Default: false -Standard options + #### --crypt-no-data-encryption -Here are the Standard options specific to crypt (Encrypt/Decrypt a -remote). + Option to either encrypt file data or leave it unencrypted. ---crypt-remote + Properties: -Remote to encrypt/decrypt. + - Config: no_data_encryption + - Env Var: RCLONE_CRYPT_NO_DATA_ENCRYPTION + - Type: bool + - Default: false + - Examples: + - "true" + - Don't encrypt file data, leave it unencrypted. + - "false" + - Encrypt file data. -Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", -"myremote:bucket" or maybe "myremote:" (not recommended). + #### --crypt-pass-bad-blocks -Properties: + If set this will pass bad blocks through as all 0. -- Config: remote -- Env Var: RCLONE_CRYPT_REMOTE -- Type: string -- Required: true + This should not be set in normal operation, it should only be set if + trying to recover an encrypted file with errors and it is desired to + recover as much of the file as possible. ---crypt-filename-encryption + Properties: -How to encrypt the filenames. + - Config: pass_bad_blocks + - Env Var: RCLONE_CRYPT_PASS_BAD_BLOCKS + - Type: bool + - Default: false -Properties: + #### --crypt-filename-encoding -- Config: filename_encryption -- Env Var: RCLONE_CRYPT_FILENAME_ENCRYPTION -- Type: string -- Default: "standard" -- Examples: - - "standard" - - Encrypt the filenames. - - See the docs for the details. - - "obfuscate" - - Very simple filename obfuscation. - - "off" - - Don't encrypt the file names. - - Adds a ".bin", or "suffix" extension only. + How to encode the encrypted filename to text string. ---crypt-directory-name-encryption + This option could help with shortening the encrypted filename. The + suitable option would depend on the way your remote count the filename + length and if it's case sensitive. -Option to either encrypt directory names or leave them intact. + Properties: -NB If filename_encryption is "off" then this option will do nothing. + - Config: filename_encoding + - Env Var: RCLONE_CRYPT_FILENAME_ENCODING + - Type: string + - Default: "base32" + - Examples: + - "base32" + - Encode using base32. Suitable for all remote. 
+ - "base64" + - Encode using base64. Suitable for case sensitive remote. + - "base32768" + - Encode using base32768. Suitable if your remote counts UTF-16 or + - Unicode codepoint instead of UTF-8 byte length. (Eg. Onedrive, Dropbox) -Properties: + #### --crypt-suffix -- Config: directory_name_encryption -- Env Var: RCLONE_CRYPT_DIRECTORY_NAME_ENCRYPTION -- Type: bool -- Default: true -- Examples: - - "true" - - Encrypt directory names. - - "false" - - Don't encrypt directory names, leave them intact. + If this is set it will override the default suffix of ".bin". ---crypt-password + Setting suffix to "none" will result in an empty suffix. This may be useful + when the path length is critical. -Password or pass phrase for encryption. + Properties: -NB Input to this must be obscured - see rclone obscure. + - Config: suffix + - Env Var: RCLONE_CRYPT_SUFFIX + - Type: string + - Default: ".bin" -Properties: + ### Metadata -- Config: password -- Env Var: RCLONE_CRYPT_PASSWORD -- Type: string -- Required: true + Any metadata supported by the underlying remote is read and written. ---crypt-password2 + See the [metadata](https://rclone.org/docs/#metadata) docs for more info. -Password or pass phrase for salt. + ## Backend commands -Optional but recommended. Should be different to the previous password. + Here are the commands specific to the crypt backend. -NB Input to this must be obscured - see rclone obscure. + Run them with -Properties: + rclone backend COMMAND remote: -- Config: password2 -- Env Var: RCLONE_CRYPT_PASSWORD2 -- Type: string -- Required: false + The help below will explain what arguments each command takes. -Advanced options + See the [backend](https://rclone.org/commands/rclone_backend/) command for more + info on how to pass options and arguments. -Here are the Advanced options specific to crypt (Encrypt/Decrypt a -remote). + These can be run on a running backend using the rc command + [backend/command](https://rclone.org/rc/#backend-command). ---crypt-server-side-across-configs + ### encode -Deprecated: use --server-side-across-configs instead. + Encode the given filename(s) -Allow server-side operations (e.g. copy) to work across different crypt -configs. + rclone backend encode remote: [options] [+] -Normally this option is not what you want, but if you have two crypts -pointing to the same backend you can use it. + This encodes the filenames given as arguments returning a list of + strings of the encoded results. -This can be used, for example, to change file name encryption type -without re-uploading all the data. Just make two crypt backends pointing -to two different directories with the single changed parameter and use -rclone move to move the files between the crypt remotes. + Usage Example: -Properties: + rclone backend encode crypt: file1 [file2...] + rclone rc backend/command command=encode fs=crypt: file1 [file2...] -- Config: server_side_across_configs -- Env Var: RCLONE_CRYPT_SERVER_SIDE_ACROSS_CONFIGS -- Type: bool -- Default: false ---crypt-show-mapping + ### decode -For all files listed show how the names encrypt. + Decode the given filename(s) -If this flag is set then for each file that the remote is asked to list, -it will log (at level INFO) a line stating the decrypted file name and -the encrypted file name. + rclone backend decode remote: [options] [+] -This is so you can work out which encrypted names are which decrypted -names just in case you need to do something with the encrypted file -names, or for debugging purposes. 
+ This decodes the filenames given as arguments returning a list of + strings of the decoded results. It will return an error if any of the + inputs are invalid. -Properties: + Usage Example: -- Config: show_mapping -- Env Var: RCLONE_CRYPT_SHOW_MAPPING -- Type: bool -- Default: false + rclone backend decode crypt: encryptedfile1 [encryptedfile2...] + rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile2...] ---crypt-no-data-encryption -Option to either encrypt file data or leave it unencrypted. -Properties: -- Config: no_data_encryption -- Env Var: RCLONE_CRYPT_NO_DATA_ENCRYPTION -- Type: bool -- Default: false -- Examples: - - "true" - - Don't encrypt file data, leave it unencrypted. - - "false" - - Encrypt file data. + ## Backing up an encrypted remote ---crypt-pass-bad-blocks + If you wish to backup an encrypted remote, it is recommended that you use + `rclone sync` on the encrypted files, and make sure the passwords are + the same in the new encrypted remote. -If set this will pass bad blocks through as all 0. + This will have the following advantages -This should not be set in normal operation, it should only be set if -trying to recover an encrypted file with errors and it is desired to -recover as much of the file as possible. + * `rclone sync` will check the checksums while copying + * you can use `rclone check` between the encrypted remotes + * you don't decrypt and encrypt unnecessarily -Properties: + For example, let's say you have your original remote at `remote:` with + the encrypted version at `eremote:` with path `remote:crypt`. You + would then set up the new remote `remote2:` and then the encrypted + version `eremote2:` with path `remote2:crypt` using the same passwords + as `eremote:`. -- Config: pass_bad_blocks -- Env Var: RCLONE_CRYPT_PASS_BAD_BLOCKS -- Type: bool -- Default: false + To sync the two remotes you would do ---crypt-filename-encoding + rclone sync --interactive remote:crypt remote2:crypt -How to encode the encrypted filename to text string. + And to check the integrity you would do -This option could help with shortening the encrypted filename. The -suitable option would depend on the way your remote count the filename -length and if it's case sensitive. + rclone check remote:crypt remote2:crypt -Properties: + ## File formats -- Config: filename_encoding -- Env Var: RCLONE_CRYPT_FILENAME_ENCODING -- Type: string -- Default: "base32" -- Examples: - - "base32" - - Encode using base32. Suitable for all remote. - - "base64" - - Encode using base64. Suitable for case sensitive remote. - - "base32768" - - Encode using base32768. Suitable if your remote counts - UTF-16 or - - Unicode codepoint instead of UTF-8 byte length. (Eg. - Onedrive, Dropbox) + ### File encryption ---crypt-suffix + Files are encrypted 1:1 source file to destination object. The file + has a header and is divided into chunks. -If this is set it will override the default suffix of ".bin". + #### Header -Setting suffix to "none" will result in an empty suffix. This may be -useful when the path length is critical. + * 8 bytes magic string `RCLONE\x00\x00` + * 24 bytes Nonce (IV) -Properties: + The initial nonce is generated from the operating systems crypto + strong random number generator. The nonce is incremented for each + chunk read making sure each nonce is unique for each block written. + The chance of a nonce being re-used is minuscule. If you wrote an + exabyte of data (10¹⁸ bytes) you would have a probability of + approximately 2×10⁻³² of re-using a nonce. 
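+
+ To make the header layout concrete, here is a short Go sketch (not
+ rclone's actual implementation) that checks the magic string and extracts
+ the initial nonce from an encrypted file; the file name `encrypted.bin`
+ is just a placeholder:
+
+     package main
+
+     import (
+         "bytes"
+         "errors"
+         "fmt"
+         "io"
+         "os"
+     )
+
+     // fileMagic is the 8 byte magic string documented above.
+     var fileMagic = []byte("RCLONE\x00\x00")
+
+     // readHeader reads the 32 byte header of an encrypted file and
+     // returns the 24 byte nonce (IV) used for the first chunk.
+     func readHeader(r io.Reader) ([24]byte, error) {
+         var nonce [24]byte
+         header := make([]byte, 8+24)
+         if _, err := io.ReadFull(r, header); err != nil {
+             return nonce, err
+         }
+         if !bytes.Equal(header[:8], fileMagic) {
+             return nonce, errors.New("not an rclone encrypted file: bad magic")
+         }
+         copy(nonce[:], header[8:])
+         return nonce, nil
+     }
+
+     func main() {
+         f, err := os.Open("encrypted.bin") // placeholder input file
+         if err != nil {
+             panic(err)
+         }
+         defer f.Close()
+         nonce, err := readHeader(f)
+         if err != nil {
+             panic(err)
+         }
+         fmt.Printf("initial nonce: %x\n", nonce)
+     }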
-- Config: suffix -- Env Var: RCLONE_CRYPT_SUFFIX -- Type: string -- Default: ".bin" + #### Chunk -Metadata + Each chunk will contain 64 KiB of data, except for the last one which + may have less data. The data chunk is in standard NaCl SecretBox + format. SecretBox uses XSalsa20 and Poly1305 to encrypt and + authenticate messages. -Any metadata supported by the underlying remote is read and written. + Each chunk contains: -See the metadata docs for more info. + * 16 Bytes of Poly1305 authenticator + * 1 - 65536 bytes XSalsa20 encrypted data -Backend commands + 64k chunk size was chosen as the best performing chunk size (the + authenticator takes too much time below this and the performance drops + off due to cache effects above this). Note that these chunks are + buffered in memory so they can't be too big. -Here are the commands specific to the crypt backend. + This uses a 32 byte (256 bit key) key derived from the user password. -Run them with + #### Examples - rclone backend COMMAND remote: + 1 byte file will encrypt to -The help below will explain what arguments each command takes. + * 32 bytes header + * 17 bytes data chunk -See the backend command for more info on how to pass options and -arguments. + 49 bytes total -These can be run on a running backend using the rc command -backend/command. + 1 MiB (1048576 bytes) file will encrypt to -encode + * 32 bytes header + * 16 chunks of 65568 bytes -Encode the given filename(s) + 1049120 bytes total (a 0.05% overhead). This is the overhead for big + files. - rclone backend encode remote: [options] [+] + ### Name encryption -This encodes the filenames given as arguments returning a list of -strings of the encoded results. + File names are encrypted segment by segment - the path is broken up + into `/` separated strings and these are encrypted individually. -Usage Example: + File segments are padded using PKCS#7 to a multiple of 16 bytes + before encryption. - rclone backend encode crypt: file1 [file2...] - rclone rc backend/command command=encode fs=crypt: file1 [file2...] + They are then encrypted with EME using AES with 256 bit key. EME + (ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003 + paper "A Parallelizable Enciphering Mode" by Halevi and Rogaway. -decode + This makes for deterministic encryption which is what we want - the + same filename must encrypt to the same thing otherwise we can't find + it on the cloud storage system. -Decode the given filename(s) + This means that - rclone backend decode remote: [options] [+] + * filenames with the same name will encrypt the same + * filenames which start the same won't have a common prefix -This decodes the filenames given as arguments returning a list of -strings of the decoded results. It will return an error if any of the -inputs are invalid. + This uses a 32 byte key (256 bits) and a 16 byte (128 bits) IV both of + which are derived from the user password. -Usage Example: + After encryption they are written out using a modified version of + standard `base32` encoding as described in RFC4648. The standard + encoding is modified in two ways: - rclone backend decode crypt: encryptedfile1 [encryptedfile2...] - rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile2...] + * it becomes lower case (no-one likes upper case filenames!) + * we strip the padding character `=` -Backing up an encrypted remote + `base32` is used rather than the more efficient `base64` so rclone can be + used on case insensitive remotes (e.g. Windows, Amazon Drive). 
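+
+ As a sketch of just this final text-encoding step in Go (the EME
+ encryption itself is omitted, and the input bytes below are dummies
+ standing in for one encrypted, padded path segment, not real rclone
+ ciphertext):
+
+     package main
+
+     import (
+         "encoding/base32"
+         "fmt"
+         "strings"
+     )
+
+     // encodeSegment applies the modified base32 described above to one
+     // already-encrypted path segment: RFC 4648 standard base32, folded
+     // to lower case, with the "=" padding stripped.
+     func encodeSegment(ciphertext []byte) string {
+         s := base32.StdEncoding.EncodeToString(ciphertext)
+         return strings.ToLower(strings.TrimRight(s, "="))
+     }
+
+     // decodeSegment reverses encodeSegment by re-adding the stripped
+     // padding and upper-casing before decoding.
+     func decodeSegment(name string) ([]byte, error) {
+         s := strings.ToUpper(name)
+         if n := len(s) % 8; n != 0 {
+             s += strings.Repeat("=", 8-n)
+         }
+         return base32.StdEncoding.DecodeString(s)
+     }
+
+     func main() {
+         segment := []byte("0123456789abcdef") // dummy 16 byte segment
+         name := encodeSegment(segment)
+         fmt.Println(name)
+         back, err := decodeSegment(name)
+         if err != nil {
+             panic(err)
+         }
+         fmt.Printf("%s\n", back)
+     }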
-If you wish to backup an encrypted remote, it is recommended that you -use rclone sync on the encrypted files, and make sure the passwords are -the same in the new encrypted remote. + ### Key derivation -This will have the following advantages + Rclone uses `scrypt` with parameters `N=16384, r=8, p=1` with an + optional user supplied salt (password2) to derive the 32+32+16 = 80 + bytes of key material required. If the user doesn't supply a salt + then rclone uses an internal one. -- rclone sync will check the checksums while copying -- you can use rclone check between the encrypted remotes -- you don't decrypt and encrypt unnecessarily + `scrypt` makes it impractical to mount a dictionary attack on rclone + encrypted data. For full protection against this you should always use + a salt. -For example, let's say you have your original remote at remote: with the -encrypted version at eremote: with path remote:crypt. You would then set -up the new remote remote2: and then the encrypted version eremote2: with -path remote2:crypt using the same passwords as eremote:. + ## SEE ALSO -To sync the two remotes you would do + * [rclone cryptdecode](https://rclone.org/commands/rclone_cryptdecode/) - Show forward/reverse mapping of encrypted filenames - rclone sync --interactive remote:crypt remote2:crypt + # Compress -And to check the integrity you would do + ## Warning - rclone check remote:crypt remote2:crypt + This remote is currently **experimental**. Things may break and data may be lost. Anything you do with this remote is + at your own risk. Please understand the risks associated with using experimental code and don't use this remote in + critical applications. -File formats + The `Compress` remote adds compression to another remote. It is best used with remotes containing + many large compressible files. -File encryption + ## Configuration -Files are encrypted 1:1 source file to destination object. The file has -a header and is divided into chunks. + To use this remote, all you need to do is specify another remote and a compression mode to use: -Header +Current remotes: -- 8 bytes magic string RCLONE\x00\x00 -- 24 bytes Nonce (IV) +Name Type ==== ==== remote_to_press sometype -The initial nonce is generated from the operating systems crypto strong -random number generator. The nonce is incremented for each chunk read -making sure each nonce is unique for each block written. The chance of a -nonce being re-used is minuscule. If you wrote an exabyte of data (10¹⁸ -bytes) you would have a probability of approximately 2×10⁻³² of re-using -a nonce. +e) Edit existing remote $ rclone config +f) New remote +g) Delete remote +h) Rename remote +i) Copy remote +j) Set configuration password +k) Quit config e/n/d/r/c/s/q> n name> compress ... 8 / Compress a + remote  "compress" ... Storage> compress ** See help for compress + backend at: https://rclone.org/compress/ ** -Chunk +Remote to compress. Enter a string value. Press Enter for the default +(""). remote> remote_to_press:subdir Compression mode. Enter a string +value. Press Enter for the default ("gzip"). Choose a number from below, +or type in your own value 1 / Gzip compression balanced for speed and +compression strength.  "gzip" compression_mode> gzip Edit advanced +config? 
(y/n) y) Yes n) No (default) y/n> n Remote config +-------------------- [compress] type = compress remote = +remote_to_press:subdir compression_mode = gzip -------------------- y) +Yes this is OK (default) e) Edit this remote d) Delete this remote +y/e/d> y -Each chunk will contain 64 KiB of data, except for the last one which -may have less data. The data chunk is in standard NaCl SecretBox format. -SecretBox uses XSalsa20 and Poly1305 to encrypt and authenticate -messages. -Each chunk contains: + ### Compression Modes -- 16 Bytes of Poly1305 authenticator -- 1 - 65536 bytes XSalsa20 encrypted data + Currently only gzip compression is supported. It provides a decent balance between speed and size and is well + supported by other applications. Compression strength can further be configured via an advanced setting where 0 is no + compression and 9 is strongest compression. -64k chunk size was chosen as the best performing chunk size (the -authenticator takes too much time below this and the performance drops -off due to cache effects above this). Note that these chunks are -buffered in memory so they can't be too big. + ### File types -This uses a 32 byte (256 bit key) key derived from the user password. + If you open a remote wrapped by compress, you will see that there are many files with an extension corresponding to + the compression algorithm you chose. These files are standard files that can be opened by various archive programs, + but they have some hidden metadata that allows them to be used by rclone. + While you may download and decompress these files at will, do **not** manually delete or rename files. Files without + correct metadata files will not be recognized by rclone. -Examples + ### File names -1 byte file will encrypt to + The compressed files will be named `*.###########.gz` where `*` is the base file and the `#` part is base64 encoded + size of the uncompressed file. The file names should not be changed by anything other than the rclone compression backend. -- 32 bytes header -- 17 bytes data chunk -49 bytes total + ### Standard options -1 MiB (1048576 bytes) file will encrypt to + Here are the Standard options specific to compress (Compress a remote). -- 32 bytes header -- 16 chunks of 65568 bytes - -1049120 bytes total (a 0.05% overhead). This is the overhead for big -files. - -Name encryption - -File names are encrypted segment by segment - the path is broken up into -/ separated strings and these are encrypted individually. - -File segments are padded using PKCS#7 to a multiple of 16 bytes before -encryption. - -They are then encrypted with EME using AES with 256 bit key. EME -(ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003 -paper "A Parallelizable Enciphering Mode" by Halevi and Rogaway. - -This makes for deterministic encryption which is what we want - the same -filename must encrypt to the same thing otherwise we can't find it on -the cloud storage system. - -This means that - -- filenames with the same name will encrypt the same -- filenames which start the same won't have a common prefix - -This uses a 32 byte key (256 bits) and a 16 byte (128 bits) IV both of -which are derived from the user password. - -After encryption they are written out using a modified version of -standard base32 encoding as described in RFC4648. The standard encoding -is modified in two ways: - -- it becomes lower case (no-one likes upper case filenames!) 
-- we strip the padding character = - -base32 is used rather than the more efficient base64 so rclone can be -used on case insensitive remotes (e.g. Windows, Amazon Drive). - -Key derivation - -Rclone uses scrypt with parameters N=16384, r=8, p=1 with an optional -user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key -material required. If the user doesn't supply a salt then rclone uses an -internal one. - -scrypt makes it impractical to mount a dictionary attack on rclone -encrypted data. For full protection against this you should always use a -salt. - -SEE ALSO - -- rclone cryptdecode - Show forward/reverse mapping of encrypted - filenames - -Compress - -Warning - -This remote is currently experimental. Things may break and data may be -lost. Anything you do with this remote is at your own risk. Please -understand the risks associated with using experimental code and don't -use this remote in critical applications. - -The Compress remote adds compression to another remote. It is best used -with remotes containing many large compressible files. - -Configuration - -To use this remote, all you need to do is specify another remote and a -compression mode to use: - - Current remotes: - - Name Type - ==== ==== - remote_to_press sometype - - e) Edit existing remote - $ rclone config - n) New remote - d) Delete remote - r) Rename remote - c) Copy remote - s) Set configuration password - q) Quit config - e/n/d/r/c/s/q> n - name> compress - ... - 8 / Compress a remote - \ "compress" - ... - Storage> compress - ** See help for compress backend at: https://rclone.org/compress/ ** + #### --compress-remote Remote to compress. - Enter a string value. Press Enter for the default (""). - remote> remote_to_press:subdir + + Properties: + + - Config: remote + - Env Var: RCLONE_COMPRESS_REMOTE + - Type: string + - Required: true + + #### --compress-mode + Compression mode. - Enter a string value. Press Enter for the default ("gzip"). - Choose a number from below, or type in your own value - 1 / Gzip compression balanced for speed and compression strength. - \ "gzip" - compression_mode> gzip - Edit advanced config? (y/n) - y) Yes - n) No (default) - y/n> n - Remote config - -------------------- - [compress] - type = compress - remote = remote_to_press:subdir - compression_mode = gzip - -------------------- - y) Yes this is OK (default) - e) Edit this remote - d) Delete this remote - y/e/d> y -Compression Modes + Properties: -Currently only gzip compression is supported. It provides a decent -balance between speed and size and is well supported by other -applications. Compression strength can further be configured via an -advanced setting where 0 is no compression and 9 is strongest -compression. + - Config: mode + - Env Var: RCLONE_COMPRESS_MODE + - Type: string + - Default: "gzip" + - Examples: + - "gzip" + - Standard gzip compression with fastest parameters. -File types + ### Advanced options -If you open a remote wrapped by compress, you will see that there are -many files with an extension corresponding to the compression algorithm -you chose. These files are standard files that can be opened by various -archive programs, but they have some hidden metadata that allows them to -be used by rclone. While you may download and decompress these files at -will, do not manually delete or rename files. Files without correct -metadata files will not be recognized by rclone. + Here are the Advanced options specific to compress (Compress a remote). 
-File names + #### --compress-level -The compressed files will be named *.###########.gz where * is the base -file and the # part is base64 encoded size of the uncompressed file. The -file names should not be changed by anything other than the rclone -compression backend. + GZIP compression level (-2 to 9). -Standard options + Generally -1 (default, equivalent to 5) is recommended. + Levels 1 to 9 increase compression at the cost of speed. Going past 6 + generally offers very little return. -Here are the Standard options specific to compress (Compress a remote). + Level -2 uses Huffman encoding only. Only use if you know what you + are doing. + Level 0 turns off compression. ---compress-remote + Properties: -Remote to compress. + - Config: level + - Env Var: RCLONE_COMPRESS_LEVEL + - Type: int + - Default: -1 -Properties: + #### --compress-ram-cache-limit -- Config: remote -- Env Var: RCLONE_COMPRESS_REMOTE -- Type: string -- Required: true + Some remotes don't allow the upload of files with unknown size. + In this case the compressed file will need to be cached to determine + it's size. ---compress-mode + Files smaller than this limit will be cached in RAM, files larger than + this limit will be cached on disk. -Compression mode. + Properties: -Properties: + - Config: ram_cache_limit + - Env Var: RCLONE_COMPRESS_RAM_CACHE_LIMIT + - Type: SizeSuffix + - Default: 20Mi -- Config: mode -- Env Var: RCLONE_COMPRESS_MODE -- Type: string -- Default: "gzip" -- Examples: - - "gzip" - - Standard gzip compression with fastest parameters. + ### Metadata -Advanced options + Any metadata supported by the underlying remote is read and written. -Here are the Advanced options specific to compress (Compress a remote). + See the [metadata](https://rclone.org/docs/#metadata) docs for more info. ---compress-level -GZIP compression level (-2 to 9). -Generally -1 (default, equivalent to 5) is recommended. Levels 1 to 9 -increase compression at the cost of speed. Going past 6 generally offers -very little return. + # Combine -Level -2 uses Huffman encoding only. Only use if you know what you are -doing. Level 0 turns off compression. + The `combine` backend joins remotes together into a single directory + tree. -Properties: + For example you might have a remote for images on one provider: -- Config: level -- Env Var: RCLONE_COMPRESS_LEVEL -- Type: int -- Default: -1 +$ rclone tree s3:imagesbucket / ├── image1.jpg └── image2.jpg ---compress-ram-cache-limit -Some remotes don't allow the upload of files with unknown size. In this -case the compressed file will need to be cached to determine it's size. + And a remote for files on another: -Files smaller than this limit will be cached in RAM, files larger than -this limit will be cached on disk. +$ rclone tree drive:important/files / ├── file1.txt └── file2.txt -Properties: -- Config: ram_cache_limit -- Env Var: RCLONE_COMPRESS_RAM_CACHE_LIMIT -- Type: SizeSuffix -- Default: 20Mi + The `combine` backend can join these together into a synthetic + directory structure like this: -Metadata +$ rclone tree combined: / ├── files │ ├── file1.txt │ └── file2.txt └── +images ├── image1.jpg └── image2.jpg -Any metadata supported by the underlying remote is read and written. -See the metadata docs for more info. + You'd do this by specifying an `upstreams` parameter in the config + like this -Combine + upstreams = images=s3:imagesbucket files=drive:important/files -The combine backend joins remotes together into a single directory tree. 
+ During the initial setup with `rclone config` you will specify the + upstreams remotes as a space separated list. The upstream remotes can + either be a local paths or other remotes. -For example you might have a remote for images on one provider: + ## Configuration - $ rclone tree s3:imagesbucket - / - ├── image1.jpg - └── image2.jpg + Here is an example of how to make a combine called `remote` for the + example above. First run: -And a remote for files on another: + rclone config - $ rclone tree drive:important/files - / - ├── file1.txt - └── file2.txt + This will guide you through an interactive setup process: -The combine backend can join these together into a synthetic directory -structure like this: +No remotes found, make a new one? n) New remote s) Set configuration +password q) Quit config n/s/q> n name> remote Option Storage. Type of +storage to configure. Choose a number from below, or type in your own +value. ... XX / Combine several remotes into one  (combine) ... Storage> +combine Option upstreams. Upstreams for combining These should be in the +form dir=remote:path dir2=remote2:path Where before the = is specified +the root directory and after is the remote to put there. Embedded spaces +can be added using quotes "dir=remote:path with space" +"dir2=remote2:path with space" Enter a fs.SpaceSepList value. upstreams> +images=s3:imagesbucket files=drive:important/files -------------------- +[remote] type = combine upstreams = images=s3:imagesbucket +files=drive:important/files -------------------- y) Yes this is OK +(default) e) Edit this remote d) Delete this remote y/e/d> y - $ rclone tree combined: - / - ├── files - │ ├── file1.txt - │ └── file2.txt - └── images - ├── image1.jpg - └── image2.jpg -You'd do this by specifying an upstreams parameter in the config like -this + ### Configuring for Google Drive Shared Drives - upstreams = images=s3:imagesbucket files=drive:important/files + Rclone has a convenience feature for making a combine backend for all + the shared drives you have access to. -During the initial setup with rclone config you will specify the -upstreams remotes as a space separated list. The upstream remotes can -either be a local paths or other remotes. + Assuming your main (non shared drive) Google drive remote is called + `drive:` you would run -Configuration + rclone backend -o config drives drive: -Here is an example of how to make a combine called remote for the -example above. First run: + This would produce something like this: - rclone config + [My Drive] + type = alias + remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=: -This will guide you through an interactive setup process: + [Test Drive] + type = alias + remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=: + + [AllDrives] + type = combine + upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:" + + If you then add that config to your config file (find it with `rclone + config file`) then you can access all the shared drives in one place + with the `AllDrives:` remote. + + See [the Google Drive docs](https://rclone.org/drive/#drives) for full info. + + + ### Standard options + + Here are the Standard options specific to combine (Combine several remotes into one). + + #### --combine-upstreams - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n - name> remote - Option Storage. - Type of storage to configure. - Choose a number from below, or type in your own value. - ... 
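+
+ The upstreams value is a space separated list of `dir=remote:path`
+ pairs, as entered in the transcript above. A minimal, illustrative Go
+ sketch of splitting such a value (it deliberately ignores the quoted
+ form used for embedded spaces, so it is not a full fs.SpaceSepList
+ parser):
+
+     package main
+
+     import (
+         "fmt"
+         "strings"
+     )
+
+     // parseUpstreams splits "dir=remote:path dir2=remote2:path" into
+     // a directory -> remote map.
+     func parseUpstreams(s string) (map[string]string, error) {
+         out := map[string]string{}
+         for _, item := range strings.Fields(s) {
+             dir, remote, ok := strings.Cut(item, "=")
+             if !ok {
+                 return nil, fmt.Errorf("%q is not of the form dir=remote:path", item)
+             }
+             out[dir] = remote
+         }
+         return out, nil
+     }
+
+     func main() {
+         m, err := parseUpstreams("images=s3:imagesbucket files=drive:important/files")
+         if err != nil {
+             panic(err)
+         }
+         fmt.Println(m) // map[files:drive:important/files images:s3:imagesbucket]
+     }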
- XX / Combine several remotes into one - \ (combine) - ... - Storage> combine - Option upstreams. Upstreams for combining + These should be in the form + dir=remote:path dir2=remote2:path + Where before the = is specified the root directory and after is the remote to put there. + Embedded spaces can be added using quotes + "dir=remote:path with space" "dir2=remote2:path with space" - Enter a fs.SpaceSepList value. - upstreams> images=s3:imagesbucket files=drive:important/files - -------------------- - [remote] - type = combine - upstreams = images=s3:imagesbucket files=drive:important/files - -------------------- - y) Yes this is OK (default) - e) Edit this remote - d) Delete this remote - y/e/d> y -Configuring for Google Drive Shared Drives -Rclone has a convenience feature for making a combine backend for all -the shared drives you have access to. -Assuming your main (non shared drive) Google drive remote is called -drive: you would run + Properties: - rclone backend -o config drives drive: + - Config: upstreams + - Env Var: RCLONE_COMBINE_UPSTREAMS + - Type: SpaceSepList + - Default: -This would produce something like this: + ### Metadata - [My Drive] - type = alias - remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=: + Any metadata supported by the underlying remote is read and written. - [Test Drive] - type = alias - remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=: + See the [metadata](https://rclone.org/docs/#metadata) docs for more info. - [AllDrives] - type = combine - upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:" -If you then add that config to your config file (find it with -rclone config file) then you can access all the shared drives in one -place with the AllDrives: remote. -See the Google Drive docs for full info. + # Dropbox -Standard options + Paths are specified as `remote:path` -Here are the Standard options specific to combine (Combine several -remotes into one). + Dropbox paths may be as deep as required, e.g. + `remote:directory/subdirectory`. ---combine-upstreams + ## Configuration -Upstreams for combining + The initial setup for dropbox involves getting a token from Dropbox + which you need to do in your browser. `rclone config` walks you + through it. -These should be in the form + Here is an example of how to make a remote called `remote`. First run: - dir=remote:path dir2=remote2:path + rclone config -Where before the = is specified the root directory and after is the -remote to put there. + This will guide you through an interactive setup process: -Embedded spaces can be added using quotes - - "dir=remote:path with space" "dir2=remote2:path with space" - -Properties: - -- Config: upstreams -- Env Var: RCLONE_COMBINE_UPSTREAMS -- Type: SpaceSepList -- Default: - -Metadata - -Any metadata supported by the underlying remote is read and written. - -See the metadata docs for more info. - -Dropbox - -Paths are specified as remote:path - -Dropbox paths may be as deep as required, e.g. -remote:directory/subdirectory. - -Configuration - -The initial setup for dropbox involves getting a token from Dropbox -which you need to do in your browser. rclone config walks you through -it. - -Here is an example of how to make a remote called remote. First run: - - rclone config - -This will guide you through an interactive setup process: - - n) New remote - d) Delete remote - q) Quit config - e/n/d/q> n - name> remote - Type of storage to configure. 
- Choose a number from below, or type in your own value - [snip] - XX / Dropbox - \ "dropbox" - [snip] - Storage> dropbox - Dropbox App Key - leave blank normally. - app_key> - Dropbox App Secret - leave blank normally. - app_secret> - Remote config - Please visit: +n) New remote +o) Delete remote +p) Quit config e/n/d/q> n name> remote Type of storage to configure. + Choose a number from below, or type in your own value [snip] XX / + Dropbox  "dropbox" [snip] Storage> dropbox Dropbox App Key - leave + blank normally. app_key> Dropbox App Secret - leave blank normally. + app_secret> Remote config Please visit: https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code Enter the code: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX + -------------------- [remote] app_key = app_secret = token = + XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX -------------------- - [remote] - app_key = - app_secret = - token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y +q) Yes this is OK +r) Edit this remote +s) Delete this remote y/e/d> y -See the remote setup docs for how to set it up on a machine with no -Internet browser available. -Note that rclone runs a webserver on your local machine to collect the -token as returned from Dropbox. This only runs from the moment it opens -your browser to the moment you get back the verification code. This is -on http://127.0.0.1:53682/ and it may require you to unblock it -temporarily if you are running a host firewall, or use manual mode. + See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a + machine with no Internet browser available. -You can then use it like this, + Note that rclone runs a webserver on your local machine to collect the + token as returned from Dropbox. This only + runs from the moment it opens your browser to the moment you get back + the verification code. This is on `http://127.0.0.1:53682/` and it + may require you to unblock it temporarily if you are running a host + firewall, or use manual mode. -List directories in top level of your dropbox + You can then use it like this, - rclone lsd remote: + List directories in top level of your dropbox -List all the files in your dropbox + rclone lsd remote: - rclone ls remote: + List all the files in your dropbox -To copy a local directory to a dropbox directory called backup + rclone ls remote: - rclone copy /home/source remote:backup + To copy a local directory to a dropbox directory called backup -Dropbox for business + rclone copy /home/source remote:backup -Rclone supports Dropbox for business and Team Folders. + ### Dropbox for business -When using Dropbox for business remote: and remote:path/to/file will -refer to your personal folder. + Rclone supports Dropbox for business and Team Folders. -If you wish to see Team Folders you must use a leading / in the path, so -rclone lsd remote:/ will refer to the root and show you all Team Folders -and your User Folder. + When using Dropbox for business `remote:` and `remote:path/to/file` + will refer to your personal folder. -You can then use team folders like this remote:/TeamFolder and -remote:/TeamFolder/path/to/file. + If you wish to see Team Folders you must use a leading `/` in the + path, so `rclone lsd remote:/` will refer to the root and show you all + Team Folders and your User Folder. 
-A leading / for a Dropbox personal account will do nothing, but it will -take an extra HTTP transaction so it should be avoided. + You can then use team folders like this `remote:/TeamFolder` and + `remote:/TeamFolder/path/to/file`. -Modified time and Hashes + A leading `/` for a Dropbox personal account will do nothing, but it + will take an extra HTTP transaction so it should be avoided. -Dropbox supports modified times, but the only way to set a modification -time is to re-upload the file. + ### Modified time and Hashes -This means that if you uploaded your data with an older version of -rclone which didn't support the v2 API and modified times, rclone will -decide to upload all your old data to fix the modification times. If you -don't want this to happen use --size-only or --checksum flag to stop it. + Dropbox supports modified times, but the only way to set a + modification time is to re-upload the file. -Dropbox supports its own hash type which is checked for all transfers. + This means that if you uploaded your data with an older version of + rclone which didn't support the v2 API and modified times, rclone will + decide to upload all your old data to fix the modification times. If + you don't want this to happen use `--size-only` or `--checksum` flag + to stop it. -Restricted filename characters + Dropbox supports [its own hash + type](https://www.dropbox.com/developers/reference/content-hash) which + is checked for all transfers. - Character Value Replacement - ----------- ------- ------------- - NUL 0x00 ␀ - / 0x2F / - DEL 0x7F ␡ - \ 0x5C \ + ### Restricted filename characters -File names can also not end with the following characters. These only -get replaced if they are the last character in the name: + | Character | Value | Replacement | + | --------- |:-----:|:-----------:| + | NUL | 0x00 | ␀ | + | / | 0x2F | / | + | DEL | 0x7F | ␡ | + | \ | 0x5C | \ | - Character Value Replacement - ----------- ------- ------------- - SP 0x20 ␠ + File names can also not end with the following characters. + These only get replaced if they are the last character in the name: -Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON -strings. + | Character | Value | Replacement | + | --------- |:-----:|:-----------:| + | SP | 0x20 | ␠ | -Batch mode uploads + Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), + as they can't be used in JSON strings. -Using batch mode uploads is very important for performance when using -the Dropbox API. See the dropbox performance guide for more info. + ### Batch mode uploads {#batch-mode} -There are 3 modes rclone can use for uploads. + Using batch mode uploads is very important for performance when using + the Dropbox API. See [the dropbox performance guide](https://developers.dropbox.com/dbx-performance-guide) + for more info. ---dropbox-batch-mode off + There are 3 modes rclone can use for uploads. -In this mode rclone will not use upload batching. This was the default -before rclone v1.55. It has the disadvantage that it is very likely to -encounter too_many_requests errors like this + #### --dropbox-batch-mode off - NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 15 seconds. + In this mode rclone will not use upload batching. This was the default + before rclone v1.55. 
It has the disadvantage that it is very likely to + encounter `too_many_requests` errors like this -When rclone receives these it has to wait for 15s or sometimes 300s -before continuing which really slows down transfers. + NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 15 seconds. -This will happen especially if --transfers is large, so this mode isn't -recommended except for compatibility or investigating problems. + When rclone receives these it has to wait for 15s or sometimes 300s + before continuing which really slows down transfers. ---dropbox-batch-mode sync + This will happen especially if `--transfers` is large, so this mode + isn't recommended except for compatibility or investigating problems. -In this mode rclone will batch up uploads to the size specified by ---dropbox-batch-size and commit them together. + #### --dropbox-batch-mode sync -Using this mode means you can use a much higher --transfers parameter -(32 or 64 works fine) without receiving too_many_requests errors. + In this mode rclone will batch up uploads to the size specified by + `--dropbox-batch-size` and commit them together. -This mode ensures full data integrity. + Using this mode means you can use a much higher `--transfers` + parameter (32 or 64 works fine) without receiving `too_many_requests` + errors. -Note that there may be a pause when quitting rclone while rclone -finishes up the last batch using this mode. + This mode ensures full data integrity. ---dropbox-batch-mode async + Note that there may be a pause when quitting rclone while rclone + finishes up the last batch using this mode. -In this mode rclone will batch up uploads to the size specified by ---dropbox-batch-size and commit them together. + #### --dropbox-batch-mode async -However it will not wait for the status of the batch to be returned to -the caller. This means rclone can use a much bigger batch size (much -bigger than --transfers), at the cost of not being able to check the -status of the upload. + In this mode rclone will batch up uploads to the size specified by + `--dropbox-batch-size` and commit them together. -This provides the maximum possible upload speed especially with lots of -small files, however rclone can't check the file got uploaded properly -using this mode. + However it will not wait for the status of the batch to be returned to + the caller. This means rclone can use a much bigger batch size (much + bigger than `--transfers`), at the cost of not being able to check the + status of the upload. -If you are using this mode then using "rclone check" after the transfer -completes is recommended. Or you could do an initial transfer with ---dropbox-batch-mode async then do a final transfer with ---dropbox-batch-mode sync (the default). + This provides the maximum possible upload speed especially with lots + of small files, however rclone can't check the file got uploaded + properly using this mode. -Note that there may be a pause when quitting rclone while rclone -finishes up the last batch using this mode. + If you are using this mode then using "rclone check" after the + transfer completes is recommended. Or you could do an initial transfer + with `--dropbox-batch-mode async` then do a final transfer with + `--dropbox-batch-mode sync` (the default). -Standard options + Note that there may be a pause when quitting rclone while rclone + finishes up the last batch using this mode. -Here are the Standard options specific to dropbox (Dropbox). ---dropbox-client-id -OAuth Client Id. 
+ ### Standard options -Leave blank normally. + Here are the Standard options specific to dropbox (Dropbox). -Properties: + #### --dropbox-client-id -- Config: client_id -- Env Var: RCLONE_DROPBOX_CLIENT_ID -- Type: string -- Required: false + OAuth Client Id. ---dropbox-client-secret - -OAuth Client Secret. - -Leave blank normally. - -Properties: - -- Config: client_secret -- Env Var: RCLONE_DROPBOX_CLIENT_SECRET -- Type: string -- Required: false - -Advanced options - -Here are the Advanced options specific to dropbox (Dropbox). - ---dropbox-token - -OAuth Access Token as a JSON blob. - -Properties: - -- Config: token -- Env Var: RCLONE_DROPBOX_TOKEN -- Type: string -- Required: false - ---dropbox-auth-url - -Auth server URL. - -Leave blank to use the provider defaults. - -Properties: - -- Config: auth_url -- Env Var: RCLONE_DROPBOX_AUTH_URL -- Type: string -- Required: false - ---dropbox-token-url - -Token server url. - -Leave blank to use the provider defaults. - -Properties: - -- Config: token_url -- Env Var: RCLONE_DROPBOX_TOKEN_URL -- Type: string -- Required: false - ---dropbox-chunk-size - -Upload chunk size (< 150Mi). - -Any files larger than this will be uploaded in chunks of this size. - -Note that chunks are buffered in memory (one at a time) so rclone can -deal with retries. Setting this larger will increase the speed slightly -(at most 10% for 128 MiB in tests) at the cost of using more memory. It -can be set smaller if you are tight on memory. - -Properties: - -- Config: chunk_size -- Env Var: RCLONE_DROPBOX_CHUNK_SIZE -- Type: SizeSuffix -- Default: 48Mi - ---dropbox-impersonate - -Impersonate this user when using a business account. - -Note that if you want to use impersonate, you should make sure this flag -is set when running "rclone config" as this will cause rclone to request -the "members.read" scope which it won't normally. This is needed to -lookup a members email address into the internal ID that dropbox uses in -the API. - -Using the "members.read" scope will require a Dropbox Team Admin to -approve during the OAuth flow. - -You will have to use your own App (setting your own client_id and -client_secret) to use this option as currently rclone's default set of -permissions doesn't include "members.read". This can be added once v1.55 -or later is in use everywhere. - -Properties: - -- Config: impersonate -- Env Var: RCLONE_DROPBOX_IMPERSONATE -- Type: string -- Required: false - ---dropbox-shared-files - -Instructs rclone to work on individual shared files. - -In this mode rclone's features are extremely limited - only list (ls, -lsl, etc.) operations and read operations (e.g. downloading) are -supported in this mode. All other operations will be disabled. - -Properties: - -- Config: shared_files -- Env Var: RCLONE_DROPBOX_SHARED_FILES -- Type: bool -- Default: false - ---dropbox-shared-folders - -Instructs rclone to work on shared folders. - -When this flag is used with no path only the List operation is supported -and all available shared folders will be listed. If you specify a path -the first part will be interpreted as the name of shared folder. Rclone -will then try to mount this shared to the root namespace. On success -shared folder rclone proceeds normally. The shared folder is now pretty -much a normal folder and all normal operations are supported. - -Note that we don't unmount the shared folder afterwards so the ---dropbox-shared-folders can be omitted after the first use of a -particular shared folder. 
- -Properties: - -- Config: shared_folders -- Env Var: RCLONE_DROPBOX_SHARED_FOLDERS -- Type: bool -- Default: false - ---dropbox-batch-mode - -Upload file batching sync|async|off. - -This sets the batch mode used by rclone. - -For full info see the main docs - -This has 3 possible values - -- off - no batching -- sync - batch uploads and check completion (default) -- async - batch upload and don't check completion - -Rclone will close any outstanding batches when it exits which may make a -delay on quit. - -Properties: - -- Config: batch_mode -- Env Var: RCLONE_DROPBOX_BATCH_MODE -- Type: string -- Default: "sync" - ---dropbox-batch-size - -Max number of files in upload batch. - -This sets the batch size of files to upload. It has to be less than -1000. - -By default this is 0 which means rclone which calculate the batch size -depending on the setting of batch_mode. - -- batch_mode: async - default batch_size is 100 -- batch_mode: sync - default batch_size is the same as --transfers -- batch_mode: off - not in use - -Rclone will close any outstanding batches when it exits which may make a -delay on quit. - -Setting this is a great idea if you are uploading lots of small files as -it will make them a lot quicker. You can use --transfers 32 to maximise -throughput. - -Properties: - -- Config: batch_size -- Env Var: RCLONE_DROPBOX_BATCH_SIZE -- Type: int -- Default: 0 - ---dropbox-batch-timeout - -Max time to allow an idle upload batch before uploading. - -If an upload batch is idle for more than this long then it will be -uploaded. - -The default for this is 0 which means rclone will choose a sensible -default based on the batch_mode in use. - -- batch_mode: async - default batch_timeout is 10s -- batch_mode: sync - default batch_timeout is 500ms -- batch_mode: off - not in use - -Properties: - -- Config: batch_timeout -- Env Var: RCLONE_DROPBOX_BATCH_TIMEOUT -- Type: Duration -- Default: 0s - ---dropbox-batch-commit-timeout - -Max time to wait for a batch to finish committing - -Properties: - -- Config: batch_commit_timeout -- Env Var: RCLONE_DROPBOX_BATCH_COMMIT_TIMEOUT -- Type: Duration -- Default: 10m0s - ---dropbox-pacer-min-sleep - -Minimum time to sleep between API calls. - -Properties: - -- Config: pacer_min_sleep -- Env Var: RCLONE_DROPBOX_PACER_MIN_SLEEP -- Type: Duration -- Default: 10ms - ---dropbox-encoding - -The encoding for the backend. - -See the encoding section in the overview for more info. - -Properties: - -- Config: encoding -- Env Var: RCLONE_DROPBOX_ENCODING -- Type: MultiEncoder -- Default: Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot - -Limitations - -Note that Dropbox is case insensitive so you can't have a file called -"Hello.doc" and one called "hello.doc". - -There are some file names such as thumbs.db which Dropbox can't store. -There is a full list of them in the "Ignored Files" section of this -document. Rclone will issue an error message -File name disallowed - not uploading if it attempts to upload one of -those file names, but the sync won't fail. - -Some errors may occur if you try to sync copyright-protected files -because Dropbox has its own copyright detector that prevents this sort -of file being downloaded. This will return the error -ERROR : /path/to/your/file: Failed to copy: failed to open source object: path/restricted_content/. - -If you have more than 10,000 files in a directory then -rclone purge dropbox:dir will return the error -Failed to purge: There are too many files involved in this operation. 
As -a work-around do an rclone delete dropbox:dir followed by an -rclone rmdir dropbox:dir. - -When using rclone link you'll need to set --expire if using a -non-personal account otherwise the visibility may not be correct. (Note -that --expire isn't supported on personal accounts). See the forum -discussion and the dropbox SDK issue. - -Get your own Dropbox App ID - -When you use rclone with Dropbox in its default configuration you are -using rclone's App ID. This is shared between all the rclone users. - -Here is how to create your own Dropbox App ID for rclone: - -1. Log into the Dropbox App console with your Dropbox Account (It need - not to be the same account as the Dropbox you want to access) - -2. Choose an API => Usually this should be Dropbox API - -3. Choose the type of access you want to use => Full Dropbox or - App Folder - -4. Name your App. The app name is global, so you can't use rclone for - example - -5. Click the button Create App - -6. Switch to the Permissions tab. Enable at least the following - permissions: account_info.read, files.metadata.write, - files.content.write, files.content.read, sharing.write. The - files.metadata.read and sharing.read checkboxes will be marked too. - Click Submit - -7. Switch to the Settings tab. Fill OAuth2 - Redirect URIs as - http://localhost:53682/ - -8. Find the App key and App secret values on the Settings tab. Use - these values in rclone config to add a new remote or edit an - existing remote. The App key setting corresponds to client_id in - rclone config, the App secret corresponds to client_secret - -Enterprise File Fabric - -This backend supports Storage Made Easy's Enterprise File Fabric™ which -provides a software solution to integrate and unify File and Object -Storage accessible through a global file system. - -Configuration - -The initial setup for the Enterprise File Fabric backend involves -getting a token from the Enterprise File Fabric which you need to do in -your browser. rclone config walks you through it. - -Here is an example of how to make a remote called remote. First run: - - rclone config - -This will guide you through an interactive setup process: - - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n - name> remote - Type of storage to configure. - Enter a string value. Press Enter for the default (""). - Choose a number from below, or type in your own value - [snip] - XX / Enterprise File Fabric - \ "filefabric" - [snip] - Storage> filefabric - ** See help for filefabric backend at: https://rclone.org/filefabric/ ** - - URL of the Enterprise File Fabric to connect to - Enter a string value. Press Enter for the default (""). - Choose a number from below, or type in your own value - 1 / Storage Made Easy US - \ "https://storagemadeeasy.com" - 2 / Storage Made Easy EU - \ "https://eu.storagemadeeasy.com" - 3 / Connect to your Enterprise File Fabric - \ "https://yourfabric.smestorage.com" - url> https://yourfabric.smestorage.com/ - ID of the root folder Leave blank normally. - Fill in to make rclone start with directory of a given ID. + Properties: - Enter a string value. Press Enter for the default (""). - root_folder_id> - Permanent Authentication Token + - Config: client_id + - Env Var: RCLONE_DROPBOX_CLIENT_ID + - Type: string + - Required: false - A Permanent Authentication Token can be created in the Enterprise File - Fabric, on the users Dashboard under Security, there is an entry - you'll see called "My Authentication Tokens". 
Click the Manage button - to create one. + #### --dropbox-client-secret - These tokens are normally valid for several years. + OAuth Client Secret. - For more info see: https://docs.storagemadeeasy.com/organisationcloud/api-tokens + Leave blank normally. - Enter a string value. Press Enter for the default (""). - permanent_token> xxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxx - Edit advanced config? (y/n) - y) Yes - n) No (default) - y/n> n - Remote config - -------------------- - [remote] - type = filefabric - url = https://yourfabric.smestorage.com/ - permanent_token = xxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxx - -------------------- - y) Yes this is OK (default) - e) Edit this remote - d) Delete this remote - y/e/d> y + Properties: -Once configured you can then use rclone like this, + - Config: client_secret + - Env Var: RCLONE_DROPBOX_CLIENT_SECRET + - Type: string + - Required: false -List directories in top level of your Enterprise File Fabric + ### Advanced options - rclone lsd remote: + Here are the Advanced options specific to dropbox (Dropbox). -List all the files in your Enterprise File Fabric + #### --dropbox-token - rclone ls remote: + OAuth Access Token as a JSON blob. -To copy a local directory to an Enterprise File Fabric directory called -backup + Properties: - rclone copy /home/source remote:backup + - Config: token + - Env Var: RCLONE_DROPBOX_TOKEN + - Type: string + - Required: false -Modified time and hashes + #### --dropbox-auth-url -The Enterprise File Fabric allows modification times to be set on files -accurate to 1 second. These will be used to detect whether objects need -syncing or not. + Auth server URL. -The Enterprise File Fabric does not support any data hashes at this -time. + Leave blank to use the provider defaults. -Restricted filename characters + Properties: -The default restricted characters set will be replaced. + - Config: auth_url + - Env Var: RCLONE_DROPBOX_AUTH_URL + - Type: string + - Required: false -Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON -strings. + #### --dropbox-token-url -Empty files + Token server url. -Empty files aren't supported by the Enterprise File Fabric. Rclone will -therefore upload an empty file as a single space with a mime type of -application/vnd.rclone.empty.file and files with that mime type are -treated as empty. + Leave blank to use the provider defaults. -Root folder ID + Properties: -You can set the root_folder_id for rclone. This is the directory -(identified by its Folder ID) that rclone considers to be the root of -your Enterprise File Fabric. + - Config: token_url + - Env Var: RCLONE_DROPBOX_TOKEN_URL + - Type: string + - Required: false -Normally you will leave this blank and rclone will determine the correct -root to use itself. + #### --dropbox-chunk-size -However you can set this to restrict rclone to a specific folder -hierarchy. + Upload chunk size (< 150Mi). -In order to do this you will have to find the Folder ID of the directory -you wish rclone to display. These aren't displayed in the web interface, -but you can use rclone lsf to find them, for example + Any files larger than this will be uploaded in chunks of this size. - $ rclone lsf --dirs-only -Fip --csv filefabric: - 120673758,Burnt PDFs/ - 120673759,My Quick Uploads/ - 120673755,My Syncs/ - 120673756,My backups/ - 120673757,My contacts/ - 120673761,S3 Storage/ + Note that chunks are buffered in memory (one at a time) so rclone can + deal with retries. 
Setting this larger will increase the speed
+ slightly (at most 10% for 128 MiB in tests) at the cost of using more
+ memory. It can be set smaller if you are tight on memory.

+ Properties:

+ - Config: chunk_size
+ - Env Var: RCLONE_DROPBOX_CHUNK_SIZE
+ - Type: SizeSuffix
+ - Default: 48Mi

+ #### --dropbox-impersonate

+ Impersonate this user when using a business account.

+ Note that if you want to use impersonate, you should make sure this
+ flag is set when running "rclone config" as this will cause rclone to
+ request the "members.read" scope which it won't normally. This is
+ needed to look up a member's email address into the internal ID that
+ dropbox uses in the API.

+ Using the "members.read" scope will require a Dropbox Team Admin
+ to approve during the OAuth flow.

+ You will have to use your own App (setting your own client_id and
+ client_secret) to use this option as currently rclone's default set of
+ permissions doesn't include "members.read". This can be added once
+ v1.55 or later is in use everywhere.


+ Properties:

+ - Config: impersonate
+ - Env Var: RCLONE_DROPBOX_IMPERSONATE
+ - Type: string
+ - Required: false
+
+ #### --dropbox-shared-files
+
+ Instructs rclone to work on individual shared files.
+
+ In this mode rclone's features are extremely limited - only list (ls, lsl, etc.)
+ operations and read operations (e.g. downloading) are supported in this mode.
+ All other operations will be disabled.
+
+ Properties:
+
+ - Config: shared_files
+ - Env Var: RCLONE_DROPBOX_SHARED_FILES
+ - Type: bool
+ - Default: false
+
+ #### --dropbox-shared-folders
+
+ Instructs rclone to work on shared folders.
+
+ When this flag is used with no path only the List operation is supported and
+ all available shared folders will be listed. If you specify a path the first part
+ will be interpreted as the name of the shared folder. Rclone will then try to mount
+ this shared folder to the root namespace. On success rclone proceeds normally.
+ The shared folder is now pretty much a normal folder and all normal operations
+ are supported.
+
+ Note that we don't unmount the shared folder afterwards so the
+ --dropbox-shared-folders can be omitted after the first use of a particular
+ shared folder.
+
+ Properties:
+
+ - Config: shared_folders
+ - Env Var: RCLONE_DROPBOX_SHARED_FOLDERS
+ - Type: bool
+ - Default: false
+
+ #### --dropbox-batch-mode
+
+ Upload file batching sync|async|off.
+
+ This sets the batch mode used by rclone.
+
+ For full info see [the main docs](https://rclone.org/dropbox/#batch-mode)
+
+ This has 3 possible values
+
+ - off - no batching
+ - sync - batch uploads and check completion (default)
+ - async - batch upload and don't check completion
+
+ Rclone will close any outstanding batches when it exits which may make
+ a delay on quit.
+
+
+ Properties:
+
+ - Config: batch_mode
+ - Env Var: RCLONE_DROPBOX_BATCH_MODE
+ - Type: string
+ - Default: "sync"
+
+ #### --dropbox-batch-size
+
+ Max number of files in upload batch.
+
+ This sets the batch size of files to upload. It has to be less than 1000.
+
+ By default this is 0 which means rclone will calculate the batch size
+ depending on the setting of batch_mode.
+
+ - batch_mode: async - default batch_size is 100
+ - batch_mode: sync - default batch_size is the same as --transfers
+ - batch_mode: off - not in use
+
+ Rclone will close any outstanding batches when it exits which may make
+ a delay on quit.
+
+ Setting this is a great idea if you are uploading lots of small files
+ as it will make them a lot quicker. You can use --transfers 32 to
+ maximise throughput.
+
+
+ Properties:
+
+ - Config: batch_size
+ - Env Var: RCLONE_DROPBOX_BATCH_SIZE
+ - Type: int
+ - Default: 0
+
+ #### --dropbox-batch-timeout
+
+ Max time to allow an idle upload batch before uploading.
+
+ If an upload batch is idle for more than this long then it will be
+ uploaded.
+
+ The default for this is 0 which means rclone will choose a sensible
+ default based on the batch_mode in use.
+
+ - batch_mode: async - default batch_timeout is 10s
+ - batch_mode: sync - default batch_timeout is 500ms
+ - batch_mode: off - not in use
+
+
+ Properties:
+
+ - Config: batch_timeout
+ - Env Var: RCLONE_DROPBOX_BATCH_TIMEOUT
+ - Type: Duration
+ - Default: 0s
+
+ #### --dropbox-batch-commit-timeout
+
+ Max time to wait for a batch to finish committing.
+
+ Properties:
+
+ - Config: batch_commit_timeout
+ - Env Var: RCLONE_DROPBOX_BATCH_COMMIT_TIMEOUT
+ - Type: Duration
+ - Default: 10m0s
+
+ #### --dropbox-pacer-min-sleep
+
+ Minimum time to sleep between API calls.
+
+ Properties:
+
+ - Config: pacer_min_sleep
+ - Env Var: RCLONE_DROPBOX_PACER_MIN_SLEEP
+ - Type: Duration
+ - Default: 10ms
+
+ #### --dropbox-encoding
+
+ The encoding for the backend.
+
+ See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+ Properties:
+
+ - Config: encoding
+ - Env Var: RCLONE_DROPBOX_ENCODING
+ - Type: MultiEncoder
+ - Default: Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot
+
+
+
+ ## Limitations
+
+ Note that Dropbox is case insensitive so you can't have a file called
+ "Hello.doc" and one called "hello.doc".
+
+ There are some file names such as `thumbs.db` which Dropbox can't
+ store. There is a full list of them in the ["Ignored Files" section
+ of this document](https://www.dropbox.com/en/help/145). Rclone will
+ issue an error message `File name disallowed - not uploading` if it
+ attempts to upload one of those file names, but the sync won't fail.
+
+ Some errors may occur if you try to sync copyright-protected files
+ because Dropbox has its own [copyright detector](https://techcrunch.com/2014/03/30/how-dropbox-knows-when-youre-sharing-copyrighted-stuff-without-actually-looking-at-your-stuff/) that
+ prevents this sort of file being downloaded. This will return the error `ERROR :
+ /path/to/your/file: Failed to copy: failed to open source object:
+ path/restricted_content/.`
+
+ If you have more than 10,000 files in a directory then `rclone purge
+ dropbox:dir` will return the error `Failed to purge: There are too
+ many files involved in this operation`. As a work-around do an
+ `rclone delete dropbox:dir` followed by an `rclone rmdir dropbox:dir`. 
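+
+ As a sketch of that work-around, assuming the directory in question
+ is `dropbox:dir`:
+
+     rclone delete dropbox:dir
+     rclone rmdir dropbox:dir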
+
+ When using `rclone link` you'll need to set `--expire` if using a
+ non-personal account otherwise the visibility may not be correct.
+ (Note that `--expire` isn't supported on personal accounts). See the
+ [forum discussion](https://forum.rclone.org/t/rclone-link-dropbox-permissions/23211) and the
+ [dropbox SDK issue](https://github.com/dropbox/dropbox-sdk-go-unofficial/issues/75).
+
+ ## Get your own Dropbox App ID
+
+ When you use rclone with Dropbox in its default configuration you are using rclone's App ID. This is shared between all the rclone users.
+
+ Here is how to create your own Dropbox App ID for rclone:
+
+ 1. Log into the [Dropbox App console](https://www.dropbox.com/developers/apps/create) with your Dropbox Account (It need not
+ be the same account as the Dropbox you want to access)
+
+ 2. Choose an API => Usually this should be `Dropbox API`
+
+ 3. Choose the type of access you want to use => `Full Dropbox` or `App Folder`. If you want to use Team Folders, `Full Dropbox` is required ([see here](https://www.dropboxforum.com/t5/Dropbox-API-Support-Feedback/How-to-create-team-folder-inside-my-app-s-folder/m-p/601005/highlight/true#M27911)).
+
+ 4. Name your App. The app name is global, so you can't use `rclone` for example
+
+ 5. Click the button `Create App`
+
+ 6. Switch to the `Permissions` tab. Enable at least the following permissions: `account_info.read`, `files.metadata.write`, `files.content.write`, `files.content.read`, `sharing.write`. The `files.metadata.read` and `sharing.read` checkboxes will be marked too. Click `Submit`
+
+ 7. Switch to the `Settings` tab. Fill `OAuth2 - Redirect URIs` as `http://localhost:53682/` and click on `Add`
+
+ 8. Find the `App key` and `App secret` values on the `Settings` tab. Use these values in rclone config to add a new remote or edit an existing remote. The `App key` setting corresponds to `client_id` in rclone config, the `App secret` corresponds to `client_secret`
+
+ # Enterprise File Fabric
+
+ This backend supports [Storage Made Easy's Enterprise File
+ Fabric™](https://storagemadeeasy.com/about/) which provides a software
+ solution to integrate and unify File and Object Storage accessible
+ through a global file system.
+
+ ## Configuration
+
+ The initial setup for the Enterprise File Fabric backend involves
+ getting a token from the Enterprise File Fabric which you need to
+ do in your browser. `rclone config` walks you through it.
+
+ Here is an example of how to make a remote called `remote`. First run:
+
+ rclone config
+
+ This will guide you through an interactive setup process:
+
+    No remotes found, make a new one?
+    n) New remote
+    s) Set configuration password
+    q) Quit config
+    n/s/q> n
+    name> remote
+    Type of storage to configure.
+    Enter a string value. Press Enter for the default ("").
+    Choose a number from below, or type in your own value
+    [snip]
+    XX / Enterprise File Fabric
+       \ "filefabric"
+    [snip]
+    Storage> filefabric
+    ** See help for filefabric backend at: https://rclone.org/filefabric/ **
+
+    URL of the Enterprise File Fabric to connect to
+    Enter a string value. Press Enter for the default ("").
+    Choose a number from below, or type in your own value
+     1 / Storage Made Easy US
+       \ "https://storagemadeeasy.com"
+     2 / Storage Made Easy EU
+       \ "https://eu.storagemadeeasy.com"
+     3 / Connect to your Enterprise File Fabric
+       \ "https://yourfabric.smestorage.com"
+    url> https://yourfabric.smestorage.com/
+    ID of the root folder
+    Leave blank normally.
+
+    Fill in to make rclone start with directory of a given ID.

-Properties:
-
-- Config: root_folder_id
-- Env Var: RCLONE_FILEFABRIC_ROOT_FOLDER_ID
-- Type: string
-- Required: false
-
---filefabric-permanent-token
-
-Permanent Authentication Token.
+    Enter a string value. Press Enter for the default ("").
+    root_folder_id>
+    Permanent Authentication Token

A Permanent Authentication Token can be created in the Enterprise File
Fabric, on the users Dashboard under Security, there is an entry you'll
@@ -27443,11901 +29620,11995 @@
These tokens are normally valid for several years.

For more info see: https://docs.storagemadeeasy.com/organisationcloud/api-tokens

-Properties:
+    Enter a string value. Press Enter for the default ("").
+    permanent_token> xxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxx
+    Edit advanced config? (y/n)
+    y) Yes
+    n) No (default)
+    y/n> n
+    Remote config
+    --------------------
+    [remote]
+    type = filefabric
+    url = https://yourfabric.smestorage.com/
+    permanent_token = xxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxx
+    --------------------
+    y) Yes this is OK (default)
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y

-- Config: permanent_token
-- Env Var: RCLONE_FILEFABRIC_PERMANENT_TOKEN
-- Type: string
-- Required: false

-Advanced options
+ Once configured you can then use `rclone` like this,

-Here are the Advanced options specific to filefabric (Enterprise File
-Fabric).
+ List directories in top level of your Enterprise File Fabric

---filefabric-token
+ rclone lsd remote:

-Session Token.
+ List all the files in your Enterprise File Fabric

-This is a session token which rclone caches in the config file. It is
-usually valid for 1 hour.
+ rclone ls remote:

-Don't set this value - rclone will set it automatically.
+ To copy a local directory to an Enterprise File Fabric directory called backup

-Properties:
+ rclone copy /home/source remote:backup

-- Config: token
-- Env Var: RCLONE_FILEFABRIC_TOKEN
-- Type: string
-- Required: false
+ ### Modified time and hashes

---filefabric-token-expiry
+ The Enterprise File Fabric allows modification times to be set on
+ files accurate to 1 second. These will be used to detect whether
+ objects need syncing or not.

-Token expiry time.
+ The Enterprise File Fabric does not support any data hashes at this time.

-Don't set this value - rclone will set it automatically.
+ ### Restricted filename characters

-Properties:
+ The [default restricted characters set](https://rclone.org/overview/#restricted-characters)
+ will be replaced.

-- Config: token_expiry
-- Env Var: RCLONE_FILEFABRIC_TOKEN_EXPIRY
-- Type: string
-- Required: false
+ Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+ as they can't be used in JSON strings.

---filefabric-version
+ ### Empty files

-Version read from the file fabric.
+ Empty files aren't supported by the Enterprise File Fabric. Rclone will therefore
+ upload an empty file as a single space with a mime type of
+ `application/vnd.rclone.empty.file` and files with that mime type are
+ treated as empty.

-Don't set this value - rclone will set it automatically.
+ ### Root folder ID

-Properties:
+ You can set the `root_folder_id` for rclone. This is the directory
+ (identified by its `Folder ID`) that rclone considers to be the root
+ of your Enterprise File Fabric.

-- Config: version
-- Env Var: RCLONE_FILEFABRIC_VERSION
-- Type: string
-- Required: false
+ Normally you will leave this blank and rclone will determine the
+ correct root to use itself.

---filefabric-encoding
+ However you can set this to restrict rclone to a specific folder
+ hierarchy.

-The encoding for the backend. 
+ In order to do this you will have to find the `Folder ID` of the + directory you wish rclone to display. These aren't displayed in the + web interface, but you can use `rclone lsf` to find them, for example -See the encoding section in the overview for more info. +$ rclone lsf --dirs-only -Fip --csv filefabric: 120673758,Burnt PDFs/ +120673759,My Quick Uploads/ 120673755,My Syncs/ 120673756,My backups/ +120673757,My contacts/ 120673761,S3 Storage/ -Properties: -- Config: encoding -- Env Var: RCLONE_FILEFABRIC_ENCODING -- Type: MultiEncoder -- Default: Slash,Del,Ctl,InvalidUtf8,Dot + The ID for "S3 Storage" would be `120673761`. -FTP -FTP is the File Transfer Protocol. Rclone FTP support is provided using -the github.com/jlaffaye/ftp package. + ### Standard options -Limitations of Rclone's FTP backend + Here are the Standard options specific to filefabric (Enterprise File Fabric). -Paths are specified as remote:path. If the path does not begin with a / -it is relative to the home directory of the user. An empty path remote: -refers to the user's home directory. + #### --filefabric-url -Configuration + URL of the Enterprise File Fabric to connect to. -To create an FTP configuration named remote, run + Properties: - rclone config + - Config: url + - Env Var: RCLONE_FILEFABRIC_URL + - Type: string + - Required: true + - Examples: + - "https://storagemadeeasy.com" + - Storage Made Easy US + - "https://eu.storagemadeeasy.com" + - Storage Made Easy EU + - "https://yourfabric.smestorage.com" + - Connect to your Enterprise File Fabric -Rclone config guides you through an interactive setup process. A minimal -rclone FTP remote definition only requires host, username and password. -For an anonymous FTP server, see below. + #### --filefabric-root-folder-id - No remotes found, make a new one? - n) New remote - r) Rename remote - c) Copy remote - s) Set configuration password - q) Quit config - n/r/c/s/q> n - name> remote - Type of storage to configure. - Enter a string value. Press Enter for the default (""). - Choose a number from below, or type in your own value - [snip] - XX / FTP - \ "ftp" - [snip] - Storage> ftp - ** See help for ftp backend at: https://rclone.org/ftp/ ** + ID of the root folder. - FTP host to connect to - Enter a string value. Press Enter for the default (""). - Choose a number from below, or type in your own value - 1 / Connect to ftp.example.com - \ "ftp.example.com" - host> ftp.example.com - FTP username - Enter a string value. Press Enter for the default ("$USER"). - user> - FTP port number - Enter a signed integer. Press Enter for the default (21). - port> - FTP password - y) Yes type in my own password - g) Generate random password - y/g> y - Enter the password: - password: - Confirm the password: - password: - Use FTP over TLS (Implicit) - Enter a boolean value (true or false). Press Enter for the default ("false"). - tls> - Use FTP over TLS (Explicit) - Enter a boolean value (true or false). Press Enter for the default ("false"). - explicit_tls> - Remote config + Leave blank normally. + + Fill in to make rclone start with directory of a given ID. + + + Properties: + + - Config: root_folder_id + - Env Var: RCLONE_FILEFABRIC_ROOT_FOLDER_ID + - Type: string + - Required: false + + #### --filefabric-permanent-token + + Permanent Authentication Token. + + A Permanent Authentication Token can be created in the Enterprise File + Fabric, on the users Dashboard under Security, there is an entry + you'll see called "My Authentication Tokens". 
Click the Manage button
+ to create one.
+
+ These tokens are normally valid for several years.
+
+ For more info see: https://docs.storagemadeeasy.com/organisationcloud/api-tokens
+
+
+ Properties:
+
+ - Config: permanent_token
+ - Env Var: RCLONE_FILEFABRIC_PERMANENT_TOKEN
+ - Type: string
+ - Required: false
+
+ ### Advanced options
+
+ Here are the Advanced options specific to filefabric (Enterprise File Fabric).
+
+ #### --filefabric-token
+
+ Session Token.
+
+ This is a session token which rclone caches in the config file. It is
+ usually valid for 1 hour.
+
+ Don't set this value - rclone will set it automatically.
+
+
+ Properties:
+
+ - Config: token
+ - Env Var: RCLONE_FILEFABRIC_TOKEN
+ - Type: string
+ - Required: false
+
+ #### --filefabric-token-expiry
+
+ Token expiry time.
+
+ Don't set this value - rclone will set it automatically.
+
+
+ Properties:
+
+ - Config: token_expiry
+ - Env Var: RCLONE_FILEFABRIC_TOKEN_EXPIRY
+ - Type: string
+ - Required: false
+
+ #### --filefabric-version
+
+ Version read from the file fabric.
+
+ Don't set this value - rclone will set it automatically.
+
+
+ Properties:
+
+ - Config: version
+ - Env Var: RCLONE_FILEFABRIC_VERSION
+ - Type: string
+ - Required: false
+
+ #### --filefabric-encoding
+
+ The encoding for the backend.
+
+ See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+ Properties:
+
+ - Config: encoding
+ - Env Var: RCLONE_FILEFABRIC_ENCODING
+ - Type: MultiEncoder
+ - Default: Slash,Del,Ctl,InvalidUtf8,Dot
+
+
+
+ # FTP
+
+ FTP is the File Transfer Protocol. Rclone FTP support is provided using the
+ [github.com/jlaffaye/ftp](https://godoc.org/github.com/jlaffaye/ftp)
+ package.
+
+ [Limitations of Rclone's FTP backend](#limitations)
+
+ Paths are specified as `remote:path`. If the path does not begin with
+ a `/` it is relative to the home directory of the user. An empty path
+ `remote:` refers to the user's home directory.
+
+ ## Configuration
+
+ To create an FTP configuration named `remote`, run
+
+ rclone config
+
+ Rclone config guides you through an interactive setup process. A minimal
+ rclone FTP remote definition only requires host, username and password.
+ For an anonymous FTP server, see [below](#anonymous-ftp).
+
+    No remotes found, make a new one?
+    n) New remote
+    r) Rename remote
+    c) Copy remote
+    s) Set configuration password
+    q) Quit config
+    n/r/c/s/q> n
+    name> remote
+    Type of storage to configure.
+    Enter a string value. Press Enter for the default ("").
+    Choose a number from below, or type in your own value
+    [snip]
+    XX / FTP
+       \ "ftp"
+    [snip]
+    Storage> ftp
+    ** See help for ftp backend at: https://rclone.org/ftp/ **
+
+    FTP host to connect to
+    Enter a string value. Press Enter for the default ("").
+    Choose a number from below, or type in your own value
+     1 / Connect to ftp.example.com
+       \ "ftp.example.com"
+    host> ftp.example.com
+    FTP username
+    Enter a string value. Press Enter for the default ("$USER").
+    user>
+    FTP port number
+    Enter a signed integer. Press Enter for the default (21).
+    port>
+    FTP password
+    y) Yes type in my own password
+    g) Generate random password
+    y/g> y
+    Enter the password:
+    password:
+    Confirm the password:
+    password:
+    Use FTP over TLS (Implicit)
+    Enter a boolean value (true or false). Press Enter for the default ("false").
+    tls>
+    Use FTP over TLS (Explicit)
+    Enter a boolean value (true or false). Press Enter for the default ("false").
+    explicit_tls>
+    Remote config
+    --------------------
+    [remote]
+    type = ftp
+    host = ftp.example.com
+    pass = *** ENCRYPTED ***
+    --------------------
+    y) Yes this is OK
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+
+ To see all directories in the home directory of `remote`
+
+ rclone lsd remote:
+
+ Make a new directory
+
+ rclone mkdir remote:path/to/directory
+
+ List the contents of a directory
+
+ rclone ls remote:path/to/directory
+
+ Sync `/home/local/directory` to the remote directory, deleting any
+ excess files in the directory.
+
+ rclone sync --interactive /home/local/directory remote:directory
+
+ ### Anonymous FTP
+
+ When connecting to an FTP server that allows anonymous login, you can use the
+ special "anonymous" username. Traditionally, this user account accepts any
+ string as a password, although it is common to use either the password
+ "anonymous" or "guest". Some servers require the use of a valid e-mail
+ address as password.
+
+ Using [on-the-fly](#backend-path-to-dir) or
+ [connection string](https://rclone.org/docs/#connection-strings) remotes makes it easy to access
+ such servers, without requiring any configuration in advance. The following
+ are examples of that:
+
+ rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=$(rclone obscure dummy)
+ rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=$(rclone obscure dummy):
+
+ The above examples work in Linux shells and in PowerShell, but not Windows
+ Command Prompt. They execute the [rclone obscure](https://rclone.org/commands/rclone_obscure/)
+ command to create a password string in the format required by the
+ [pass](#ftp-pass) option. The following examples are exactly the same, except use
+ an already obscured string representation of the same password "dummy", and
+ therefore work even in Windows Command Prompt:
+
+ rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM
+ rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM:
+
+ ### Implicit TLS
+
+ Rclone FTP supports implicit FTP over TLS servers (FTPS). This has to
+ be enabled in the FTP backend config for the remote, or with
+ [`--ftp-tls`](#ftp-tls). The default FTPS port is `990`, not `21` and
+ can be set with [`--ftp-port`](#ftp-port).
+
+ ### Restricted filename characters
+
+ In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
+ the following characters are also replaced:
+
+ File names cannot end with the following characters. Replacement is
+ limited to the last character in a file name:
+
+ | Character | Value | Replacement |
+ | --------- |:-----:|:-----------:|
+ | SP | 0x20 | ␠ |
+
+ Not all FTP servers can have all characters in file names, for example:
+
+ | FTP Server| Forbidden characters |
+ | --------- |:--------------------:|
+ | proftpd | `*` |
+ | pureftpd | `\ [ ]` |
+
+ This backend's interactive configuration wizard provides a selection of
+ sensible encoding settings for major FTP servers: ProFTPd, PureFTPd, VsFTPd.
+ Just hit a selection number when prompted.
+
+
+ ### Standard options
+
+ Here are the Standard options specific to ftp (FTP).
+
+ #### --ftp-host
+
+ FTP host to connect to.
+
+ E.g. "ftp.example.com".
+
+ Properties:
+
+ - Config: host
+ - Env Var: RCLONE_FTP_HOST
+ - Type: string
+ - Required: true
+
+ #### --ftp-user
+
+ FTP username. 
+ + Properties: + + - Config: user + - Env Var: RCLONE_FTP_USER + - Type: string + - Default: "$USER" + + #### --ftp-port + + FTP port number. + + Properties: + + - Config: port + - Env Var: RCLONE_FTP_PORT + - Type: int + - Default: 21 + + #### --ftp-pass + + FTP password. + + **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + + Properties: + + - Config: pass + - Env Var: RCLONE_FTP_PASS + - Type: string + - Required: false + + #### --ftp-tls + + Use Implicit FTPS (FTP over TLS). + + When using implicit FTP over TLS the client connects using TLS + right from the start which breaks compatibility with + non-TLS-aware servers. This is usually served over port 990 rather + than port 21. Cannot be used in combination with explicit FTPS. + + Properties: + + - Config: tls + - Env Var: RCLONE_FTP_TLS + - Type: bool + - Default: false + + #### --ftp-explicit-tls + + Use Explicit FTPS (FTP over TLS). + + When using explicit FTP over TLS the client explicitly requests + security from the server in order to upgrade a plain text connection + to an encrypted one. Cannot be used in combination with implicit FTPS. + + Properties: + + - Config: explicit_tls + - Env Var: RCLONE_FTP_EXPLICIT_TLS + - Type: bool + - Default: false + + ### Advanced options + + Here are the Advanced options specific to ftp (FTP). + + #### --ftp-concurrency + + Maximum number of FTP simultaneous connections, 0 for unlimited. + + Note that setting this is very likely to cause deadlocks so it should + be used with care. + + If you are doing a sync or copy then make sure concurrency is one more + than the sum of `--transfers` and `--checkers`. + + If you use `--check-first` then it just needs to be one more than the + maximum of `--checkers` and `--transfers`. + + So for `concurrency 3` you'd use `--checkers 2 --transfers 2 + --check-first` or `--checkers 1 --transfers 1`. + + + + Properties: + + - Config: concurrency + - Env Var: RCLONE_FTP_CONCURRENCY + - Type: int + - Default: 0 + + #### --ftp-no-check-certificate + + Do not verify the TLS certificate of the server. + + Properties: + + - Config: no_check_certificate + - Env Var: RCLONE_FTP_NO_CHECK_CERTIFICATE + - Type: bool + - Default: false + + #### --ftp-disable-epsv + + Disable using EPSV even if server advertises support. + + Properties: + + - Config: disable_epsv + - Env Var: RCLONE_FTP_DISABLE_EPSV + - Type: bool + - Default: false + + #### --ftp-disable-mlsd + + Disable using MLSD even if server advertises support. + + Properties: + + - Config: disable_mlsd + - Env Var: RCLONE_FTP_DISABLE_MLSD + - Type: bool + - Default: false + + #### --ftp-disable-utf8 + + Disable using UTF-8 even if server advertises support. + + Properties: + + - Config: disable_utf8 + - Env Var: RCLONE_FTP_DISABLE_UTF8 + - Type: bool + - Default: false + + #### --ftp-writing-mdtm + + Use MDTM to set modification time (VsFtpd quirk) + + Properties: + + - Config: writing_mdtm + - Env Var: RCLONE_FTP_WRITING_MDTM + - Type: bool + - Default: false + + #### --ftp-force-list-hidden + + Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD. + + Properties: + + - Config: force_list_hidden + - Env Var: RCLONE_FTP_FORCE_LIST_HIDDEN + - Type: bool + - Default: false + + #### --ftp-idle-timeout + + Max time before closing idle connections. + + If no connections have been returned to the connection pool in the time + given, rclone will empty the connection pool. + + Set to 0 to keep connections indefinitely. 
+
+
+ Properties:
+
+ - Config: idle_timeout
+ - Env Var: RCLONE_FTP_IDLE_TIMEOUT
+ - Type: Duration
+ - Default: 1m0s
+
+ #### --ftp-close-timeout
+
+ Maximum time to wait for a response to close.
+
+ Properties:
+
+ - Config: close_timeout
+ - Env Var: RCLONE_FTP_CLOSE_TIMEOUT
+ - Type: Duration
+ - Default: 1m0s
+
+ #### --ftp-tls-cache-size
+
+ Size of TLS session cache for all control and data connections.
+
+ The TLS cache allows TLS sessions to be resumed and PSK to be reused
+ between connections. Increase this if the default size is not enough,
+ resulting in TLS resumption errors.
+ Enabled by default. Use 0 to disable.
+
+ Properties:
+
+ - Config: tls_cache_size
+ - Env Var: RCLONE_FTP_TLS_CACHE_SIZE
+ - Type: int
+ - Default: 32
+
+ #### --ftp-disable-tls13
+
+ Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
+
+ Properties:
+
+ - Config: disable_tls13
+ - Env Var: RCLONE_FTP_DISABLE_TLS13
+ - Type: bool
+ - Default: false
+
+ #### --ftp-shut-timeout
+
+ Maximum time to wait for data connection closing status.
+
+ Properties:
+
+ - Config: shut_timeout
+ - Env Var: RCLONE_FTP_SHUT_TIMEOUT
+ - Type: Duration
+ - Default: 1m0s
+
+ #### --ftp-ask-password
+
+ Allow asking for FTP password when needed.
+
+ If this is set and no password is supplied then rclone will ask for a password.
+
+
+ Properties:
+
+ - Config: ask_password
+ - Env Var: RCLONE_FTP_ASK_PASSWORD
+ - Type: bool
+ - Default: false
+
+ #### --ftp-socks-proxy
+
+ Socks 5 proxy host.
+
+ Supports the format user:pass@host:port, user@host:port, host:port.
+
+ Example:
+
+ myUser:myPass@localhost:9005
+
+
+ Properties:
+
+ - Config: socks_proxy
+ - Env Var: RCLONE_FTP_SOCKS_PROXY
+ - Type: string
+ - Required: false
+
+ #### --ftp-encoding
+
+ The encoding for the backend.
+
+ See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+ Properties:
+
+ - Config: encoding
+ - Env Var: RCLONE_FTP_ENCODING
+ - Type: MultiEncoder
+ - Default: Slash,Del,Ctl,RightSpace,Dot
+ - Examples:
+     - "Asterisk,Ctl,Dot,Slash"
+         - ProFTPd can't handle '*' in file names
+     - "BackSlash,Ctl,Del,Dot,RightSpace,Slash,SquareBracket"
+         - PureFTPd can't handle '[]' or '*' in file names
+     - "Ctl,LeftPeriod,Slash"
+         - VsFTPd can't handle file names starting with dot
+
+
+
+ ## Limitations
+
+ FTP servers acting as rclone remotes must support `passive` mode.
+ The mode cannot be configured as `passive` is the only supported one.
+ Rclone's FTP implementation is not compatible with `active` mode
+ as [the library it uses doesn't support it](https://github.com/jlaffaye/ftp/issues/29).
+ This will likely never be supported due to security concerns.
+
+ Rclone's FTP backend does not support any checksums but can compare
+ file sizes.
+
+ `rclone about` is not supported by the FTP backend. Backends without
+ this capability cannot determine free space for an rclone mount or
+ use policy `mfs` (most free space) as a member of an rclone union
+ remote.
+
+ See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
+
+ The implementation of `--dump headers`,
+ `--dump bodies`, `--dump auth` for debugging isn't the same as
+ for rclone HTTP based backends - it has less fine grained control.
+
+ `--timeout` isn't supported (but `--contimeout` is).
+
+ `--bind` isn't supported.
+
+ Rclone's FTP backend could support server-side move but does not
+ at present.
+
+ The `ftp_proxy` environment variable is not currently supported. 
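+
+ Since the backend can compare sizes but not checksums, a size-only
+ check is one way to verify a transfer after the fact; a minimal
+ sketch, assuming a remote named `remote:`:
+
+     rclone check --size-only /home/local/directory remote:directory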
+
+ #### Modified time
+
+ File modification time (timestamps) is supported to 1 second resolution
+ for major FTP servers: ProFTPd, PureFTPd, VsFTPd, and FileZilla FTP server.
+ The `VsFTPd` server has a non-standard implementation of time related protocol
+ commands and needs a special configuration setting: `writing_mdtm = true`.
+
+ Support for precise file time with other FTP servers varies depending on what
+ protocol extensions they advertise. If all the `MLSD`, `MDTM` and `MFTM`
+ extensions are present, rclone will use them together to provide precise time.
+ Otherwise the times you see on the FTP server through rclone are those of the
+ last file upload.
+
+ You can use the following command to check whether rclone can use precise time
+ with your FTP server: `rclone backend features your_ftp_remote:` (the trailing
+ colon is important). Look for the number in the line tagged by `Precision`
+ designating the remote time precision expressed as nanoseconds. A value of
+ `1000000000` means that file time precision of 1 second is available.
+ A value of `3153600000000000000` (or another large number) means "unsupported".
+
+ # Google Cloud Storage
+
+ Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
+ command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
+
+ ## Configuration
+
+ The initial setup for google cloud storage involves getting a token from Google Cloud Storage
+ which you need to do in your browser. `rclone config` walks you
+ through it.
+
+ Here is an example of how to make a remote called `remote`. First run:
+
+ rclone config
+
+ This will guide you through an interactive setup process:
+
+    n) New remote
+    d) Delete remote
+    q) Quit config
+    e/n/d/q> n
+    name> remote
+    Type of storage to configure.
+    Choose a number from below, or type in your own value
+    [snip]
+    XX / Google Cloud Storage (this is not Google Drive)
+       \ "google cloud storage"
+    [snip]
+    Storage> google cloud storage
+    Google Application Client Id - leave blank normally.
+    client_id>
+    Google Application Client Secret - leave blank normally.
+    client_secret>
+    Project number optional - needed only for list/create/delete buckets - see your developer console.
+    project_number> 12345678
+    Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
+    service_account_file>
+    Access Control List for new objects.
+    Choose a number from below, or type in your own value
+     1 / Object owner gets OWNER access, and all Authenticated Users get READER access.
+       \ "authenticatedRead"
+     2 / Object owner gets OWNER access, and project team owners get OWNER access.
+       \ "bucketOwnerFullControl"
+     3 / Object owner gets OWNER access, and project team owners get READER access.
+       \ "bucketOwnerRead"
+     4 / Object owner gets OWNER access [default if left blank].
+       \ "private"
+     5 / Object owner gets OWNER access, and project team members get access according to their roles.
+       \ "projectPrivate"
+     6 / Object owner gets OWNER access, and all Users get READER access.
+       \ "publicRead"
+    object_acl> 4
+    Access Control List for new buckets.
+    Choose a number from below, or type in your own value
+     1 / Project team owners get OWNER access, and all Authenticated Users get READER access.
+       \ "authenticatedRead"
+     2 / Project team owners get OWNER access [default if left blank].
+       \ "private"
+     3 / Project team members get access according to their roles.
+       \ "projectPrivate"
+     4 / Project team owners get OWNER access, and all Users get READER access.
+       \ "publicRead"
+     5 / Project team owners get OWNER access, and all Users get WRITER access.
+       \ "publicReadWrite"
+    bucket_acl> 2
+    Location for the newly created buckets.
+    Choose a number from below, or type in your own value
+     1 / Empty for default location (US).
+       \ ""
+     2 / Multi-regional location for Asia.
+       \ "asia"
+     3 / Multi-regional location for Europe.
+       \ "eu"
+     4 / Multi-regional location for United States.
+       \ "us"
+     5 / Taiwan.
+       \ "asia-east1"
+     6 / Tokyo.
+       \ "asia-northeast1"
+     7 / Singapore.
+       \ "asia-southeast1"
+     8 / Sydney.
+       \ "australia-southeast1"
+     9 / Belgium.
+       \ "europe-west1"
+    10 / London.
+       \ "europe-west2"
+    11 / Iowa.
+       \ "us-central1"
+    12 / South Carolina.
+       \ "us-east1"
+    13 / Northern Virginia.
+       \ "us-east4"
+    14 / Oregon.
+       \ "us-west1"
+    location> 12
+    The storage class to use when storing objects in Google Cloud Storage.
+    Choose a number from below, or type in your own value
+     1 / Default
+       \ ""
+     2 / Multi-regional storage class
+       \ "MULTI_REGIONAL"
+     3 / Regional storage class
+       \ "REGIONAL"
+     4 / Nearline storage class
+       \ "NEARLINE"
+     5 / Coldline storage class
+       \ "COLDLINE"
+     6 / Durable reduced availability storage class
+       \ "DURABLE_REDUCED_AVAILABILITY"
+    storage_class> 5
+    Remote config
+    Use web browser to automatically authenticate rclone with remote?
+     * Say Y if the machine running rclone has a web browser you can use
+     * Say N if running rclone on a (remote) machine without web browser access
+    If not sure try Y. If Y failed, try N.
+    y) Yes
+    n) No
+    y/n> y
+    If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+    Log in and authorize rclone for access
+    Waiting for code...
+    Got code
+    --------------------
+    [remote]
+    type = google cloud storage
+    client_id =
+    client_secret =
+    token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null}
+    project_number = 12345678
+    object_acl = private
+    bucket_acl = private
+    --------------------
+    y) Yes this is OK
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+
+
+ See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
+ machine with no Internet browser available.
+
+ Note that rclone runs a webserver on your local machine to collect the
+ token as returned from Google if using web browser to automatically
+ authenticate. This only
+ runs from the moment it opens your browser to the moment you get back
+ the verification code. This is on `http://127.0.0.1:53682/` and it
+ may require you to unblock it temporarily if you are running a host
+ firewall, or use manual mode.
+
+ This remote is called `remote` and can now be used like this
+
+ See all the buckets in your project
+
+ rclone lsd remote:
+
+ Make a new bucket
+
+ rclone mkdir remote:bucket
+
+ List the contents of a bucket
+
+ rclone ls remote:bucket
+
+ Sync `/home/local/directory` to the remote bucket, deleting any excess
+ files in the bucket.
+
+ rclone sync --interactive /home/local/directory remote:bucket
+
+ ### Service Account support
+
+ You can set up rclone with Google Cloud Storage in an unattended mode,
+ i.e. not tied to a specific end-user Google account. This is useful
+ when you want to synchronise files onto machines that don't have
+ actively logged-in users, for example build machines.
+
+ To get credentials for Google Cloud Platform
+ [IAM Service Accounts](https://cloud.google.com/iam/docs/service-accounts),
+ please head to the
+ [Service Account](https://console.cloud.google.com/permissions/serviceaccounts)
+ section of the Google Developer Console. Service Accounts behave just
+ like normal `User` permissions in
+ [Google Cloud Storage ACLs](https://cloud.google.com/storage/docs/access-control),
+ so you can limit their access (e.g. make them read only). After
+ creating an account, a JSON file containing the Service Account's
+ credentials will be downloaded onto your machines. These credentials
+ are what rclone will use for authentication.

-When connecting to a FTP server that allows anonymous login, you can use
-the special "anonymous" username. 
Traditionally, this user account -accepts any string as a password, although it is common to use either -the password "anonymous" or "guest". Some servers require the use of a -valid e-mail address as password. + rclone ls remote:bucket -Using on-the-fly or connection string remotes makes it easy to access -such servers, without requiring any configuration in advance. The -following are examples of that: + Sync `/home/local/directory` to the remote bucket, deleting any excess + files in the bucket. - rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=$(rclone obscure dummy) - rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=$(rclone obscure dummy): + rclone sync --interactive /home/local/directory remote:bucket -The above examples work in Linux shells and in PowerShell, but not -Windows Command Prompt. They execute the rclone obscure command to -create a password string in the format required by the pass option. The -following examples are exactly the same, except use an already obscured -string representation of the same password "dummy", and therefore works -even in Windows Command Prompt: + ### Service Account support - rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM - rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM: + You can set up rclone with Google Cloud Storage in an unattended mode, + i.e. not tied to a specific end-user Google account. This is useful + when you want to synchronise files onto machines that don't have + actively logged-in users, for example build machines. -Implicit TLS + To get credentials for Google Cloud Platform + [IAM Service Accounts](https://cloud.google.com/iam/docs/service-accounts), + please head to the + [Service Account](https://console.cloud.google.com/permissions/serviceaccounts) + section of the Google Developer Console. Service Accounts behave just + like normal `User` permissions in + [Google Cloud Storage ACLs](https://cloud.google.com/storage/docs/access-control), + so you can limit their access (e.g. make them read only). After + creating an account, a JSON file containing the Service Account's + credentials will be downloaded onto your machines. These credentials + are what rclone will use for authentication. -Rlone FTP supports implicit FTP over TLS servers (FTPS). This has to be -enabled in the FTP backend config for the remote, or with --ftp-tls. The -default FTPS port is 990, not 21 and can be set with --ftp-port. + To use a Service Account instead of OAuth2 token flow, enter the path + to your Service Account credentials at the `service_account_file` + prompt and rclone won't use the browser based authentication + flow. If you'd rather stuff the contents of the credentials file into + the rclone config file, you can set `service_account_credentials` with + the actual contents of the file instead, or set the equivalent + environment variable. -Restricted filename characters + ### Anonymous Access -In addition to the default restricted characters set the following -characters are also replaced: + For downloads of objects that permit public access you can configure rclone + to use anonymous access by setting `anonymous` to `true`. + With unauthorized access you can't write or create files but only read or list + those buckets and objects that have public read access. -File names cannot end with the following characters. 
Replacement is -limited to the last character in a file name: + ### Application Default Credentials - Character Value Replacement - ----------- ------- ------------- - SP 0x20 ␠ + If no other source of credentials is provided, rclone will fall back + to + [Application Default Credentials](https://cloud.google.com/video-intelligence/docs/common/auth#authenticating_with_application_default_credentials) + this is useful both when you already have configured authentication + for your developer account, or in production when running on a google + compute host. Note that if running in docker, you may need to run + additional commands on your google compute machine - + [see this page](https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud_as_a_docker_credential_helper). -Not all FTP servers can have all characters in file names, for example: + Note that in the case application default credentials are used, there + is no need to explicitly configure a project number. - FTP Server Forbidden characters - ------------ ---------------------- - proftpd * - pureftpd \ [ ] + ### --fast-list -This backend's interactive configuration wizard provides a selection of -sensible encoding settings for major FTP servers: ProFTPd, PureFTPd, -VsFTPd. Just hit a selection number when prompted. + This remote supports `--fast-list` which allows you to use fewer + transactions in exchange for more memory. See the [rclone + docs](https://rclone.org/docs/#fast-list) for more details. -Standard options + ### Custom upload headers -Here are the Standard options specific to ftp (FTP). + You can set custom upload headers with the `--header-upload` + flag. Google Cloud Storage supports the headers as described in the + [working with metadata documentation](https://cloud.google.com/storage/docs/gsutil/addlhelp/WorkingWithObjectMetadata) ---ftp-host + - Cache-Control + - Content-Disposition + - Content-Encoding + - Content-Language + - Content-Type + - X-Goog-Storage-Class + - X-Goog-Meta- -FTP host to connect to. + Eg `--header-upload "Content-Type text/potato"` -E.g. "ftp.example.com". + Note that the last of these is for setting custom metadata in the form + `--header-upload "x-goog-meta-key: value"` -Properties: + ### Modification time -- Config: host -- Env Var: RCLONE_FTP_HOST -- Type: string -- Required: true + Google Cloud Storage stores md5sum natively. + Google's [gsutil](https://cloud.google.com/storage/docs/gsutil) tool stores modification time + with one-second precision as `goog-reserved-file-mtime` in file metadata. ---ftp-user + To ensure compatibility with gsutil, rclone stores modification time in 2 separate metadata entries. + `mtime` uses RFC3339 format with one-nanosecond precision. + `goog-reserved-file-mtime` uses Unix timestamp format with one-second precision. + To get modification time from object metadata, rclone reads the metadata in the following order: `mtime`, `goog-reserved-file-mtime`, object updated time. -FTP username. + Note that rclone's default modify window is 1ns. + Files uploaded by gsutil only contain timestamps with one-second precision. + If you use rclone to sync files previously uploaded by gsutil, + rclone will attempt to update modification time for all these files. + To avoid these possibly unnecessary updates, use `--modify-window 1s`. 
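+
+ As a sketch, a sync over objects previously uploaded by gsutil might
+ therefore look like this (the bucket name and local path are
+ placeholders):
+
+     rclone sync --modify-window 1s /home/local/directory remote:bucket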
-Properties: + ### Restricted filename characters -- Config: user -- Env Var: RCLONE_FTP_USER -- Type: string -- Default: "$USER" + | Character | Value | Replacement | + | --------- |:-----:|:-----------:| + | NUL | 0x00 | ␀ | + | LF | 0x0A | ␊ | + | CR | 0x0D | ␍ | + | / | 0x2F | / | ---ftp-port + Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), + as they can't be used in JSON strings. -FTP port number. -Properties: + ### Standard options -- Config: port -- Env Var: RCLONE_FTP_PORT -- Type: int -- Default: 21 + Here are the Standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)). ---ftp-pass + #### --gcs-client-id -FTP password. + OAuth Client Id. -NB Input to this must be obscured - see rclone obscure. + Leave blank normally. -Properties: + Properties: -- Config: pass -- Env Var: RCLONE_FTP_PASS -- Type: string -- Required: false + - Config: client_id + - Env Var: RCLONE_GCS_CLIENT_ID + - Type: string + - Required: false ---ftp-tls + #### --gcs-client-secret -Use Implicit FTPS (FTP over TLS). + OAuth Client Secret. -When using implicit FTP over TLS the client connects using TLS right -from the start which breaks compatibility with non-TLS-aware servers. -This is usually served over port 990 rather than port 21. Cannot be used -in combination with explicit FTPS. + Leave blank normally. -Properties: + Properties: -- Config: tls -- Env Var: RCLONE_FTP_TLS -- Type: bool -- Default: false + - Config: client_secret + - Env Var: RCLONE_GCS_CLIENT_SECRET + - Type: string + - Required: false ---ftp-explicit-tls + #### --gcs-project-number -Use Explicit FTPS (FTP over TLS). + Project number. -When using explicit FTP over TLS the client explicitly requests security -from the server in order to upgrade a plain text connection to an -encrypted one. Cannot be used in combination with implicit FTPS. + Optional - needed only for list/create/delete buckets - see your developer console. -Properties: + Properties: -- Config: explicit_tls -- Env Var: RCLONE_FTP_EXPLICIT_TLS -- Type: bool -- Default: false + - Config: project_number + - Env Var: RCLONE_GCS_PROJECT_NUMBER + - Type: string + - Required: false -Advanced options + #### --gcs-user-project -Here are the Advanced options specific to ftp (FTP). + User project. ---ftp-concurrency + Optional - needed only for requester pays. -Maximum number of FTP simultaneous connections, 0 for unlimited. + Properties: -Note that setting this is very likely to cause deadlocks so it should be -used with care. + - Config: user_project + - Env Var: RCLONE_GCS_USER_PROJECT + - Type: string + - Required: false -If you are doing a sync or copy then make sure concurrency is one more -than the sum of --transfers and --checkers. + #### --gcs-service-account-file -If you use --check-first then it just needs to be one more than the -maximum of --checkers and --transfers. + Service Account Credentials JSON file path. -So for concurrency 3 you'd use --checkers 2 --transfers 2 --check-first -or --checkers 1 --transfers 1. + Leave blank normally. + Needed only if you want use SA instead of interactive login. -Properties: + Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`. 
-- Config: concurrency -- Env Var: RCLONE_FTP_CONCURRENCY -- Type: int -- Default: 0 + Properties: ---ftp-no-check-certificate + - Config: service_account_file + - Env Var: RCLONE_GCS_SERVICE_ACCOUNT_FILE + - Type: string + - Required: false -Do not verify the TLS certificate of the server. + #### --gcs-service-account-credentials -Properties: + Service Account Credentials JSON blob. -- Config: no_check_certificate -- Env Var: RCLONE_FTP_NO_CHECK_CERTIFICATE -- Type: bool -- Default: false + Leave blank normally. + Needed only if you want use SA instead of interactive login. ---ftp-disable-epsv + Properties: -Disable using EPSV even if server advertises support. + - Config: service_account_credentials + - Env Var: RCLONE_GCS_SERVICE_ACCOUNT_CREDENTIALS + - Type: string + - Required: false -Properties: + #### --gcs-anonymous -- Config: disable_epsv -- Env Var: RCLONE_FTP_DISABLE_EPSV -- Type: bool -- Default: false + Access public buckets and objects without credentials. ---ftp-disable-mlsd + Set to 'true' if you just want to download files and don't configure credentials. -Disable using MLSD even if server advertises support. + Properties: -Properties: + - Config: anonymous + - Env Var: RCLONE_GCS_ANONYMOUS + - Type: bool + - Default: false -- Config: disable_mlsd -- Env Var: RCLONE_FTP_DISABLE_MLSD -- Type: bool -- Default: false + #### --gcs-object-acl ---ftp-disable-utf8 - -Disable using UTF-8 even if server advertises support. - -Properties: - -- Config: disable_utf8 -- Env Var: RCLONE_FTP_DISABLE_UTF8 -- Type: bool -- Default: false - ---ftp-writing-mdtm - -Use MDTM to set modification time (VsFtpd quirk) - -Properties: - -- Config: writing_mdtm -- Env Var: RCLONE_FTP_WRITING_MDTM -- Type: bool -- Default: false - ---ftp-force-list-hidden - -Use LIST -a to force listing of hidden files and folders. This will -disable the use of MLSD. - -Properties: - -- Config: force_list_hidden -- Env Var: RCLONE_FTP_FORCE_LIST_HIDDEN -- Type: bool -- Default: false - ---ftp-idle-timeout - -Max time before closing idle connections. - -If no connections have been returned to the connection pool in the time -given, rclone will empty the connection pool. - -Set to 0 to keep connections indefinitely. - -Properties: - -- Config: idle_timeout -- Env Var: RCLONE_FTP_IDLE_TIMEOUT -- Type: Duration -- Default: 1m0s - ---ftp-close-timeout - -Maximum time to wait for a response to close. - -Properties: - -- Config: close_timeout -- Env Var: RCLONE_FTP_CLOSE_TIMEOUT -- Type: Duration -- Default: 1m0s - ---ftp-tls-cache-size - -Size of TLS session cache for all control and data connections. - -TLS cache allows to resume TLS sessions and reuse PSK between -connections. Increase if default size is not enough resulting in TLS -resumption errors. Enabled by default. Use 0 to disable. - -Properties: - -- Config: tls_cache_size -- Env Var: RCLONE_FTP_TLS_CACHE_SIZE -- Type: int -- Default: 32 - ---ftp-disable-tls13 - -Disable TLS 1.3 (workaround for FTP servers with buggy TLS) - -Properties: - -- Config: disable_tls13 -- Env Var: RCLONE_FTP_DISABLE_TLS13 -- Type: bool -- Default: false - ---ftp-shut-timeout - -Maximum time to wait for data connection closing status. - -Properties: - -- Config: shut_timeout -- Env Var: RCLONE_FTP_SHUT_TIMEOUT -- Type: Duration -- Default: 1m0s - ---ftp-ask-password - -Allow asking for FTP password when needed. 
- -If this is set and no password is supplied then rclone will ask for a -password - -Properties: - -- Config: ask_password -- Env Var: RCLONE_FTP_ASK_PASSWORD -- Type: bool -- Default: false - ---ftp-encoding - -The encoding for the backend. - -See the encoding section in the overview for more info. - -Properties: - -- Config: encoding -- Env Var: RCLONE_FTP_ENCODING -- Type: MultiEncoder -- Default: Slash,Del,Ctl,RightSpace,Dot -- Examples: - - "Asterisk,Ctl,Dot,Slash" - - ProFTPd can't handle '*' in file names - - "BackSlash,Ctl,Del,Dot,RightSpace,Slash,SquareBracket" - - PureFTPd can't handle '[]' or '*' in file names - - "Ctl,LeftPeriod,Slash" - - VsFTPd can't handle file names starting with dot - -Limitations - -FTP servers acting as rclone remotes must support passive mode. The mode -cannot be configured as passive is the only supported one. Rclone's FTP -implementation is not compatible with active mode as the library it uses -doesn't support it. This will likely never be supported due to security -concerns. - -Rclone's FTP backend does not support any checksums but can compare file -sizes. - -rclone about is not supported by the FTP backend. Backends without this -capability cannot determine free space for an rclone mount or use policy -mfs (most free space) as a member of an rclone union remote. - -See List of backends that do not support rclone about and rclone about - -The implementation of : --dump headers, --dump bodies, --dump auth for -debugging isn't the same as for rclone HTTP based backends - it has less -fine grained control. - ---timeout isn't supported (but --contimeout is). - ---bind isn't supported. - -Rclone's FTP backend could support server-side move but does not at -present. - -The ftp_proxy environment variable is not currently supported. - -Modified time - -File modification time (timestamps) is supported to 1 second resolution -for major FTP servers: ProFTPd, PureFTPd, VsFTPd, and FileZilla FTP -server. The VsFTPd server has non-standard implementation of time -related protocol commands and needs a special configuration setting: -writing_mdtm = true. - -Support for precise file time with other FTP servers varies depending on -what protocol extensions they advertise. If all the MLSD, MDTM and MFTM -extensions are present, rclone will use them together to provide precise -time. Otherwise the times you see on the FTP server through rclone are -those of the last file upload. - -You can use the following command to check whether rclone can use -precise time with your FTP server: -rclone backend features your_ftp_remote: (the trailing colon is -important). Look for the number in the line tagged by Precision -designating the remote time precision expressed as nanoseconds. A value -of 1000000000 means that file time precision of 1 second is available. A -value of 3153600000000000000 (or another large number) means -"unsupported". - -Google Cloud Storage - -Paths are specified as remote:bucket (or remote: for the lsd command.) -You may put subdirectories in too, e.g. remote:bucket/path/to/dir. - -Configuration - -The initial setup for google cloud storage involves getting a token from -Google Cloud Storage which you need to do in your browser. rclone config -walks you through it. - -Here is an example of how to make a remote called remote. First run: - - rclone config - -This will guide you through an interactive setup process: - - n) New remote - d) Delete remote - q) Quit config - e/n/d/q> n - name> remote - Type of storage to configure. 
- Choose a number from below, or type in your own value - [snip] - XX / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - [snip] - Storage> google cloud storage - Google Application Client Id - leave blank normally. - client_id> - Google Application Client Secret - leave blank normally. - client_secret> - Project number optional - needed only for list/create/delete buckets - see your developer console. - project_number> 12345678 - Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login. - service_account_file> Access Control List for new objects. - Choose a number from below, or type in your own value - 1 / Object owner gets OWNER access, and all Authenticated Users get READER access. - \ "authenticatedRead" - 2 / Object owner gets OWNER access, and project team owners get OWNER access. - \ "bucketOwnerFullControl" - 3 / Object owner gets OWNER access, and project team owners get READER access. - \ "bucketOwnerRead" - 4 / Object owner gets OWNER access [default if left blank]. - \ "private" - 5 / Object owner gets OWNER access, and project team members get access according to their roles. - \ "projectPrivate" - 6 / Object owner gets OWNER access, and all Users get READER access. - \ "publicRead" - object_acl> 4 + + Properties: + + - Config: object_acl + - Env Var: RCLONE_GCS_OBJECT_ACL + - Type: string + - Required: false + - Examples: + - "authenticatedRead" + - Object owner gets OWNER access. + - All Authenticated Users get READER access. + - "bucketOwnerFullControl" + - Object owner gets OWNER access. + - Project team owners get OWNER access. + - "bucketOwnerRead" + - Object owner gets OWNER access. + - Project team owners get READER access. + - "private" + - Object owner gets OWNER access. + - Default if left blank. + - "projectPrivate" + - Object owner gets OWNER access. + - Project team members get access according to their roles. + - "publicRead" + - Object owner gets OWNER access. + - All Users get READER access. + + #### --gcs-bucket-acl + Access Control List for new buckets. - Choose a number from below, or type in your own value - 1 / Project team owners get OWNER access, and all Authenticated Users get READER access. - \ "authenticatedRead" - 2 / Project team owners get OWNER access [default if left blank]. - \ "private" - 3 / Project team members get access according to their roles. - \ "projectPrivate" - 4 / Project team owners get OWNER access, and all Users get READER access. - \ "publicRead" - 5 / Project team owners get OWNER access, and all Users get WRITER access. - \ "publicReadWrite" - bucket_acl> 2 + + Properties: + + - Config: bucket_acl + - Env Var: RCLONE_GCS_BUCKET_ACL + - Type: string + - Required: false + - Examples: + - "authenticatedRead" + - Project team owners get OWNER access. + - All Authenticated Users get READER access. + - "private" + - Project team owners get OWNER access. + - Default if left blank. + - "projectPrivate" + - Project team members get access according to their roles. + - "publicRead" + - Project team owners get OWNER access. + - All Users get READER access. + - "publicReadWrite" + - Project team owners get OWNER access. + - All Users get WRITER access. + + #### --gcs-bucket-policy-only + + Access checks should use bucket-level IAM policies. + + If you want to upload objects to a bucket with Bucket Policy Only set + then you will need to set this. 
+ + When it is set, rclone: + + - ignores ACLs set on buckets + - ignores ACLs set on objects + - creates buckets with Bucket Policy Only set + + Docs: https://cloud.google.com/storage/docs/bucket-policy-only + + + Properties: + + - Config: bucket_policy_only + - Env Var: RCLONE_GCS_BUCKET_POLICY_ONLY + - Type: bool + - Default: false + + #### --gcs-location + Location for the newly created buckets. - Choose a number from below, or type in your own value - 1 / Empty for default location (US). - \ "" - 2 / Multi-regional location for Asia. - \ "asia" - 3 / Multi-regional location for Europe. - \ "eu" - 4 / Multi-regional location for United States. - \ "us" - 5 / Taiwan. - \ "asia-east1" - 6 / Tokyo. - \ "asia-northeast1" - 7 / Singapore. - \ "asia-southeast1" - 8 / Sydney. - \ "australia-southeast1" - 9 / Belgium. - \ "europe-west1" - 10 / London. - \ "europe-west2" - 11 / Iowa. - \ "us-central1" - 12 / South Carolina. - \ "us-east1" - 13 / Northern Virginia. - \ "us-east4" - 14 / Oregon. - \ "us-west1" - location> 12 + + Properties: + + - Config: location + - Env Var: RCLONE_GCS_LOCATION + - Type: string + - Required: false + - Examples: + - "" + - Empty for default location (US) + - "asia" + - Multi-regional location for Asia + - "eu" + - Multi-regional location for Europe + - "us" + - Multi-regional location for United States + - "asia-east1" + - Taiwan + - "asia-east2" + - Hong Kong + - "asia-northeast1" + - Tokyo + - "asia-northeast2" + - Osaka + - "asia-northeast3" + - Seoul + - "asia-south1" + - Mumbai + - "asia-south2" + - Delhi + - "asia-southeast1" + - Singapore + - "asia-southeast2" + - Jakarta + - "australia-southeast1" + - Sydney + - "australia-southeast2" + - Melbourne + - "europe-north1" + - Finland + - "europe-west1" + - Belgium + - "europe-west2" + - London + - "europe-west3" + - Frankfurt + - "europe-west4" + - Netherlands + - "europe-west6" + - Zürich + - "europe-central2" + - Warsaw + - "us-central1" + - Iowa + - "us-east1" + - South Carolina + - "us-east4" + - Northern Virginia + - "us-west1" + - Oregon + - "us-west2" + - California + - "us-west3" + - Salt Lake City + - "us-west4" + - Las Vegas + - "northamerica-northeast1" + - Montréal + - "northamerica-northeast2" + - Toronto + - "southamerica-east1" + - São Paulo + - "southamerica-west1" + - Santiago + - "asia1" + - Dual region: asia-northeast1 and asia-northeast2. + - "eur4" + - Dual region: europe-north1 and europe-west4. + - "nam4" + - Dual region: us-central1 and us-east1. + + #### --gcs-storage-class + The storage class to use when storing objects in Google Cloud Storage. - Choose a number from below, or type in your own value - 1 / Default - \ "" - 2 / Multi-regional storage class - \ "MULTI_REGIONAL" - 3 / Regional storage class - \ "REGIONAL" - 4 / Nearline storage class - \ "NEARLINE" - 5 / Coldline storage class - \ "COLDLINE" - 6 / Durable reduced availability storage class - \ "DURABLE_REDUCED_AVAILABILITY" - storage_class> 5 - Remote config - Use web browser to automatically authenticate rclone with remote? - * Say Y if the machine running rclone has a web browser you can use - * Say N if running rclone on a (remote) machine without web browser access - If not sure try Y. If Y failed, try N. - y) Yes - n) No - y/n> y - If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth - Log in and authorize rclone for access - Waiting for code... 
- Got code - -------------------- - [remote] - type = google cloud storage - client_id = - client_secret = - token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null} - project_number = 12345678 - object_acl = private - bucket_acl = private - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y - -See the remote setup docs for how to set it up on a machine with no -Internet browser available. - -Note that rclone runs a webserver on your local machine to collect the -token as returned from Google if using web browser to automatically -authenticate. This only runs from the moment it opens your browser to -the moment you get back the verification code. This is on -http://127.0.0.1:53682/ and this it may require you to unblock it -temporarily if you are running a host firewall, or use manual mode. - -This remote is called remote and can now be used like this - -See all the buckets in your project - - rclone lsd remote: - -Make a new bucket - - rclone mkdir remote:bucket - -List the contents of a bucket - - rclone ls remote:bucket - -Sync /home/local/directory to the remote bucket, deleting any excess -files in the bucket. - - rclone sync --interactive /home/local/directory remote:bucket - -Service Account support - -You can set up rclone with Google Cloud Storage in an unattended mode, -i.e. not tied to a specific end-user Google account. This is useful when -you want to synchronise files onto machines that don't have actively -logged-in users, for example build machines. - -To get credentials for Google Cloud Platform IAM Service Accounts, -please head to the Service Account section of the Google Developer -Console. Service Accounts behave just like normal User permissions in -Google Cloud Storage ACLs, so you can limit their access (e.g. make them -read only). After creating an account, a JSON file containing the -Service Account's credentials will be downloaded onto your machines. -These credentials are what rclone will use for authentication. - -To use a Service Account instead of OAuth2 token flow, enter the path to -your Service Account credentials at the service_account_file prompt and -rclone won't use the browser based authentication flow. If you'd rather -stuff the contents of the credentials file into the rclone config file, -you can set service_account_credentials with the actual contents of the -file instead, or set the equivalent environment variable. - -Anonymous Access - -For downloads of objects that permit public access you can configure -rclone to use anonymous access by setting anonymous to true. With -unauthorized access you can't write or create files but only read or -list those buckets and objects that have public read access. - -Application Default Credentials - -If no other source of credentials is provided, rclone will fall back to -Application Default Credentials this is useful both when you already -have configured authentication for your developer account, or in -production when running on a google compute host. Note that if running -in docker, you may need to run additional commands on your google -compute machine - see this page. -Note that in the case application default credentials are used, there is -no need to explicitly configure a project number. 
+ Properties: ---fast-list + - Config: storage_class + - Env Var: RCLONE_GCS_STORAGE_CLASS + - Type: string + - Required: false + - Examples: + - "" + - Default + - "MULTI_REGIONAL" + - Multi-regional storage class + - "REGIONAL" + - Regional storage class + - "NEARLINE" + - Nearline storage class + - "COLDLINE" + - Coldline storage class + - "ARCHIVE" + - Archive storage class + - "DURABLE_REDUCED_AVAILABILITY" + - Durable reduced availability storage class -This remote supports --fast-list which allows you to use fewer -transactions in exchange for more memory. See the rclone docs for more -details. + #### --gcs-env-auth -Custom upload headers + Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars). -You can set custom upload headers with the --header-upload flag. Google -Cloud Storage supports the headers as described in the working with -metadata documentation + Only applies if service_account_file and service_account_credentials is blank. -- Cache-Control -- Content-Disposition -- Content-Encoding -- Content-Language -- Content-Type -- X-Goog-Storage-Class -- X-Goog-Meta- + Properties: -Eg --header-upload "Content-Type text/potato" + - Config: env_auth + - Env Var: RCLONE_GCS_ENV_AUTH + - Type: bool + - Default: false + - Examples: + - "false" + - Enter credentials in the next step. + - "true" + - Get GCP IAM credentials from the environment (env vars or IAM). -Note that the last of these is for setting custom metadata in the form ---header-upload "x-goog-meta-key: value" + ### Advanced options -Modification time + Here are the Advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)). -Google Cloud Storage stores md5sum natively. Google's gsutil tool stores -modification time with one-second precision as goog-reserved-file-mtime -in file metadata. + #### --gcs-token -To ensure compatibility with gsutil, rclone stores modification time in -2 separate metadata entries. mtime uses RFC3339 format with -one-nanosecond precision. goog-reserved-file-mtime uses Unix timestamp -format with one-second precision. To get modification time from object -metadata, rclone reads the metadata in the following order: mtime, -goog-reserved-file-mtime, object updated time. + OAuth Access Token as a JSON blob. -Note that rclone's default modify window is 1ns. Files uploaded by -gsutil only contain timestamps with one-second precision. If you use -rclone to sync files previously uploaded by gsutil, rclone will attempt -to update modification time for all these files. To avoid these possibly -unnecessary updates, use --modify-window 1s. + Properties: -Restricted filename characters + - Config: token + - Env Var: RCLONE_GCS_TOKEN + - Type: string + - Required: false - Character Value Replacement - ----------- ------- ------------- - NUL 0x00 ␀ - LF 0x0A ␊ - CR 0x0D ␍ - / 0x2F / + #### --gcs-auth-url -Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON -strings. + Auth server URL. -Standard options + Leave blank to use the provider defaults. -Here are the Standard options specific to google cloud storage (Google -Cloud Storage (this is not Google Drive)). + Properties: ---gcs-client-id + - Config: auth_url + - Env Var: RCLONE_GCS_AUTH_URL + - Type: string + - Required: false -OAuth Client Id. + #### --gcs-token-url -Leave blank normally. + Token server url. -Properties: + Leave blank to use the provider defaults. 
-- Config: client_id -- Env Var: RCLONE_GCS_CLIENT_ID -- Type: string -- Required: false + Properties: ---gcs-client-secret + - Config: token_url + - Env Var: RCLONE_GCS_TOKEN_URL + - Type: string + - Required: false -OAuth Client Secret. + #### --gcs-directory-markers -Leave blank normally. + Upload an empty object with a trailing slash when a new directory is created -Properties: + Empty folders are unsupported for bucket based remotes, this option creates an empty + object ending with "/", to persist the folder. -- Config: client_secret -- Env Var: RCLONE_GCS_CLIENT_SECRET -- Type: string -- Required: false ---gcs-project-number + Properties: -Project number. + - Config: directory_markers + - Env Var: RCLONE_GCS_DIRECTORY_MARKERS + - Type: bool + - Default: false -Optional - needed only for list/create/delete buckets - see your -developer console. + #### --gcs-no-check-bucket -Properties: + If set, don't attempt to check the bucket exists or create it. -- Config: project_number -- Env Var: RCLONE_GCS_PROJECT_NUMBER -- Type: string -- Required: false + This can be useful when trying to minimise the number of transactions + rclone does if you know the bucket exists already. ---gcs-user-project -User project. + Properties: -Optional - needed only for requester pays. + - Config: no_check_bucket + - Env Var: RCLONE_GCS_NO_CHECK_BUCKET + - Type: bool + - Default: false -Properties: + #### --gcs-decompress -- Config: user_project -- Env Var: RCLONE_GCS_USER_PROJECT -- Type: string -- Required: false + If set this will decompress gzip encoded objects. ---gcs-service-account-file + It is possible to upload objects to GCS with "Content-Encoding: gzip" + set. Normally rclone will download these files as compressed objects. -Service Account Credentials JSON file path. + If this flag is set then rclone will decompress these files with + "Content-Encoding: gzip" as they are received. This means that rclone + can't check the size and hash but the file contents will be decompressed. -Leave blank normally. Needed only if you want use SA instead of -interactive login. -Leading ~ will be expanded in the file name as will environment -variables such as ${RCLONE_CONFIG_DIR}. + Properties: -Properties: + - Config: decompress + - Env Var: RCLONE_GCS_DECOMPRESS + - Type: bool + - Default: false -- Config: service_account_file -- Env Var: RCLONE_GCS_SERVICE_ACCOUNT_FILE -- Type: string -- Required: false + #### --gcs-endpoint ---gcs-service-account-credentials + Endpoint for the service. -Service Account Credentials JSON blob. + Leave blank normally. -Leave blank normally. Needed only if you want use SA instead of -interactive login. + Properties: -Properties: + - Config: endpoint + - Env Var: RCLONE_GCS_ENDPOINT + - Type: string + - Required: false -- Config: service_account_credentials -- Env Var: RCLONE_GCS_SERVICE_ACCOUNT_CREDENTIALS -- Type: string -- Required: false + #### --gcs-encoding ---gcs-anonymous + The encoding for the backend. -Access public buckets and objects without credentials. + See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. -Set to 'true' if you just want to download files and don't configure -credentials. + Properties: -Properties: + - Config: encoding + - Env Var: RCLONE_GCS_ENCODING + - Type: MultiEncoder + - Default: Slash,CrLf,InvalidUtf8,Dot -- Config: anonymous -- Env Var: RCLONE_GCS_ANONYMOUS -- Type: bool -- Default: false ---gcs-object-acl -Access Control List for new objects. 
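+
+   As an illustrative example, a copy that skips the bucket existence
+   check and writes directory markers could be run like this (the paths
+   are illustrative):
+
+       rclone copy --gcs-no-check-bucket --gcs-directory-markers /home/source remote:bucket/path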
+ ## Limitations -Properties: + `rclone about` is not supported by the Google Cloud Storage backend. Backends without + this capability cannot determine free space for an rclone mount or + use policy `mfs` (most free space) as a member of an rclone union + remote. -- Config: object_acl -- Env Var: RCLONE_GCS_OBJECT_ACL -- Type: string -- Required: false -- Examples: - - "authenticatedRead" - - Object owner gets OWNER access. - - All Authenticated Users get READER access. - - "bucketOwnerFullControl" - - Object owner gets OWNER access. - - Project team owners get OWNER access. - - "bucketOwnerRead" - - Object owner gets OWNER access. - - Project team owners get READER access. - - "private" - - Object owner gets OWNER access. - - Default if left blank. - - "projectPrivate" - - Object owner gets OWNER access. - - Project team members get access according to their roles. - - "publicRead" - - Object owner gets OWNER access. - - All Users get READER access. - ---gcs-bucket-acl - -Access Control List for new buckets. - -Properties: - -- Config: bucket_acl -- Env Var: RCLONE_GCS_BUCKET_ACL -- Type: string -- Required: false -- Examples: - - "authenticatedRead" - - Project team owners get OWNER access. - - All Authenticated Users get READER access. - - "private" - - Project team owners get OWNER access. - - Default if left blank. - - "projectPrivate" - - Project team members get access according to their roles. - - "publicRead" - - Project team owners get OWNER access. - - All Users get READER access. - - "publicReadWrite" - - Project team owners get OWNER access. - - All Users get WRITER access. - ---gcs-bucket-policy-only - -Access checks should use bucket-level IAM policies. - -If you want to upload objects to a bucket with Bucket Policy Only set -then you will need to set this. - -When it is set, rclone: - -- ignores ACLs set on buckets -- ignores ACLs set on objects -- creates buckets with Bucket Policy Only set - -Docs: https://cloud.google.com/storage/docs/bucket-policy-only - -Properties: - -- Config: bucket_policy_only -- Env Var: RCLONE_GCS_BUCKET_POLICY_ONLY -- Type: bool -- Default: false - ---gcs-location - -Location for the newly created buckets. - -Properties: - -- Config: location -- Env Var: RCLONE_GCS_LOCATION -- Type: string -- Required: false -- Examples: - - "" - - Empty for default location (US) - - "asia" - - Multi-regional location for Asia - - "eu" - - Multi-regional location for Europe - - "us" - - Multi-regional location for United States - - "asia-east1" - - Taiwan - - "asia-east2" - - Hong Kong - - "asia-northeast1" - - Tokyo - - "asia-northeast2" - - Osaka - - "asia-northeast3" - - Seoul - - "asia-south1" - - Mumbai - - "asia-south2" - - Delhi - - "asia-southeast1" - - Singapore - - "asia-southeast2" - - Jakarta - - "australia-southeast1" - - Sydney - - "australia-southeast2" - - Melbourne - - "europe-north1" - - Finland - - "europe-west1" - - Belgium - - "europe-west2" - - London - - "europe-west3" - - Frankfurt - - "europe-west4" - - Netherlands - - "europe-west6" - - Zürich - - "europe-central2" - - Warsaw - - "us-central1" - - Iowa - - "us-east1" - - South Carolina - - "us-east4" - - Northern Virginia - - "us-west1" - - Oregon - - "us-west2" - - California - - "us-west3" - - Salt Lake City - - "us-west4" - - Las Vegas - - "northamerica-northeast1" - - Montréal - - "northamerica-northeast2" - - Toronto - - "southamerica-east1" - - São Paulo - - "southamerica-west1" - - Santiago - - "asia1" - - Dual region: asia-northeast1 and asia-northeast2. 
-    - "eur4"
-        - Dual region: europe-north1 and europe-west4.
-    - "nam4"
-        - Dual region: us-central1 and us-east1.
-
---gcs-storage-class
-
-The storage class to use when storing objects in Google Cloud Storage.
-
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_GCS_STORAGE_CLASS
-- Type: string
-- Required: false
-- Examples:
-    - ""
-        - Default
-    - "MULTI_REGIONAL"
-        - Multi-regional storage class
-    - "REGIONAL"
-        - Regional storage class
-    - "NEARLINE"
-        - Nearline storage class
-    - "COLDLINE"
-        - Coldline storage class
-    - "ARCHIVE"
-        - Archive storage class
-    - "DURABLE_REDUCED_AVAILABILITY"
-        - Durable reduced availability storage class
-
---gcs-env-auth
-
-Get GCP IAM credentials from runtime (environment variables or instance
-meta data if no env vars).
-
-Only applies if service_account_file and service_account_credentials is
-blank.
-
-Properties:
-
-- Config: env_auth
-- Env Var: RCLONE_GCS_ENV_AUTH
-- Type: bool
-- Default: false
-- Examples:
-    - "false"
-        - Enter credentials in the next step.
-    - "true"
-        - Get GCP IAM credentials from the environment (env vars or
-          IAM).
-
-Advanced options
-
-Here are the Advanced options specific to google cloud storage (Google
-Cloud Storage (this is not Google Drive)).
-
---gcs-token
-
-OAuth Access Token as a JSON blob.
+   See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)

-Properties:
+   # Google Drive

-- Config: token
-- Env Var: RCLONE_GCS_TOKEN
-- Type: string
-- Required: false
+   Paths are specified as `drive:path`

---gcs-auth-url
+   Drive paths may be as deep as required, e.g. `drive:directory/subdirectory`.

-Auth server URL.
+   ## Configuration

-Leave blank to use the provider defaults.
+   The initial setup for drive involves getting a token from Google drive
+   which you need to do in your browser. `rclone config` walks you
+   through it.

-Properties:
+   Here is an example of how to make a remote called `remote`. First run:

-- Config: auth_url
-- Env Var: RCLONE_GCS_AUTH_URL
-- Type: string
-- Required: false
+       rclone config

---gcs-token-url
+   This will guide you through an interactive setup process:

-Token server url.
+       No remotes found, make a new one?
+       n) New remote
+       r) Rename remote
+       c) Copy remote
+       s) Set configuration password
+       q) Quit config
+       n/r/c/s/q> n
+       name> remote
+       Type of storage to configure.
+       Choose a number from below, or type in your own value
+       [snip]
+       XX / Google Drive
+          \ "drive"
+       [snip]
+       Storage> drive
+       Google Application Client Id - leave blank normally.
+       client_id>
+       Google Application Client Secret - leave blank normally.
+       client_secret>
+       Scope that rclone should use when requesting access from drive.
+       Choose a number from below, or type in your own value
+        1 / Full access all files, excluding Application Data Folder.
+          \ "drive"
+        2 / Read-only access to file metadata and file contents.
+          \ "drive.readonly"
+          / Access to files created by rclone only.
+        3 | These are visible in the drive website.
+          | File authorization is revoked when the user deauthorizes the app.
+          \ "drive.file"
+          / Allows read and write access to the Application Data folder.
+        4 | This is not visible in the drive website.
+          \ "drive.appfolder"
+          / Allows read-only access to file metadata but
+        5 | does not allow any access to read or download file content.
+          \ "drive.metadata.readonly"
+       scope> 1
+       Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
+       service_account_file>
+       Remote config
+       Use web browser to automatically authenticate rclone with remote?
+        * Say Y if the machine running rclone has a web browser you can use
+        * Say N if running rclone on a (remote) machine without web browser access
+       If not sure try Y. If Y failed, try N.
+       y) Yes
+       n) No
+       y/n> y
+       If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+       Log in and authorize rclone for access
+       Waiting for code...
+       Got code
+       Configure this as a Shared Drive (Team Drive)?
+       y) Yes
+       n) No
+       y/n> n
+       --------------------
+       [remote]
+       client_id =
+       client_secret =
+       scope = drive
+       root_folder_id =
+       service_account_file =
+       token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2014-03-16T13:57:58.955387075Z"}
+       --------------------
+       y) Yes this is OK
+       e) Edit this remote
+       d) Delete this remote
+       y/e/d> y

-Leave blank to use the provider defaults.
-
-Properties:
+   See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
+   machine with no Internet browser available.

-- Config: token_url
-- Env Var: RCLONE_GCS_TOKEN_URL
-- Type: string
-- Required: false
+   Note that rclone runs a webserver on your local machine to collect the
+   token as returned from Google if using web browser to automatically
+   authenticate. This only
+   runs from the moment it opens your browser to the moment you get back
+   the verification code. This is on `http://127.0.0.1:53682/` and it
+   may require you to unblock it temporarily if you are running a host
+   firewall, or use manual mode.

---gcs-directory-markers
+   You can then use it like this,

-Upload an empty object with a trailing slash when a new directory is
-created
+   List directories in top level of your drive

-Empty folders are unsupported for bucket based remotes, this option
-creates an empty object ending with "/", to persist the folder.
+       rclone lsd remote:

-Properties:
+   List all the files in your drive

-- Config: directory_markers
-- Env Var: RCLONE_GCS_DIRECTORY_MARKERS
-- Type: bool
-- Default: false
+       rclone ls remote:

---gcs-no-check-bucket
+   To copy a local directory to a drive directory called backup

-If set, don't attempt to check the bucket exists or create it.
+       rclone copy /home/source remote:backup

-This can be useful when trying to minimise the number of transactions
-rclone does if you know the bucket exists already.
+   ### Scopes

-Properties:
+   Rclone allows you to select which scope you would like for rclone to
+   use. This changes what type of token is granted to rclone. [The
+   scopes are defined
+   here](https://developers.google.com/drive/v3/web/about-auth).

-- Config: no_check_bucket
-- Env Var: RCLONE_GCS_NO_CHECK_BUCKET
-- Type: bool
-- Default: false
+   The scopes are:

---gcs-decompress
+   #### drive

-If set this will decompress gzip encoded objects.
+   This is the default scope and allows full access to all files, except
+   for the Application Data Folder (see below).

-It is possible to upload objects to GCS with "Content-Encoding: gzip"
-set. Normally rclone will download these files as compressed objects.
+   Choose this one if you aren't sure.

-If this flag is set then rclone will decompress these files with
-"Content-Encoding: gzip" as they are received. This means that rclone
-can't check the size and hash but the file contents will be
-decompressed.
+   #### drive.readonly

-Properties:
+   This allows read only access to all files. Files may be listed and
+   downloaded but not uploaded, renamed or deleted.
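+
+   For example, a sketch of a config file entry for a read-only remote,
+   assuming the OAuth flow has already been completed (the remote name
+   and token are illustrative):
+
+       [gdrive-ro]
+       type = drive
+       scope = drive.readonly
+       token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"XXX"}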
-- Config: decompress -- Env Var: RCLONE_GCS_DECOMPRESS -- Type: bool -- Default: false + #### drive.file ---gcs-endpoint + With this scope rclone can read/view/modify only those files and + folders it creates. -Endpoint for the service. + So if you uploaded files to drive via the web interface (or any other + means) they will not be visible to rclone. -Leave blank normally. + This can be useful if you are using rclone to backup data and you want + to be sure confidential data on your drive is not visible to rclone. -Properties: + Files created with this scope are visible in the web interface. -- Config: endpoint -- Env Var: RCLONE_GCS_ENDPOINT -- Type: string -- Required: false + #### drive.appfolder ---gcs-encoding + This gives rclone its own private area to store files. Rclone will + not be able to see any other files on your drive and you won't be able + to see rclone's files from the web interface either. + + #### drive.metadata.readonly + + This allows read only access to file names only. It does not allow + rclone to download or upload data, or rename or delete files or + directories. + + ### Root folder ID + + This option has been moved to the advanced section. You can set the `root_folder_id` for rclone. This is the directory + (identified by its `Folder ID`) that rclone considers to be the root + of your drive. + + Normally you will leave this blank and rclone will determine the + correct root to use itself. -The encoding for the backend. + However you can set this to restrict rclone to a specific folder + hierarchy or to access data within the "Computers" tab on the drive + web interface (where files from Google's Backup and Sync desktop + program go). -See the encoding section in the overview for more info. + In order to do this you will have to find the `Folder ID` of the + directory you wish rclone to display. This will be the last segment + of the URL when you open the relevant folder in the drive web + interface. -Properties: - -- Config: encoding -- Env Var: RCLONE_GCS_ENCODING -- Type: MultiEncoder -- Default: Slash,CrLf,InvalidUtf8,Dot + So if the folder you want rclone to use has a URL which looks like + `https://drive.google.com/drive/folders/1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh` + in the browser, then you use `1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh` as + the `root_folder_id` in the config. -Limitations - -rclone about is not supported by the Google Cloud Storage backend. -Backends without this capability cannot determine free space for an -rclone mount or use policy mfs (most free space) as a member of an -rclone union remote. - -See List of backends that do not support rclone about and rclone about - -Google Drive - -Paths are specified as drive:path - -Drive paths may be as deep as required, e.g. -drive:directory/subdirectory. - -Configuration - -The initial setup for drive involves getting a token from Google drive -which you need to do in your browser. rclone config walks you through -it. - -Here is an example of how to make a remote called remote. First run: - - rclone config + **NB** folders under the "Computers" tab seem to be read only (drive + gives a 500 error) when using rclone. -This will guide you through an interactive setup process: + There doesn't appear to be an API to discover the folder IDs of the + "Computers" tab - please contact us if you know otherwise! - No remotes found, make a new one? - n) New remote - r) Rename remote - c) Copy remote - s) Set configuration password - q) Quit config - n/r/c/s/q> n - name> remote - Type of storage to configure. 
- Choose a number from below, or type in your own value - [snip] - XX / Google Drive - \ "drive" - [snip] - Storage> drive - Google Application Client Id - leave blank normally. - client_id> - Google Application Client Secret - leave blank normally. - client_secret> - Scope that rclone should use when requesting access from drive. - Choose a number from below, or type in your own value - 1 / Full access all files, excluding Application Data Folder. - \ "drive" - 2 / Read-only access to file metadata and file contents. - \ "drive.readonly" - / Access to files created by rclone only. - 3 | These are visible in the drive website. - | File authorization is revoked when the user deauthorizes the app. - \ "drive.file" - / Allows read and write access to the Application Data folder. - 4 | This is not visible in the drive website. - \ "drive.appfolder" - / Allows read-only access to file metadata but - 5 | does not allow any access to read or download file content. - \ "drive.metadata.readonly" - scope> 1 - Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login. - service_account_file> - Remote config - Use web browser to automatically authenticate rclone with remote? - * Say Y if the machine running rclone has a web browser you can use - * Say N if running rclone on a (remote) machine without web browser access - If not sure try Y. If Y failed, try N. - y) Yes - n) No - y/n> y - If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth - Log in and authorize rclone for access - Waiting for code... - Got code - Configure this as a Shared Drive (Team Drive)? - y) Yes - n) No - y/n> n - -------------------- - [remote] - client_id = - client_secret = - scope = drive - root_folder_id = - service_account_file = - token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2014-03-16T13:57:58.955387075Z"} - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y - -See the remote setup docs for how to set it up on a machine with no -Internet browser available. - -Note that rclone runs a webserver on your local machine to collect the -token as returned from Google if using web browser to automatically -authenticate. This only runs from the moment it opens your browser to -the moment you get back the verification code. This is on -http://127.0.0.1:53682/ and it may require you to unblock it temporarily -if you are running a host firewall, or use manual mode. - -You can then use it like this, - -List directories in top level of your drive - - rclone lsd remote: - -List all the files in your drive - - rclone ls remote: - -To copy a local directory to a drive directory called backup - - rclone copy /home/source remote:backup - -Scopes - -Rclone allows you to select which scope you would like for rclone to -use. This changes what type of token is granted to rclone. The scopes -are defined here. - -The scope are - -drive - -This is the default scope and allows full access to all files, except -for the Application Data Folder (see below). - -Choose this one if you aren't sure. - -drive.readonly - -This allows read only access to all files. Files may be listed and -downloaded but not uploaded, renamed or deleted. - -drive.file - -With this scope rclone can read/view/modify only those files and folders -it creates. - -So if you uploaded files to drive via the web interface (or any other -means) they will not be visible to rclone. 
- -This can be useful if you are using rclone to backup data and you want -to be sure confidential data on your drive is not visible to rclone. - -Files created with this scope are visible in the web interface. - -drive.appfolder - -This gives rclone its own private area to store files. Rclone will not -be able to see any other files on your drive and you won't be able to -see rclone's files from the web interface either. - -drive.metadata.readonly - -This allows read only access to file names only. It does not allow -rclone to download or upload data, or rename or delete files or -directories. + Note also that rclone can't access any data under the "Backups" tab on + the google drive web interface yet. -Root folder ID + ### Service Account support -This option has been moved to the advanced section. You can set the -root_folder_id for rclone. This is the directory (identified by its -Folder ID) that rclone considers to be the root of your drive. + You can set up rclone with Google Drive in an unattended mode, + i.e. not tied to a specific end-user Google account. This is useful + when you want to synchronise files onto machines that don't have + actively logged-in users, for example build machines. -Normally you will leave this blank and rclone will determine the correct -root to use itself. + To use a Service Account instead of OAuth2 token flow, enter the path + to your Service Account credentials at the `service_account_file` + prompt during `rclone config` and rclone won't use the browser based + authentication flow. If you'd rather stuff the contents of the + credentials file into the rclone config file, you can set + `service_account_credentials` with the actual contents of the file + instead, or set the equivalent environment variable. -However you can set this to restrict rclone to a specific folder -hierarchy or to access data within the "Computers" tab on the drive web -interface (where files from Google's Backup and Sync desktop program -go). + #### Use case - Google Apps/G-suite account and individual Drive -In order to do this you will have to find the Folder ID of the directory -you wish rclone to display. This will be the last segment of the URL -when you open the relevant folder in the drive web interface. + Let's say that you are the administrator of a Google Apps (old) or + G-suite account. + The goal is to store data on an individual's Drive account, who IS + a member of the domain. + We'll call the domain **example.com**, and the user + **foo@example.com**. -So if the folder you want rclone to use has a URL which looks like -https://drive.google.com/drive/folders/1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh -in the browser, then you use 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh as the -root_folder_id in the config. + There's a few steps we need to go through to accomplish this: -NB folders under the "Computers" tab seem to be read only (drive gives a -500 error) when using rclone. + ##### 1. Create a service account for example.com + - To create a service account and obtain its credentials, go to the + [Google Developer Console](https://console.developers.google.com). + - You must have a project - create one if you don't. + - Then go to "IAM & admin" -> "Service Accounts". + - Use the "Create Service Account" button. Fill in "Service account name" + and "Service account ID" with something that identifies your client. + - Select "Create And Continue". Step 2 and 3 are optional. + - These credentials are what rclone will use for authentication. 
+   If you ever need to remove access, press the "Delete service
+   account key" button.

-There doesn't appear to be an API to discover the folder IDs of the
-"Computers" tab - please contact us if you know otherwise!
+   ##### 2. Allowing API access to example.com Google Drive
+   - Go to example.com's admin console
+   - Go into "Security" (or use the search bar)
+   - Select "Show more" and then "Advanced settings"
+   - Select "Manage API client access" in the "Authentication" section
+   - In the "Client Name" field enter the service account's
+     "Client ID" - this can be found in the Developer Console under
+     "IAM & Admin" -> "Service Accounts", then "View Client ID" for
+     the newly created service account.
+     It is a ~21 character numerical string.
+   - In the next field, "One or More API Scopes", enter
+     `https://www.googleapis.com/auth/drive`
+     to grant access to Google Drive specifically.

-Note also that rclone can't access any data under the "Backups" tab on
-the google drive web interface yet.
+   ##### 3. Configure rclone, assuming a new install

-Service Account support
+       rclone config
+
+       n/s/q> n # New
+       name>gdrive # Gdrive is an example name
+       Storage> # Select the number shown for Google Drive
+       client_id> # Can be left blank
+       client_secret> # Can be left blank
+       scope> # Select your scope, 1 for example
+       root_folder_id> # Can be left blank
+       service_account_file> /home/foo/myJSONfile.json # This is where the JSON file goes!
+       y/n> # Auto config, n

-You can set up rclone with Google Drive in an unattended mode, i.e. not
-tied to a specific end-user Google account. This is useful when you want
-to synchronise files onto machines that don't have actively logged-in
-users, for example build machines.
-
-To use a Service Account instead of OAuth2 token flow, enter the path to
-your Service Account credentials at the service_account_file prompt
-during rclone config and rclone won't use the browser based
-authentication flow. If you'd rather stuff the contents of the
-credentials file into the rclone config file, you can set
-service_account_credentials with the actual contents of the file
-instead, or set the equivalent environment variable.
+   ##### 4. Verify that it's working
+   - `rclone -v --drive-impersonate foo@example.com lsf gdrive:backup`
+   - The arguments do:
+     - `-v` - verbose logging
+     - `--drive-impersonate foo@example.com` - this is what does
+       the magic, pretending to be user foo.
+     - `lsf` - list files in a parsing friendly way
+     - `gdrive:backup` - use the remote called gdrive, work in
+       the folder named backup.

-Use case - Google Apps/G-suite account and individual Drive
+   Note: in case you configured a specific root folder on gdrive and rclone is unable to access the contents of that folder when using `--drive-impersonate`, do this instead:
+   - in the gdrive web interface, share your root folder with the user/email of the new Service Account you created/selected at step #1
+   - use rclone without specifying the `--drive-impersonate` option, like this:
+     `rclone -v lsf gdrive:backup`

-Let's say that you are the administrator of a Google Apps (old) or
-G-suite account. The goal is to store data on an individual's Drive
-account, who IS a member of the domain. We'll call the domain
-example.com, and the user foo@example.com.
-
-There's a few steps we need to go through to accomplish this:
-
-1. Create a service account for example.com
-
-- To create a service account and obtain its credentials, go to the
-  Google Developer Console.
-- You must have a project - create one if you don't.
-- Then go to "IAM & admin" -> "Service Accounts".
-- Use the "Create Service Account" button. Fill in "Service account
-  name" and "Service account ID" with something that identifies your
-  client.
-- Select "Create And Continue". Step 2 and 3 are optional.
-- These credentials are what rclone will use for authentication. If
-  you ever need to remove access, press the "Delete service account
-  key" button.
-
-2. 
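+
+   The resulting config file entry would look something like this (the
+   remote name and credentials path are illustrative):
+
+       [gdrive]
+       type = drive
+       scope = drive
+       service_account_file = /home/foo/myJSONfile.json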
Allowing API access to example.com Google Drive - -- Go to example.com's admin console -- Go into "Security" (or use the search bar) -- Select "Show more" and then "Advanced settings" -- Select "Manage API client access" in the "Authentication" section -- In the "Client Name" field enter the service account's "Client ID" - - this can be found in the Developer Console under "IAM & Admin" -> - "Service Accounts", then "View Client ID" for the newly created - service account. It is a ~21 character numerical string. -- In the next field, "One or More API Scopes", enter - https://www.googleapis.com/auth/drive to grant access to Google - Drive specifically. - -3. Configure rclone, assuming a new install - - rclone config - - n/s/q> n # New - name>gdrive # Gdrive is an example name - Storage> # Select the number shown for Google Drive - client_id> # Can be left blank - client_secret> # Can be left blank - scope> # Select your scope, 1 for example - root_folder_id> # Can be left blank - service_account_file> /home/foo/myJSONfile.json # This is where the JSON file goes! - y/n> # Auto config, n - -4. Verify that it's working - -- rclone -v --drive-impersonate foo@example.com lsf gdrive:backup -- The arguments do: - - -v - verbose logging - - --drive-impersonate foo@example.com - this is what does the - magic, pretending to be user foo. - - lsf - list files in a parsing friendly way - - gdrive:backup - use the remote called gdrive, work in the folder - named backup. - -Note: in case you configured a specific root folder on gdrive and rclone -is unable to access the contents of that folder when using ---drive-impersonate, do this instead: - in the gdrive web interface, -share your root folder with the user/email of the new Service Account -you created/selected at step #1 - use rclone without specifying the ---drive-impersonate option, like this: rclone -v lsf gdrive:backup - -Shared drives (team drives) - -If you want to configure the remote to point to a Google Shared Drive -(previously known as Team Drives) then answer y to the question -Configure this as a Shared Drive (Team Drive)?. - -This will fetch the list of Shared Drives from google and allow you to -configure which one you want to use. You can also type in a Shared Drive -ID if you prefer. + Note: in case you configured a specific root folder on gdrive and rclone is unable to access the contents of that folder when using `--drive-impersonate`, do this instead: + - in the gdrive web interface, share your root folder with the user/email of the new Service Account you created/selected at step #1 + - use rclone without specifying the `--drive-impersonate` option, like this: + `rclone -v lsf gdrive:backup` -For example: - - Configure this as a Shared Drive (Team Drive)? - y) Yes - n) No - y/n> y - Fetching Shared Drive list... 
-    Choose a number from below, or type in your own value
-    1 / Rclone Test
-    \ "xxxxxxxxxxxxxxxxxxxx"
-    2 / Rclone Test 2
-    \ "yyyyyyyyyyyyyyyyyyyy"
-    3 / Rclone Test 3
-    \ "zzzzzzzzzzzzzzzzzzzz"
-    Enter a Shared Drive ID> 1
-    --------------------
-    [remote]
-    client_id =
-    client_secret =
-    token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
-    team_drive = xxxxxxxxxxxxxxxxxxxx
-    --------------------
-    y) Yes this is OK
-    e) Edit this remote
-    d) Delete this remote
-    y/e/d> y
-
---fast-list
-
-This remote supports --fast-list which allows you to use fewer
-transactions in exchange for more memory. See the rclone docs for more
-details.
-
-It does this by combining multiple list calls into a single API request.
-
-This works by combining many '%s' in parents filters into one
-expression. To list the contents of directories a, b and c, the
-following requests will be send by the regular List function:
-
-    trashed=false and 'a' in parents
-    trashed=false and 'b' in parents
-    trashed=false and 'c' in parents
-
-These can now be combined into a single request:
-
-    trashed=false and ('a' in parents or 'b' in parents or 'c' in parents)
-
-The implementation of ListR will put up to 50 parents filters into one
-request. It will use the --checkers value to specify the number of
-requests to run in parallel.
-
-In tests, these batch requests were up to 20x faster than the regular
-method. Running the following command against different sized folders
-gives:
-
-    rclone lsjson -vv -R --checkers=6 gdrive:folder
-
-small folder (220 directories, 700 files):
-
-- without --fast-list: 38s
-- with --fast-list: 10s
-
-large folder (10600 directories, 39000 files):
-
-- without --fast-list: 22:05 min
-- with --fast-list: 58s
-
-Modified time

-Google drive stores modification times accurate to 1 ms.
+   ### Shared drives (team drives)

-Restricted filename characters
+   If you want to configure the remote to point to a Google Shared Drive
+   (previously known as Team Drives) then answer `y` to the question
+   `Configure this as a Shared Drive (Team Drive)?`.

-Only Invalid UTF-8 bytes will be replaced, as they can't be used in JSON
-strings.
+   This will fetch the list of Shared Drives from google and allow you to
+   configure which one you want to use. You can also type in a Shared
+   Drive ID if you prefer.

-In contrast to other backends, / can also be used in names and . or ..
-are valid names.
+   For example:

-Revisions
+       Configure this as a Shared Drive (Team Drive)?
+       y) Yes
+       n) No
+       y/n> y
+       Fetching Shared Drive list...
+       Choose a number from below, or type in your own value
+       1 / Rclone Test
+         \ "xxxxxxxxxxxxxxxxxxxx"
+       2 / Rclone Test 2
+         \ "yyyyyyyyyyyyyyyyyyyy"
+       3 / Rclone Test 3
+         \ "zzzzzzzzzzzzzzzzzzzz"
+       Enter a Shared Drive ID> 1
+       --------------------
+       [remote]
+       client_id =
+       client_secret =
+       token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
+       team_drive = xxxxxxxxxxxxxxxxxxxx
+       --------------------
+       y) Yes this is OK
+       e) Edit this remote
+       d) Delete this remote
+       y/e/d> y

-Google drive stores revisions of files. When you upload a change to an
-existing file to google drive using rclone it will create a new revision
-of that file.
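+
+   A Shared Drive can also be selected on the fly with a connection
+   string, overriding the config without editing it (the Shared Drive ID
+   is illustrative):
+
+       rclone lsd "remote,team_drive=xxxxxxxxxxxxxxxxxxxx:"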
-Revisions follow the standard google policy which at time of writing was
+   ### --fast-list

-- They are deleted after 30 days or 100 revisions (whatever comes
-  first).
-- They do not count towards a user storage quota.
+   This remote supports `--fast-list` which allows you to use fewer
+   transactions in exchange for more memory. See the [rclone
+   docs](https://rclone.org/docs/#fast-list) for more details.

-Deleting files
+   It does this by combining multiple `list` calls into a single API request.

-By default rclone will send all files to the trash when deleting files.
-If deleting them permanently is required then use the
---drive-use-trash=false flag, or set the equivalent environment
-variable.
+   This works by combining many `'%s' in parents` filters into one expression.
+   To list the contents of directories a, b and c, the following requests will be sent by the regular `List` function:

-Shortcuts
+       trashed=false and 'a' in parents
+       trashed=false and 'b' in parents
+       trashed=false and 'c' in parents

-In March 2020 Google introduced a new feature in Google Drive called
-drive shortcuts (API). These will (by September 2020) replace the
-ability for files or folders to be in multiple folders at once.
+   These can now be combined into a single request:

-Shortcuts are files that link to other files on Google Drive somewhat
-like a symlink in unix, except they point to the underlying file data
-(e.g. the inode in unix terms) so they don't break if the source is
-renamed or moved about.
+       trashed=false and ('a' in parents or 'b' in parents or 'c' in parents)

-By default rclone treats these as follows.

-For shortcuts pointing to files:
+   The implementation of `ListR` will put up to 50 `parents` filters into one request.
+   It will use the `--checkers` value to specify the number of requests to run in parallel.

-- When listing a file shortcut appears as the destination file.
-- When downloading the contents of the destination file is downloaded.
-- When updating shortcut file with a non shortcut file, the shortcut
-  is removed then a new file is uploaded in place of the shortcut.
-- When server-side moving (renaming) the shortcut is renamed, not the
-  destination file.
-- When server-side copying the shortcut is copied, not the contents of
-  the shortcut. (unless --drive-copy-shortcut-content is in use in
-  which case the contents of the shortcut gets copied).
-- When deleting the shortcut is deleted not the linked file.
-- When setting the modification time, the modification time of the
-  linked file will be set.
+   In tests, these batch requests were up to 20x faster than the regular method.
+   Running the following command against different sized folders gives:

-For shortcuts pointing to folders:
+       rclone lsjson -vv -R --checkers=6 gdrive:folder

-- When listing the shortcut appears as a folder and that folder will
-  contain the contents of the linked folder appear (including any sub
-  folders)
-- When downloading the contents of the linked folder and sub contents
-  are downloaded
-- When uploading to a shortcut folder the file will be placed in the
-  linked folder
-- When server-side moving (renaming) the shortcut is renamed, not the
-  destination folder
-- When server-side copying the contents of the linked folder is
-  copied, not the shortcut.
-- When deleting with rclone rmdir or rclone purge the shortcut is
-  deleted not the linked folder.
-- NB When deleting with rclone remove or rclone mount the contents of
-  the linked folder will be deleted.
-The rclone backend command can be used to create shortcuts. + small folder (220 directories, 700 files): -Shortcuts can be completely ignored with the --drive-skip-shortcuts flag -or the corresponding skip_shortcuts configuration setting. + - without `--fast-list`: 38s + - with `--fast-list`: 10s -Emptying trash + large folder (10600 directories, 39000 files): -If you wish to empty your trash you can use the rclone cleanup remote: -command which will permanently delete all your trashed files. This -command does not take any path arguments. + - without `--fast-list`: 22:05 min + - with `--fast-list`: 58s -Note that Google Drive takes some time (minutes to days) to empty the -trash even though the command returns within a few seconds. No output is -echoed, so there will be no confirmation even using -v or -vv. + ### Modified time -Quota information + Google drive stores modification times accurate to 1 ms. -To view your current quota you can use the rclone about remote: command -which will display your usage limit (quota), the usage in Google Drive, -the size of all files in the Trash and the space used by other Google -services such as Gmail. This command does not take any path arguments. + ### Restricted filename characters -Import/Export of google documents + Only Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), + as they can't be used in JSON strings. -Google documents can be exported from and uploaded to Google Drive. + In contrast to other backends, `/` can also be used in names and `.` + or `..` are valid names. -When rclone downloads a Google doc it chooses a format to download -depending upon the --drive-export-formats setting. By default the export -formats are docx,xlsx,pptx,svg which are a sensible default for an -editable document. + ### Revisions -When choosing a format, rclone runs down the list provided in order and -chooses the first file format the doc can be exported as from the list. -If the file can't be exported to a format on the formats list, then -rclone will choose a format from the default list. + Google drive stores revisions of files. When you upload a change to + an existing file to google drive using rclone it will create a new + revision of that file. -If you prefer an archive copy then you might use ---drive-export-formats pdf, or if you prefer openoffice/libreoffice -formats you might use --drive-export-formats ods,odt,odp. + Revisions follow the standard google policy which at time of writing + was -Note that rclone adds the extension to the google doc, so if it is -called My Spreadsheet on google docs, it will be exported as -My Spreadsheet.xlsx or My Spreadsheet.pdf etc. + * They are deleted after 30 days or 100 revisions (whatever comes first). + * They do not count towards a user storage quota. -When importing files into Google Drive, rclone will convert all files -with an extension in --drive-import-formats to their associated document -type. rclone will not convert any files by default, since the conversion -is lossy process. + ### Deleting files -The conversion must result in a file with the same extension when the ---drive-export-formats rules are applied to the uploaded document. + By default rclone will send all files to the trash when deleting + files. If deleting them permanently is required then use the + `--drive-use-trash=false` flag, or set the equivalent environment + variable. -Here are some examples for allowed and prohibited conversions. 
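+
+   For example, to delete the contents of a directory permanently rather
+   than sending it to the trash, either of the following sketches would
+   work (the remote name `gdrive:` and the path are placeholders):
+
+       # Flag form
+       rclone delete --drive-use-trash=false gdrive:path/to/dir
+
+       # Equivalent environment variable form
+       RCLONE_DRIVE_USE_TRASH=false rclone delete gdrive:path/to/dir
+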
+ ### Shortcuts - export-formats import-formats Upload Ext Document Ext Allowed - ---------------- ---------------- ------------ -------------- --------- - odt odt odt odt Yes - odt docx,odt odt odt Yes - docx docx docx Yes - odt odt docx No - odt,docx docx,odt docx odt No - docx,odt docx,odt docx docx Yes - docx,odt docx,odt odt docx No + In March 2020 Google introduced a new feature in Google Drive called + [drive shortcuts](https://support.google.com/drive/answer/9700156) + ([API](https://developers.google.com/drive/api/v3/shortcuts)). These + will (by September 2020) [replace the ability for files or folders to + be in multiple folders at once](https://cloud.google.com/blog/products/g-suite/simplifying-google-drives-folder-structure-and-sharing-models). -This limitation can be disabled by specifying ---drive-allow-import-name-change. When using this flag, rclone can -convert multiple files types resulting in the same document type at -once, e.g. with --drive-import-formats docx,odt,txt, all files having -these extension would result in a document represented as a docx file. -This brings the additional risk of overwriting a document, if multiple -files have the same stem. Many rclone operations will not handle this -name change in any way. They assume an equal name when copying files and -might copy the file again or delete them when the name changes. + Shortcuts are files that link to other files on Google Drive somewhat + like a symlink in unix, except they point to the underlying file data + (e.g. the inode in unix terms) so they don't break if the source is + renamed or moved about. -Here are the possible export extensions with their corresponding mime -types. Most of these can also be used for importing, but there more that -are not listed here. Some of these additional ones might only be -available when the operating system provides the correct MIME type -entries. + By default rclone treats these as follows. -This list can be changed by Google Drive at any time and might not -represent the currently available conversions. + For shortcuts pointing to files: - -------------------------------------------------------------------------------------------------------------------------- - Extension Mime Type Description - ------------------- --------------------------------------------------------------------------- -------------------------- - bmp image/bmp Windows Bitmap format + - When listing a file shortcut appears as the destination file. + - When downloading the contents of the destination file is downloaded. + - When updating shortcut file with a non shortcut file, the shortcut is removed then a new file is uploaded in place of the shortcut. + - When server-side moving (renaming) the shortcut is renamed, not the destination file. + - When server-side copying the shortcut is copied, not the contents of the shortcut. (unless `--drive-copy-shortcut-content` is in use in which case the contents of the shortcut gets copied). + - When deleting the shortcut is deleted not the linked file. + - When setting the modification time, the modification time of the linked file will be set. 
- csv text/csv Standard CSV format for - Spreadsheets + For shortcuts pointing to folders: - doc application/msword Classic Word file + - When listing the shortcut appears as a folder and that folder will contain the contents of the linked folder appear (including any sub folders) + - When downloading the contents of the linked folder and sub contents are downloaded + - When uploading to a shortcut folder the file will be placed in the linked folder + - When server-side moving (renaming) the shortcut is renamed, not the destination folder + - When server-side copying the contents of the linked folder is copied, not the shortcut. + - When deleting with `rclone rmdir` or `rclone purge` the shortcut is deleted not the linked folder. + - **NB** When deleting with `rclone remove` or `rclone mount` the contents of the linked folder will be deleted. - docx application/vnd.openxmlformats-officedocument.wordprocessingml.document Microsoft Office Document + The [rclone backend](https://rclone.org/commands/rclone_backend/) command can be used to create shortcuts. - epub application/epub+zip E-book format + Shortcuts can be completely ignored with the `--drive-skip-shortcuts` flag + or the corresponding `skip_shortcuts` configuration setting. - html text/html An HTML Document + ### Emptying trash - jpg image/jpeg A JPEG Image File + If you wish to empty your trash you can use the `rclone cleanup remote:` + command which will permanently delete all your trashed files. This command + does not take any path arguments. - json application/vnd.google-apps.script+json JSON Text Format for - Google Apps scripts - - odp application/vnd.oasis.opendocument.presentation Openoffice Presentation - - ods application/vnd.oasis.opendocument.spreadsheet Openoffice Spreadsheet - - ods application/x-vnd.oasis.opendocument.spreadsheet Openoffice Spreadsheet - - odt application/vnd.oasis.opendocument.text Openoffice Document - - pdf application/pdf Adobe PDF Format - - pjpeg image/pjpeg Progressive JPEG Image - - png image/png PNG Image Format - - pptx application/vnd.openxmlformats-officedocument.presentationml.presentation Microsoft Office - Powerpoint - - rtf application/rtf Rich Text Format - - svg image/svg+xml Scalable Vector Graphics - Format - - tsv text/tab-separated-values Standard TSV format for - spreadsheets - - txt text/plain Plain Text - - wmf application/x-msmetafile Windows Meta File - - xls application/vnd.ms-excel Classic Excel file - - xlsx application/vnd.openxmlformats-officedocument.spreadsheetml.sheet Microsoft Office - Spreadsheet - - zip application/zip A ZIP file of HTML, Images - CSS - -------------------------------------------------------------------------------------------------------------------------- - -Google documents can also be exported as link files. These files will -open a browser window for the Google Docs website of that document when -opened. The link file extension has to be specified as a ---drive-export-formats parameter. They will match all available Google -Documents. - - Extension Description OS Support - ----------- ----------------------------------------- ---------------- - desktop freedesktop.org specified desktop entry Linux - link.html An HTML Document with a redirect All - url INI style link file macOS, Windows - webloc macOS specific XML format macOS - -Standard options - -Here are the Standard options specific to drive (Google Drive). - ---drive-client-id - -Google Application Client Id Setting your own is recommended. 
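+
+   As a sketch, emptying the trash and then checking the reported usage
+   (the remote name `gdrive:` is a placeholder):
+
+       # Permanently delete everything in this remote's trash
+       rclone cleanup gdrive:
+       # rclone about includes the trash size, which may take a while to drop
+       rclone about gdrive:
+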
See -https://rclone.org/drive/#making-your-own-client-id for how to create -your own. If you leave this blank, it will use an internal key which is -low performance. - -Properties: - -- Config: client_id -- Env Var: RCLONE_DRIVE_CLIENT_ID -- Type: string -- Required: false - ---drive-client-secret - -OAuth Client Secret. - -Leave blank normally. - -Properties: - -- Config: client_secret -- Env Var: RCLONE_DRIVE_CLIENT_SECRET -- Type: string -- Required: false - ---drive-scope - -Scope that rclone should use when requesting access from drive. - -Properties: - -- Config: scope -- Env Var: RCLONE_DRIVE_SCOPE -- Type: string -- Required: false -- Examples: - - "drive" - - Full access all files, excluding Application Data Folder. - - "drive.readonly" - - Read-only access to file metadata and file contents. - - "drive.file" - - Access to files created by rclone only. - - These are visible in the drive website. - - File authorization is revoked when the user deauthorizes the - app. - - "drive.appfolder" - - Allows read and write access to the Application Data folder. - - This is not visible in the drive website. - - "drive.metadata.readonly" - - Allows read-only access to file metadata but - - does not allow any access to read or download file content. - ---drive-service-account-file - -Service Account Credentials JSON file path. - -Leave blank normally. Needed only if you want use SA instead of -interactive login. - -Leading ~ will be expanded in the file name as will environment -variables such as ${RCLONE_CONFIG_DIR}. - -Properties: - -- Config: service_account_file -- Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_FILE -- Type: string -- Required: false - ---drive-alternate-export - -Deprecated: No longer needed. - -Properties: - -- Config: alternate_export -- Env Var: RCLONE_DRIVE_ALTERNATE_EXPORT -- Type: bool -- Default: false - -Advanced options - -Here are the Advanced options specific to drive (Google Drive). - ---drive-token - -OAuth Access Token as a JSON blob. - -Properties: - -- Config: token -- Env Var: RCLONE_DRIVE_TOKEN -- Type: string -- Required: false - ---drive-auth-url - -Auth server URL. - -Leave blank to use the provider defaults. - -Properties: - -- Config: auth_url -- Env Var: RCLONE_DRIVE_AUTH_URL -- Type: string -- Required: false - ---drive-token-url - -Token server url. - -Leave blank to use the provider defaults. - -Properties: - -- Config: token_url -- Env Var: RCLONE_DRIVE_TOKEN_URL -- Type: string -- Required: false - ---drive-root-folder-id - -ID of the root folder. Leave blank normally. - -Fill in to access "Computers" folders (see docs), or for rclone to use a -non root folder as its starting point. - -Properties: - -- Config: root_folder_id -- Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID -- Type: string -- Required: false - ---drive-service-account-credentials - -Service Account Credentials JSON blob. - -Leave blank normally. Needed only if you want use SA instead of -interactive login. - -Properties: - -- Config: service_account_credentials -- Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_CREDENTIALS -- Type: string -- Required: false - ---drive-team-drive - -ID of the Shared Drive (Team Drive). - -Properties: - -- Config: team_drive -- Env Var: RCLONE_DRIVE_TEAM_DRIVE -- Type: string -- Required: false - ---drive-auth-owner-only - -Only consider files owned by the authenticated user. - -Properties: - -- Config: auth_owner_only -- Env Var: RCLONE_DRIVE_AUTH_OWNER_ONLY -- Type: bool -- Default: false - ---drive-use-trash - -Send files to the trash instead of deleting permanently. 
- -Defaults to true, namely sending files to the trash. Use ---drive-use-trash=false to delete files permanently instead. - -Properties: - -- Config: use_trash -- Env Var: RCLONE_DRIVE_USE_TRASH -- Type: bool -- Default: true - ---drive-copy-shortcut-content - -Server side copy contents of shortcuts instead of the shortcut. - -When doing server side copies, normally rclone will copy shortcuts as -shortcuts. - -If this flag is used then rclone will copy the contents of shortcuts -rather than shortcuts themselves when doing server side copies. - -Properties: - -- Config: copy_shortcut_content -- Env Var: RCLONE_DRIVE_COPY_SHORTCUT_CONTENT -- Type: bool -- Default: false - ---drive-skip-gdocs - -Skip google documents in all listings. - -If given, gdocs practically become invisible to rclone. - -Properties: - -- Config: skip_gdocs -- Env Var: RCLONE_DRIVE_SKIP_GDOCS -- Type: bool -- Default: false - ---drive-skip-checksum-gphotos - -Skip MD5 checksum on Google photos and videos only. - -Use this if you get checksum errors when transferring Google photos or -videos. - -Setting this flag will cause Google photos and videos to return a blank -MD5 checksum. - -Google photos are identified by being in the "photos" space. - -Corrupted checksums are caused by Google modifying the image/video but -not updating the checksum. - -Properties: - -- Config: skip_checksum_gphotos -- Env Var: RCLONE_DRIVE_SKIP_CHECKSUM_GPHOTOS -- Type: bool -- Default: false - ---drive-shared-with-me - -Only show files that are shared with me. - -Instructs rclone to operate on your "Shared with me" folder (where -Google Drive lets you access the files and folders others have shared -with you). - -This works both with the "list" (lsd, lsl, etc.) and the "copy" commands -(copy, sync, etc.), and with all other commands too. - -Properties: - -- Config: shared_with_me -- Env Var: RCLONE_DRIVE_SHARED_WITH_ME -- Type: bool -- Default: false - ---drive-trashed-only - -Only show files that are in the trash. - -This will show trashed files in their original directory structure. - -Properties: - -- Config: trashed_only -- Env Var: RCLONE_DRIVE_TRASHED_ONLY -- Type: bool -- Default: false - ---drive-starred-only - -Only show files that are starred. - -Properties: - -- Config: starred_only -- Env Var: RCLONE_DRIVE_STARRED_ONLY -- Type: bool -- Default: false - ---drive-formats - -Deprecated: See export_formats. - -Properties: - -- Config: formats -- Env Var: RCLONE_DRIVE_FORMATS -- Type: string -- Required: false - ---drive-export-formats - -Comma separated list of preferred formats for downloading Google docs. - -Properties: - -- Config: export_formats -- Env Var: RCLONE_DRIVE_EXPORT_FORMATS -- Type: string -- Default: "docx,xlsx,pptx,svg" - ---drive-import-formats - -Comma separated list of preferred formats for uploading Google docs. - -Properties: - -- Config: import_formats -- Env Var: RCLONE_DRIVE_IMPORT_FORMATS -- Type: string -- Required: false - ---drive-allow-import-name-change - -Allow the filetype to change when uploading Google docs. - -E.g. file.doc to file.docx. This will confuse sync and reupload every -time. - -Properties: - -- Config: allow_import_name_change -- Env Var: RCLONE_DRIVE_ALLOW_IMPORT_NAME_CHANGE -- Type: bool -- Default: false - ---drive-use-created-date - -Use file created date instead of modified date. - -Useful when downloading data and you want the creation date used in -place of the last modified date. - -WARNING: This flag may have some unexpected consequences. 
- -When uploading to your drive all files will be overwritten unless they -haven't been modified since their creation. And the inverse will occur -while downloading. This side effect can be avoided by using the -"--checksum" flag. - -This feature was implemented to retain photos capture date as recorded -by google photos. You will first need to check the "Create a Google -Photos folder" option in your google drive settings. You can then copy -or move the photos locally and use the date the image was taken -(created) set as the modification date. - -Properties: - -- Config: use_created_date -- Env Var: RCLONE_DRIVE_USE_CREATED_DATE -- Type: bool -- Default: false - ---drive-use-shared-date - -Use date file was shared instead of modified date. - -Note that, as with "--drive-use-created-date", this flag may have -unexpected consequences when uploading/downloading files. - -If both this flag and "--drive-use-created-date" are set, the created -date is used. - -Properties: - -- Config: use_shared_date -- Env Var: RCLONE_DRIVE_USE_SHARED_DATE -- Type: bool -- Default: false - ---drive-list-chunk - -Size of listing chunk 100-1000, 0 to disable. - -Properties: - -- Config: list_chunk -- Env Var: RCLONE_DRIVE_LIST_CHUNK -- Type: int -- Default: 1000 - ---drive-impersonate - -Impersonate this user when using a service account. - -Properties: - -- Config: impersonate -- Env Var: RCLONE_DRIVE_IMPERSONATE -- Type: string -- Required: false - ---drive-upload-cutoff - -Cutoff for switching to chunked upload. - -Properties: - -- Config: upload_cutoff -- Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF -- Type: SizeSuffix -- Default: 8Mi - ---drive-chunk-size - -Upload chunk size. - -Must a power of 2 >= 256k. - -Making this larger will improve performance, but note that each chunk is -buffered in memory one per transfer. - -Reducing this will reduce memory usage but decrease performance. - -Properties: - -- Config: chunk_size -- Env Var: RCLONE_DRIVE_CHUNK_SIZE -- Type: SizeSuffix -- Default: 8Mi - ---drive-acknowledge-abuse - -Set to allow files which return cannotDownloadAbusiveFile to be -downloaded. - -If downloading a file returns the error "This file has been identified -as malware or spam and cannot be downloaded" with the error code -"cannotDownloadAbusiveFile" then supply this flag to rclone to indicate -you acknowledge the risks of downloading the file and rclone will -download it anyway. - -Note that if you are using service account it will need Manager -permission (not Content Manager) to for this flag to work. If the SA -does not have the right permission, Google will just ignore the flag. - -Properties: - -- Config: acknowledge_abuse -- Env Var: RCLONE_DRIVE_ACKNOWLEDGE_ABUSE -- Type: bool -- Default: false - ---drive-keep-revision-forever - -Keep new head revision of each file forever. - -Properties: - -- Config: keep_revision_forever -- Env Var: RCLONE_DRIVE_KEEP_REVISION_FOREVER -- Type: bool -- Default: false - ---drive-size-as-quota - -Show sizes as storage quota usage, not actual size. - -Show the size of a file as the storage quota used. This is the current -version plus any older versions that have been set to keep forever. - -WARNING: This flag may have some unexpected consequences. - -It is not recommended to set this flag in your config - the recommended -usage is using the flag form --drive-size-as-quota when doing rclone -ls/lsl/lsf/lsjson/etc only. - -If you do use this flag for syncing (not recommended) then you will need -to use --ignore size also. 
- -Properties: - -- Config: size_as_quota -- Env Var: RCLONE_DRIVE_SIZE_AS_QUOTA -- Type: bool -- Default: false - ---drive-v2-download-min-size - -If Object's are greater, use drive v2 API to download. - -Properties: - -- Config: v2_download_min_size -- Env Var: RCLONE_DRIVE_V2_DOWNLOAD_MIN_SIZE -- Type: SizeSuffix -- Default: off - ---drive-pacer-min-sleep - -Minimum time to sleep between API calls. - -Properties: - -- Config: pacer_min_sleep -- Env Var: RCLONE_DRIVE_PACER_MIN_SLEEP -- Type: Duration -- Default: 100ms - ---drive-pacer-burst - -Number of API calls to allow without sleeping. - -Properties: - -- Config: pacer_burst -- Env Var: RCLONE_DRIVE_PACER_BURST -- Type: int -- Default: 100 - ---drive-server-side-across-configs - -Deprecated: use --server-side-across-configs instead. - -Allow server-side operations (e.g. copy) to work across different drive -configs. - -This can be useful if you wish to do a server-side copy between two -different Google drives. Note that this isn't enabled by default because -it isn't easy to tell if it will work between any two configurations. - -Properties: - -- Config: server_side_across_configs -- Env Var: RCLONE_DRIVE_SERVER_SIDE_ACROSS_CONFIGS -- Type: bool -- Default: false - ---drive-disable-http2 - -Disable drive using http2. - -There is currently an unsolved issue with the google drive backend and -HTTP/2. HTTP/2 is therefore disabled by default for the drive backend -but can be re-enabled here. When the issue is solved this flag will be -removed. - -See: https://github.com/rclone/rclone/issues/3631 - -Properties: - -- Config: disable_http2 -- Env Var: RCLONE_DRIVE_DISABLE_HTTP2 -- Type: bool -- Default: true - ---drive-stop-on-upload-limit - -Make upload limit errors be fatal. - -At the time of writing it is only possible to upload 750 GiB of data to -Google Drive a day (this is an undocumented limit). When this limit is -reached Google Drive produces a slightly different error message. When -this flag is set it causes these errors to be fatal. These will stop the -in-progress sync. - -Note that this detection is relying on error message strings which -Google don't document so it may break in the future. - -See: https://github.com/rclone/rclone/issues/3857 - -Properties: - -- Config: stop_on_upload_limit -- Env Var: RCLONE_DRIVE_STOP_ON_UPLOAD_LIMIT -- Type: bool -- Default: false - ---drive-stop-on-download-limit - -Make download limit errors be fatal. - -At the time of writing it is only possible to download 10 TiB of data -from Google Drive a day (this is an undocumented limit). When this limit -is reached Google Drive produces a slightly different error message. -When this flag is set it causes these errors to be fatal. These will -stop the in-progress sync. - -Note that this detection is relying on error message strings which -Google don't document so it may break in the future. - -Properties: - -- Config: stop_on_download_limit -- Env Var: RCLONE_DRIVE_STOP_ON_DOWNLOAD_LIMIT -- Type: bool -- Default: false - ---drive-skip-shortcuts - -If set skip shortcut files. - -Normally rclone dereferences shortcut files making them appear as if -they are the original file (see the shortcuts section). If this flag is -set then rclone will ignore shortcut files completely. - -Properties: - -- Config: skip_shortcuts -- Env Var: RCLONE_DRIVE_SKIP_SHORTCUTS -- Type: bool -- Default: false - ---drive-skip-dangling-shortcuts - -If set skip dangling shortcut files. - -If this is set then rclone will not show any dangling shortcuts in -listings. 
- -Properties: - -- Config: skip_dangling_shortcuts -- Env Var: RCLONE_DRIVE_SKIP_DANGLING_SHORTCUTS -- Type: bool -- Default: false - ---drive-resource-key - -Resource key for accessing a link-shared file. - -If you need to access files shared with a link like this - - https://drive.google.com/drive/folders/XXX?resourcekey=YYY&usp=sharing - -Then you will need to use the first part "XXX" as the "root_folder_id" -and the second part "YYY" as the "resource_key" otherwise you will get -404 not found errors when trying to access the directory. - -See: https://developers.google.com/drive/api/guides/resource-keys - -This resource key requirement only applies to a subset of old files. - -Note also that opening the folder once in the web interface (with the -user you've authenticated rclone with) seems to be enough so that the -resource key is no needed. - -Properties: - -- Config: resource_key -- Env Var: RCLONE_DRIVE_RESOURCE_KEY -- Type: string -- Required: false - ---drive-encoding - -The encoding for the backend. - -See the encoding section in the overview for more info. - -Properties: - -- Config: encoding -- Env Var: RCLONE_DRIVE_ENCODING -- Type: MultiEncoder -- Default: InvalidUtf8 - ---drive-env-auth - -Get IAM credentials from runtime (environment variables or instance meta -data if no env vars). - -Only applies if service_account_file and service_account_credentials is -blank. - -Properties: - -- Config: env_auth -- Env Var: RCLONE_DRIVE_ENV_AUTH -- Type: bool -- Default: false -- Examples: - - "false" - - Enter credentials in the next step. - - "true" - - Get GCP IAM credentials from the environment (env vars or - IAM). - -Backend commands - -Here are the commands specific to the drive backend. - -Run them with - - rclone backend COMMAND remote: - -The help below will explain what arguments each command takes. - -See the backend command for more info on how to pass options and -arguments. - -These can be run on a running backend using the rc command -backend/command. - -get - -Get command for fetching the drive config parameters - - rclone backend get remote: [options] [+] - -This is a get command which will be used to fetch the various drive -config parameters - -Usage Examples: - - rclone backend get drive: [-o service_account_file] [-o chunk_size] - rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o chunk_size] - -Options: - -- "chunk_size": show the current upload chunk size -- "service_account_file": show the current service account file - -set - -Set command for updating the drive config parameters - - rclone backend set remote: [options] [+] - -This is a set command which will be used to update the various drive -config parameters - -Usage Examples: - - rclone backend set drive: [-o service_account_file=sa.json] [-o chunk_size=67108864] - rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json] [-o chunk_size=67108864] - -Options: - -- "chunk_size": update the current upload chunk size -- "service_account_file": update the current service account file - -shortcut - -Create shortcuts from files or directories - - rclone backend shortcut remote: [options] [+] - -This command creates shortcuts from files or directories. - -Usage: - - rclone backend shortcut drive: source_item destination_shortcut - rclone backend shortcut drive: source_item -o target=drive2: destination_shortcut - -In the first example this creates a shortcut from the "source_item" -which can be a file or a directory to the "destination_shortcut". 
The -"source_item" and the "destination_shortcut" should be relative paths -from "drive:" - -In the second example this creates a shortcut from the "source_item" -relative to "drive:" to the "destination_shortcut" relative to -"drive2:". This may fail with a permission error if the user -authenticated with "drive2:" can't read files from "drive:". - -Options: - -- "target": optional target remote for the shortcut destination - -drives - -List the Shared Drives available to this account - - rclone backend drives remote: [options] [+] - -This command lists the Shared Drives (Team Drives) available to this -account. - -Usage: - - rclone backend [-o config] drives drive: - -This will return a JSON list of objects like this - - [ - { - "id": "0ABCDEF-01234567890", - "kind": "drive#teamDrive", - "name": "My Drive" - }, - { - "id": "0ABCDEFabcdefghijkl", - "kind": "drive#teamDrive", - "name": "Test Drive" - } - ] - -With the -o config parameter it will output the list in a format -suitable for adding to a config file to make aliases for all the drives -found and a combined drive. - - [My Drive] - type = alias - remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=: - - [Test Drive] - type = alias - remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=: - - [AllDrives] - type = combine - upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:" - -Adding this to the rclone config file will cause those team drives to be -accessible with the aliases shown. Any illegal characters will be -substituted with "_" and duplicate names will have numbers suffixed. It -will also add a remote called AllDrives which shows all the shared -drives combined into one directory tree. - -untrash - -Untrash files and directories - - rclone backend untrash remote: [options] [+] - -This command untrashes all the files and directories in the directory -passed in recursively. - -Usage: - -This takes an optional directory to trash which make this easier to use -via the API. - - rclone backend untrash drive:directory - rclone backend --interactive untrash drive:directory subdir - -Use the --interactive/-i or --dry-run flag to see what would be restored -before restoring it. - -Result: - - { - "Untrashed": 17, - "Errors": 0 - } - -copyid - -Copy files by ID - - rclone backend copyid remote: [options] [+] - -This command copies files by ID - -Usage: - - rclone backend copyid drive: ID path - rclone backend copyid drive: ID1 path1 ID2 path2 - -It copies the drive file with ID given to the path (an rclone path which -will be passed internally to rclone copyto). The ID and path pairs can -be repeated. - -The path should end with a / to indicate copy the file as named to this -directory. If it doesn't end with a / then the last path component will -be used as the file name. - -If the destination is a drive backend then server-side copying will be -attempted if possible. - -Use the --interactive/-i or --dry-run flag to see what would be copied -before copying. - -exportformats - -Dump the export formats for debug purposes - - rclone backend exportformats remote: [options] [+] - -importformats - -Dump the import formats for debug purposes - - rclone backend importformats remote: [options] [+] - -Limitations - -Drive has quite a lot of rate limiting. This causes rclone to be limited -to transferring about 2 files per second only. Individual files may be -transferred much faster at 100s of MiB/s but lots of small files can -take a long time. - -Server side copies are also subject to a separate rate limit. 
If you see -User rate limit exceeded errors, wait at least 24 hours and retry. You -can disable server-side copies with --disable copy to download and -upload the files if you prefer. - -Limitations of Google Docs - -Google docs will appear as size -1 in rclone ls, rclone ncdu etc, and as -size 0 in anything which uses the VFS layer, e.g. rclone mount and -rclone serve. When calculating directory totals, e.g. in rclone size and -rclone ncdu, they will be counted in as empty files. - -This is because rclone can't find out the size of the Google docs -without downloading them. - -Google docs will transfer correctly with rclone sync, rclone copy etc as -rclone knows to ignore the size when doing the transfer. - -However an unfortunate consequence of this is that you may not be able -to download Google docs using rclone mount. If it doesn't work you will -get a 0 sized file. If you try again the doc may gain its correct size -and be downloadable. Whether it will work on not depends on the -application accessing the mount and the OS you are running - experiment -to find out if it does work for you! - -Duplicated files - -Sometimes, for no reason I've been able to track down, drive will -duplicate a file that rclone uploads. Drive unlike all the other remotes -can have duplicated files. - -Duplicated files cause problems with the syncing and you will see -messages in the log about duplicates. - -Use rclone dedupe to fix duplicated files. - -Note that this isn't just a problem with rclone, even Google Photos on -Android duplicates files on drive sometimes. - -Rclone appears to be re-copying files it shouldn't - -The most likely cause of this is the duplicated file issue above - run -rclone dedupe and check your logs for duplicate object or directory -messages. - -This can also be caused by a delay/caching on google drive's end when -comparing directory listings. Specifically with team drives used in -combination with --fast-list. Files that were uploaded recently may not -appear on the directory list sent to rclone when using --fast-list. - -Waiting a moderate period of time between attempts (estimated to be -approximately 1 hour) and/or not using --fast-list both seem to be -effective in preventing the problem. - -Making your own client_id - -When you use rclone with Google drive in its default configuration you -are using rclone's client_id. This is shared between all the rclone -users. There is a global rate limit on the number of queries per second -that each client_id can do set by Google. rclone already has a high -quota and I will continue to make sure it is high enough by contacting -Google. - -It is strongly recommended to use your own client ID as the default -rclone ID is heavily used. If you have multiple services running, it is -recommended to use an API key for each service. The default Google quota -is 10 transactions per second so it is recommended to stay under that -number as if you use more than that, it will cause rclone to rate limit -and make things slower. - -Here is how to create your own Google Drive client ID for rclone: - -1. Log into the Google API Console with your Google account. It doesn't - matter what Google account you use. (It need not be the same account - as the Google Drive you want to access) - -2. Select a project or create a new project. - -3. Under "ENABLE APIS AND SERVICES" search for "Drive", and enable the - "Google Drive API". - -4. 
Click "Credentials" in the left-side panel (not "Create - credentials", which opens the wizard), then "Create credentials" - -5. If you already configured an "Oauth Consent Screen", then skip to - the next step; if not, click on "CONFIGURE CONSENT SCREEN" button - (near the top right corner of the right panel), then select - "External" and click on "CREATE"; on the next screen, enter an - "Application name" ("rclone" is OK); enter "User Support Email" - (your own email is OK); enter "Developer Contact Email" (your own - email is OK); then click on "Save" (all other data is optional). You - will also have to add some scopes, including .../auth/docs and - .../auth/drive in order to be able to edit, create and delete files - with RClone. You may also want to include the - ../auth/drive.metadata.readonly scope. After adding scopes, click - "Save and continue" to add test users. Be sure to add your own - account to the test users. Once you've added yourself as a test user - and saved the changes, click again on "Credentials" on the left - panel to go back to the "Credentials" screen. - - (PS: if you are a GSuite user, you could also select "Internal" - instead of "External" above, but this will restrict API use to - Google Workspace users in your organisation). - -6. Click on the "+ CREATE CREDENTIALS" button at the top of the screen, - then select "OAuth client ID". - -7. Choose an application type of "Desktop app" and click "Create". (the - default name is fine) - -8. It will show you a client ID and client secret. Make a note of - these. - - (If you selected "External" at Step 5 continue to Step 9. If you - chose "Internal" you don't need to publish and can skip straight to - Step 10 but your destination drive must be part of the same Google - Workspace.) - -9. Go to "Oauth consent screen" and then click "PUBLISH APP" button and - confirm. You will also want to add yourself as a test user. - -10. Provide the noted client ID and client secret to rclone. - -Be aware that, due to the "enhanced security" recently introduced by -Google, you are theoretically expected to "submit your app for -verification" and then wait a few weeks(!) for their response; in -practice, you can go right ahead and use the client ID and client secret -with rclone, the only issue will be a very scary confirmation screen -shown when you connect via your browser for rclone to be able to get its -token-id (but as this only happens during the remote configuration, it's -not such a big deal). Keeping the application in "Testing" will work as -well, but the limitation is that any grants will expire after a week, -which can be annoying to refresh constantly. If, for whatever reason, a -short grant time is not a problem, then keeping the application in -testing mode would also be sufficient. - -(Thanks to @balazer on github for these instructions.) - -Sometimes, creation of an OAuth consent in Google API Console fails due -to an error message “The request failed because changes to one of the -field of the resource is not supported”. As a convenient workaround, the -necessary Google Drive API key can be created on the Python Quickstart -page. Just push the Enable the Drive API button to receive the Client ID -and Secret. Note that it will automatically create a new project in the -API Console. - -Google Photos - -The rclone backend for Google Photos is a specialized backend for -transferring photos and videos to and from Google Photos. 
- -NB The Google Photos API which rclone uses has quite a few limitations, -so please read the limitations section carefully to make sure it is -suitable for your use. - -Configuration - -The initial setup for google cloud storage involves getting a token from -Google Photos which you need to do in your browser. rclone config walks -you through it. - -Here is an example of how to make a remote called remote. First run: - - rclone config - -This will guide you through an interactive setup process: - - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n - name> remote - Type of storage to configure. - Enter a string value. Press Enter for the default (""). - Choose a number from below, or type in your own value - [snip] - XX / Google Photos - \ "google photos" - [snip] - Storage> google photos - ** See help for google photos backend at: https://rclone.org/googlephotos/ ** + Note that Google Drive takes some time (minutes to days) to empty the + trash even though the command returns within a few seconds. No output + is echoed, so there will be no confirmation even using -v or -vv. + + ### Quota information + + To view your current quota you can use the `rclone about remote:` + command which will display your usage limit (quota), the usage in Google + Drive, the size of all files in the Trash and the space used by other + Google services such as Gmail. This command does not take any path + arguments. + + #### Import/Export of google documents + + Google documents can be exported from and uploaded to Google Drive. + + When rclone downloads a Google doc it chooses a format to download + depending upon the `--drive-export-formats` setting. + By default the export formats are `docx,xlsx,pptx,svg` which are a + sensible default for an editable document. + + When choosing a format, rclone runs down the list provided in order + and chooses the first file format the doc can be exported as from the + list. If the file can't be exported to a format on the formats list, + then rclone will choose a format from the default list. + + If you prefer an archive copy then you might use `--drive-export-formats + pdf`, or if you prefer openoffice/libreoffice formats you might use + `--drive-export-formats ods,odt,odp`. + + Note that rclone adds the extension to the google doc, so if it is + called `My Spreadsheet` on google docs, it will be exported as `My + Spreadsheet.xlsx` or `My Spreadsheet.pdf` etc. + + When importing files into Google Drive, rclone will convert all + files with an extension in `--drive-import-formats` to their + associated document type. + rclone will not convert any files by default, since the conversion + is lossy process. + + The conversion must result in a file with the same extension when + the `--drive-export-formats` rules are applied to the uploaded document. + + Here are some examples for allowed and prohibited conversions. + + | export-formats | import-formats | Upload Ext | Document Ext | Allowed | + | -------------- | -------------- | ---------- | ------------ | ------- | + | odt | odt | odt | odt | Yes | + | odt | docx,odt | odt | odt | Yes | + | | docx | docx | docx | Yes | + | | odt | odt | docx | No | + | odt,docx | docx,odt | docx | odt | No | + | docx,odt | docx,odt | docx | docx | Yes | + | docx,odt | docx,odt | odt | docx | No | + + This limitation can be disabled by specifying `--drive-allow-import-name-change`. 
+   When using this flag, rclone can convert multiple file types resulting
+   in the same document type at once, e.g. with `--drive-import-formats docx,odt,txt`,
+   all files having these extensions would result in a document represented as a docx file.
+   This brings the additional risk of overwriting a document, if multiple files
+   have the same stem. Many rclone operations will not handle this name change
+   in any way. They assume an equal name when copying files and might copy the
+   file again or delete them when the name changes.

+   Here are the possible export extensions with their corresponding mime types.
+   Most of these can also be used for importing, but there are more that are not
+   listed here. Some of these additional ones might only be available when
+   the operating system provides the correct MIME type entries.

+   This list can be changed by Google Drive at any time and might not
+   represent the currently available conversions.

+   | Extension | Mime Type | Description |
+   | --------- | --------- | ----------- |
+   | bmp | image/bmp | Windows Bitmap format |
+   | csv | text/csv | Standard CSV format for Spreadsheets |
+   | doc | application/msword | Classic Word file |
+   | docx | application/vnd.openxmlformats-officedocument.wordprocessingml.document | Microsoft Office Document |
+   | epub | application/epub+zip | E-book format |
+   | html | text/html | An HTML Document |
+   | jpg | image/jpeg | A JPEG Image File |
+   | json | application/vnd.google-apps.script+json | JSON Text Format for Google Apps scripts |
+   | odp | application/vnd.oasis.opendocument.presentation | Openoffice Presentation |
+   | ods | application/vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet |
+   | ods | application/x-vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet |
+   | odt | application/vnd.oasis.opendocument.text | Openoffice Document |
+   | pdf | application/pdf | Adobe PDF Format |
+   | pjpeg | image/pjpeg | Progressive JPEG Image |
+   | png | image/png | PNG Image Format |
+   | pptx | application/vnd.openxmlformats-officedocument.presentationml.presentation | Microsoft Office Powerpoint |
+   | rtf | application/rtf | Rich Text Format |
+   | svg | image/svg+xml | Scalable Vector Graphics Format |
+   | tsv | text/tab-separated-values | Standard TSV format for spreadsheets |
+   | txt | text/plain | Plain Text |
+   | wmf | application/x-msmetafile | Windows Meta File |
+   | xls | application/vnd.ms-excel | Classic Excel file |
+   | xlsx | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet | Microsoft Office Spreadsheet |
+   | zip | application/zip | A ZIP file of HTML, Images CSS |

+   Google documents can also be exported as link files. These files will
+   open a browser window for the Google Docs website of that document
+   when opened. The link file extension has to be specified as a
+   `--drive-export-formats` parameter. They will match all available
+   Google Documents.

+   | Extension | Description | OS Support |
+   | --------- | ----------- | ---------- |
+   | desktop | freedesktop.org specified desktop entry | Linux |
+   | link.html | An HTML Document with a redirect | All |
+   | url | INI style link file | macOS, Windows |
+   | webloc | macOS specific XML format | macOS |


+   ### Standard options

+   Here are the Standard options specific to drive (Google Drive).

+   #### --drive-client-id

+   Google Application Client Id
+   Setting your own is recommended.
+   See https://rclone.org/drive/#making-your-own-client-id for how to create your own.
+ If you leave this blank, it will use an internal key which is low performance. + + Properties: + + - Config: client_id + - Env Var: RCLONE_DRIVE_CLIENT_ID + - Type: string + - Required: false + + #### --drive-client-secret + + OAuth Client Secret. + Leave blank normally. - Enter a string value. Press Enter for the default (""). - client_id> - Google Application Client Secret + + Properties: + + - Config: client_secret + - Env Var: RCLONE_DRIVE_CLIENT_SECRET + - Type: string + - Required: false + + #### --drive-scope + + Scope that rclone should use when requesting access from drive. + + Properties: + + - Config: scope + - Env Var: RCLONE_DRIVE_SCOPE + - Type: string + - Required: false + - Examples: + - "drive" + - Full access all files, excluding Application Data Folder. + - "drive.readonly" + - Read-only access to file metadata and file contents. + - "drive.file" + - Access to files created by rclone only. + - These are visible in the drive website. + - File authorization is revoked when the user deauthorizes the app. + - "drive.appfolder" + - Allows read and write access to the Application Data folder. + - This is not visible in the drive website. + - "drive.metadata.readonly" + - Allows read-only access to file metadata but + - does not allow any access to read or download file content. + + #### --drive-service-account-file + + Service Account Credentials JSON file path. + Leave blank normally. - Enter a string value. Press Enter for the default (""). - client_secret> - Set to make the Google Photos backend read only. + Needed only if you want use SA instead of interactive login. - If you choose read only then rclone will only request read only access - to your photos, otherwise rclone will request full access. - Enter a boolean value (true or false). Press Enter for the default ("false"). - read_only> - Edit advanced config? (y/n) - y) Yes - n) No - y/n> n - Remote config - Use web browser to automatically authenticate rclone with remote? - * Say Y if the machine running rclone has a web browser you can use - * Say N if running rclone on a (remote) machine without web browser access - If not sure try Y. If Y failed, try N. - y) Yes - n) No - y/n> y - If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth - Log in and authorize rclone for access - Waiting for code... - Got code + Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`. - *** IMPORTANT: All media items uploaded to Google Photos with rclone - *** are stored in full resolution at original quality. These uploads - *** will count towards storage in your Google Account. + Properties: - -------------------- - [remote] - type = google photos - token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2019-06-28T17:38:04.644930156+01:00"} - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y + - Config: service_account_file + - Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_FILE + - Type: string + - Required: false -See the remote setup docs for how to set it up on a machine with no -Internet browser available. + #### --drive-alternate-export -Note that rclone runs a webserver on your local machine to collect the -token as returned from Google if using web browser to automatically -authenticate. This only runs from the moment it opens your browser to -the moment you get back the verification code. 
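+
+   As a sketch, a service account can also be used for just one command
+   by supplying the file and the user to impersonate as flags (the path
+   and addresses here are placeholders):
+
+       rclone lsf gdrive:backup \
+           --drive-service-account-file /home/foo/myJSONfile.json \
+           --drive-impersonate foo@example.com
+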
This is on -http://127.0.0.1:53682/ and this may require you to unblock it -temporarily if you are running a host firewall, or use manual mode. + Deprecated: No longer needed. -This remote is called remote and can now be used like this + Properties: -See all the albums in your photos + - Config: alternate_export + - Env Var: RCLONE_DRIVE_ALTERNATE_EXPORT + - Type: bool + - Default: false - rclone lsd remote:album + ### Advanced options -Make a new album + Here are the Advanced options specific to drive (Google Drive). - rclone mkdir remote:album/newAlbum + #### --drive-token -List the contents of an album + OAuth Access Token as a JSON blob. - rclone ls remote:album/newAlbum + Properties: -Sync /home/local/images to the Google Photos, removing any excess files -in the album. + - Config: token + - Env Var: RCLONE_DRIVE_TOKEN + - Type: string + - Required: false - rclone sync --interactive /home/local/image remote:album/newAlbum + #### --drive-auth-url -Layout + Auth server URL. -As Google Photos is not a general purpose cloud storage system, the -backend is laid out to help you navigate it. + Leave blank to use the provider defaults. -The directories under media show different ways of categorizing the -media. Each file will appear multiple times. So if you want to make a -backup of your google photos you might choose to backup -remote:media/by-month. (NB remote:media/by-day is rather slow at the -moment so avoid for syncing.) + Properties: -Note that all your photos and videos will appear somewhere under media, -but they may not appear under album unless you've put them into albums. + - Config: auth_url + - Env Var: RCLONE_DRIVE_AUTH_URL + - Type: string + - Required: false - / - - upload - - file1.jpg - - file2.jpg - - ... - - media - - all - - file1.jpg - - file2.jpg - - ... - - by-year - - 2000 - - file1.jpg - - ... - - 2001 - - file2.jpg - - ... - - ... - - by-month - - 2000 - - 2000-01 - - file1.jpg - - ... - - 2000-02 - - file2.jpg - - ... - - ... - - by-day - - 2000 - - 2000-01-01 - - file1.jpg - - ... - - 2000-01-02 - - file2.jpg - - ... - - ... - - album - - album name - - album name/sub - - shared-album - - album name - - album name/sub - - feature - - favorites - - file1.jpg - - file2.jpg + #### --drive-token-url -There are two writable parts of the tree, the upload directory and sub -directories of the album directory. + Token server url. -The upload directory is for uploading files you don't want to put into -albums. This will be empty to start with and will contain the files -you've uploaded for one rclone session only, becoming empty again when -you restart rclone. The use case for this would be if you have a load of -files you just want to once off dump into Google Photos. For repeated -syncing, uploading to album will work better. + Leave blank to use the provider defaults. -Directories within the album directory are also writeable and you may -create new directories (albums) under album. If you copy files with a -directory hierarchy in there then rclone will create albums with the / -character in them. For example if you do + Properties: - rclone copy /path/to/images remote:album/images + - Config: token_url + - Env Var: RCLONE_DRIVE_TOKEN_URL + - Type: string + - Required: false -and the images directory contains + #### --drive-root-folder-id - images - - file1.jpg - dir - file2.jpg - dir2 - dir3 - file3.jpg + ID of the root folder. + Leave blank normally. 
-Then rclone will create the following albums with the following files in + Fill in to access "Computers" folders (see docs), or for rclone to use + a non root folder as its starting point. -- images - - file1.jpg -- images/dir - - file2.jpg -- images/dir2/dir3 - - file3.jpg -This means that you can use the album path pretty much like a normal -filesystem and it is a good target for repeated syncing. + Properties: -The shared-album directory shows albums shared with you or by you. This -is similar to the Sharing tab in the Google Photos web interface. + - Config: root_folder_id + - Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID + - Type: string + - Required: false -Standard options + #### --drive-service-account-credentials -Here are the Standard options specific to google photos (Google Photos). + Service Account Credentials JSON blob. ---gphotos-client-id + Leave blank normally. + Needed only if you want use SA instead of interactive login. -OAuth Client Id. + Properties: -Leave blank normally. + - Config: service_account_credentials + - Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_CREDENTIALS + - Type: string + - Required: false -Properties: + #### --drive-team-drive -- Config: client_id -- Env Var: RCLONE_GPHOTOS_CLIENT_ID -- Type: string -- Required: false + ID of the Shared Drive (Team Drive). ---gphotos-client-secret + Properties: -OAuth Client Secret. + - Config: team_drive + - Env Var: RCLONE_DRIVE_TEAM_DRIVE + - Type: string + - Required: false -Leave blank normally. + #### --drive-auth-owner-only -Properties: + Only consider files owned by the authenticated user. -- Config: client_secret -- Env Var: RCLONE_GPHOTOS_CLIENT_SECRET -- Type: string -- Required: false + Properties: ---gphotos-read-only + - Config: auth_owner_only + - Env Var: RCLONE_DRIVE_AUTH_OWNER_ONLY + - Type: bool + - Default: false -Set to make the Google Photos backend read only. + #### --drive-use-trash + + Send files to the trash instead of deleting permanently. + + Defaults to true, namely sending files to the trash. + Use `--drive-use-trash=false` to delete files permanently instead. + + Properties: + + - Config: use_trash + - Env Var: RCLONE_DRIVE_USE_TRASH + - Type: bool + - Default: true + + #### --drive-copy-shortcut-content + + Server side copy contents of shortcuts instead of the shortcut. + + When doing server side copies, normally rclone will copy shortcuts as + shortcuts. + + If this flag is used then rclone will copy the contents of shortcuts + rather than shortcuts themselves when doing server side copies. + + Properties: + + - Config: copy_shortcut_content + - Env Var: RCLONE_DRIVE_COPY_SHORTCUT_CONTENT + - Type: bool + - Default: false + + #### --drive-skip-gdocs + + Skip google documents in all listings. + + If given, gdocs practically become invisible to rclone. + + Properties: + + - Config: skip_gdocs + - Env Var: RCLONE_DRIVE_SKIP_GDOCS + - Type: bool + - Default: false + + #### --drive-skip-checksum-gphotos + + Skip MD5 checksum on Google photos and videos only. + + Use this if you get checksum errors when transferring Google photos or + videos. + + Setting this flag will cause Google photos and videos to return a + blank MD5 checksum. + + Google photos are identified by being in the "photos" space. + + Corrupted checksums are caused by Google modifying the image/video but + not updating the checksum. 
+ + Properties: + + - Config: skip_checksum_gphotos + - Env Var: RCLONE_DRIVE_SKIP_CHECKSUM_GPHOTOS + - Type: bool + - Default: false + + #### --drive-shared-with-me + + Only show files that are shared with me. + + Instructs rclone to operate on your "Shared with me" folder (where + Google Drive lets you access the files and folders others have shared + with you). + + This works both with the "list" (lsd, lsl, etc.) and the "copy" + commands (copy, sync, etc.), and with all other commands too. + + Properties: + + - Config: shared_with_me + - Env Var: RCLONE_DRIVE_SHARED_WITH_ME + - Type: bool + - Default: false + + #### --drive-trashed-only + + Only show files that are in the trash. + + This will show trashed files in their original directory structure. + + Properties: + + - Config: trashed_only + - Env Var: RCLONE_DRIVE_TRASHED_ONLY + - Type: bool + - Default: false + + #### --drive-starred-only + + Only show files that are starred. + + Properties: + + - Config: starred_only + - Env Var: RCLONE_DRIVE_STARRED_ONLY + - Type: bool + - Default: false + + #### --drive-formats + + Deprecated: See export_formats. + + Properties: + + - Config: formats + - Env Var: RCLONE_DRIVE_FORMATS + - Type: string + - Required: false + + #### --drive-export-formats + + Comma separated list of preferred formats for downloading Google docs. + + Properties: + + - Config: export_formats + - Env Var: RCLONE_DRIVE_EXPORT_FORMATS + - Type: string + - Default: "docx,xlsx,pptx,svg" + + #### --drive-import-formats + + Comma separated list of preferred formats for uploading Google docs. + + Properties: + + - Config: import_formats + - Env Var: RCLONE_DRIVE_IMPORT_FORMATS + - Type: string + - Required: false + + #### --drive-allow-import-name-change + + Allow the filetype to change when uploading Google docs. + + E.g. file.doc to file.docx. This will confuse sync and reupload every time. + + Properties: + + - Config: allow_import_name_change + - Env Var: RCLONE_DRIVE_ALLOW_IMPORT_NAME_CHANGE + - Type: bool + - Default: false + + #### --drive-use-created-date + + Use file created date instead of modified date. + + Useful when downloading data and you want the creation date used in + place of the last modified date. + + **WARNING**: This flag may have some unexpected consequences. + + When uploading to your drive all files will be overwritten unless they + haven't been modified since their creation. And the inverse will occur + while downloading. This side effect can be avoided by using the + "--checksum" flag. + + This feature was implemented to retain photos capture date as recorded + by google photos. You will first need to check the "Create a Google + Photos folder" option in your google drive settings. You can then copy + or move the photos locally and use the date the image was taken + (created) set as the modification date. + + Properties: + + - Config: use_created_date + - Env Var: RCLONE_DRIVE_USE_CREATED_DATE + - Type: bool + - Default: false + + #### --drive-use-shared-date + + Use date file was shared instead of modified date. + + Note that, as with "--drive-use-created-date", this flag may have + unexpected consequences when uploading/downloading files. + + If both this flag and "--drive-use-created-date" are set, the created + date is used. + + Properties: + + - Config: use_shared_date + - Env Var: RCLONE_DRIVE_USE_SHARED_DATE + - Type: bool + - Default: false + + #### --drive-list-chunk + + Size of listing chunk 100-1000, 0 to disable. 
+ + Properties: + + - Config: list_chunk + - Env Var: RCLONE_DRIVE_LIST_CHUNK + - Type: int + - Default: 1000 + + #### --drive-impersonate + + Impersonate this user when using a service account. + + Properties: + + - Config: impersonate + - Env Var: RCLONE_DRIVE_IMPERSONATE + - Type: string + - Required: false + + #### --drive-upload-cutoff + + Cutoff for switching to chunked upload. + + Properties: + + - Config: upload_cutoff + - Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF + - Type: SizeSuffix + - Default: 8Mi + + #### --drive-chunk-size + + Upload chunk size. + + Must a power of 2 >= 256k. + + Making this larger will improve performance, but note that each chunk + is buffered in memory one per transfer. + + Reducing this will reduce memory usage but decrease performance. + + Properties: + + - Config: chunk_size + - Env Var: RCLONE_DRIVE_CHUNK_SIZE + - Type: SizeSuffix + - Default: 8Mi + + #### --drive-acknowledge-abuse + + Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + + If downloading a file returns the error "This file has been identified + as malware or spam and cannot be downloaded" with the error code + "cannotDownloadAbusiveFile" then supply this flag to rclone to + indicate you acknowledge the risks of downloading the file and rclone + will download it anyway. + + Note that if you are using service account it will need Manager + permission (not Content Manager) to for this flag to work. If the SA + does not have the right permission, Google will just ignore the flag. + + Properties: + + - Config: acknowledge_abuse + - Env Var: RCLONE_DRIVE_ACKNOWLEDGE_ABUSE + - Type: bool + - Default: false + + #### --drive-keep-revision-forever + + Keep new head revision of each file forever. + + Properties: + + - Config: keep_revision_forever + - Env Var: RCLONE_DRIVE_KEEP_REVISION_FOREVER + - Type: bool + - Default: false + + #### --drive-size-as-quota + + Show sizes as storage quota usage, not actual size. + + Show the size of a file as the storage quota used. This is the + current version plus any older versions that have been set to keep + forever. + + **WARNING**: This flag may have some unexpected consequences. + + It is not recommended to set this flag in your config - the + recommended usage is using the flag form --drive-size-as-quota when + doing rclone ls/lsl/lsf/lsjson/etc only. + + If you do use this flag for syncing (not recommended) then you will + need to use --ignore size also. + + Properties: + + - Config: size_as_quota + - Env Var: RCLONE_DRIVE_SIZE_AS_QUOTA + - Type: bool + - Default: false + + #### --drive-v2-download-min-size + + If Object's are greater, use drive v2 API to download. + + Properties: + + - Config: v2_download_min_size + - Env Var: RCLONE_DRIVE_V2_DOWNLOAD_MIN_SIZE + - Type: SizeSuffix + - Default: off + + #### --drive-pacer-min-sleep + + Minimum time to sleep between API calls. + + Properties: + + - Config: pacer_min_sleep + - Env Var: RCLONE_DRIVE_PACER_MIN_SLEEP + - Type: Duration + - Default: 100ms + + #### --drive-pacer-burst + + Number of API calls to allow without sleeping. + + Properties: + + - Config: pacer_burst + - Env Var: RCLONE_DRIVE_PACER_BURST + - Type: int + - Default: 100 + + #### --drive-server-side-across-configs + + Deprecated: use --server-side-across-configs instead. + + Allow server-side operations (e.g. copy) to work across different drive configs. + + This can be useful if you wish to do a server-side copy between two + different Google drives. 
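+ 
+ For example, a server-side copy between two drives might look like this (a minimal sketch, assuming two configured remotes named "drive1:" and "drive2:"), using the replacement global flag:
+ 
+     rclone copy --server-side-across-configs drive1:src drive2:dst
+ 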
Note that this isn't enabled by default + because it isn't easy to tell if it will work between any two + configurations. + + Properties: + + - Config: server_side_across_configs + - Env Var: RCLONE_DRIVE_SERVER_SIDE_ACROSS_CONFIGS + - Type: bool + - Default: false + + #### --drive-disable-http2 + + Disable drive using http2. + + There is currently an unsolved issue with the google drive backend and + HTTP/2. HTTP/2 is therefore disabled by default for the drive backend + but can be re-enabled here. When the issue is solved this flag will + be removed. + + See: https://github.com/rclone/rclone/issues/3631 + + + + Properties: + + - Config: disable_http2 + - Env Var: RCLONE_DRIVE_DISABLE_HTTP2 + - Type: bool + - Default: true + + #### --drive-stop-on-upload-limit + + Make upload limit errors be fatal. + + At the time of writing it is only possible to upload 750 GiB of data to + Google Drive a day (this is an undocumented limit). When this limit is + reached Google Drive produces a slightly different error message. When + this flag is set it causes these errors to be fatal. These will stop + the in-progress sync. + + Note that this detection is relying on error message strings which + Google don't document so it may break in the future. + + See: https://github.com/rclone/rclone/issues/3857 + + + Properties: + + - Config: stop_on_upload_limit + - Env Var: RCLONE_DRIVE_STOP_ON_UPLOAD_LIMIT + - Type: bool + - Default: false + + #### --drive-stop-on-download-limit + + Make download limit errors be fatal. + + At the time of writing it is only possible to download 10 TiB of data from + Google Drive a day (this is an undocumented limit). When this limit is + reached Google Drive produces a slightly different error message. When + this flag is set it causes these errors to be fatal. These will stop + the in-progress sync. + + Note that this detection is relying on error message strings which + Google don't document so it may break in the future. + + + Properties: + + - Config: stop_on_download_limit + - Env Var: RCLONE_DRIVE_STOP_ON_DOWNLOAD_LIMIT + - Type: bool + - Default: false + + #### --drive-skip-shortcuts + + If set skip shortcut files. + + Normally rclone dereferences shortcut files making them appear as if + they are the original file (see [the shortcuts section](#shortcuts)). + If this flag is set then rclone will ignore shortcut files completely. + + + Properties: + + - Config: skip_shortcuts + - Env Var: RCLONE_DRIVE_SKIP_SHORTCUTS + - Type: bool + - Default: false + + #### --drive-skip-dangling-shortcuts + + If set skip dangling shortcut files. + + If this is set then rclone will not show any dangling shortcuts in listings. + + + Properties: + + - Config: skip_dangling_shortcuts + - Env Var: RCLONE_DRIVE_SKIP_DANGLING_SHORTCUTS + - Type: bool + - Default: false + + #### --drive-resource-key + + Resource key for accessing a link-shared file. + + If you need to access files shared with a link like this + + https://drive.google.com/drive/folders/XXX?resourcekey=YYY&usp=sharing + + Then you will need to use the first part "XXX" as the "root_folder_id" + and the second part "YYY" as the "resource_key" otherwise you will get + 404 not found errors when trying to access the directory. + + See: https://developers.google.com/drive/api/guides/resource-keys + + This resource key requirement only applies to a subset of old files. 
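+ 
+ As a sketch, the corresponding config section (using a hypothetical remote name and the placeholder "XXX"/"YYY" values from the link above) would look like:
+ 
+     [shared_link]
+     type = drive
+     root_folder_id = XXX
+     resource_key = YYY
+ 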
+ + Note also that opening the folder once in the web interface (with the + user you've authenticated rclone with) seems to be enough so that the + resource key is not needed. + + + Properties: + + - Config: resource_key + - Env Var: RCLONE_DRIVE_RESOURCE_KEY + - Type: string + - Required: false + + #### --drive-fast-list-bug-fix + + Work around a bug in Google Drive listing. + + Normally rclone will work around a bug in Google Drive when using + --fast-list (ListR) where the search "(A in parents) or (B in + parents)" returns nothing sometimes. See #3114, #4289 and + https://issuetracker.google.com/issues/149522397 + + Rclone detects this by finding no items in more than one directory + when listing and retries them as lists of individual directories. + + This means that if you have a lot of empty directories rclone will end + up listing them all individually and this can take many more API + calls. + + This flag allows the work-around to be disabled. This is **not** + recommended in normal use - only if you have a particular case you are + having trouble with like many empty directories. + + + Properties: + + - Config: fast_list_bug_fix + - Env Var: RCLONE_DRIVE_FAST_LIST_BUG_FIX + - Type: bool + - Default: true + + #### --drive-encoding + + The encoding for the backend. + + See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + + Properties: + + - Config: encoding + - Env Var: RCLONE_DRIVE_ENCODING + - Type: MultiEncoder + - Default: InvalidUtf8 + + #### --drive-env-auth + + Get IAM credentials from runtime (environment variables or instance meta data if no env vars). + + Only applies if service_account_file and service_account_credentials is blank. + + Properties: + + - Config: env_auth + - Env Var: RCLONE_DRIVE_ENV_AUTH + - Type: bool + - Default: false + - Examples: + - "false" + - Enter credentials in the next step. + - "true" + - Get GCP IAM credentials from the environment (env vars or IAM). + + ## Backend commands + + Here are the commands specific to the drive backend. + + Run them with + + rclone backend COMMAND remote: + + The help below will explain what arguments each command takes. + + See the [backend](https://rclone.org/commands/rclone_backend/) command for more + info on how to pass options and arguments. + + These can be run on a running backend using the rc command + [backend/command](https://rclone.org/rc/#backend-command). 
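+ 
+ For example, to invoke the "drives" command documented below, either directly or against a running rclone instance (a minimal sketch, assuming a remote named "drive:"):
+ 
+     rclone backend drives drive:
+     rclone rc backend/command command=drives fs=drive:
+ 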
+ + ### get + + Get command for fetching the drive config parameters + + rclone backend get remote: [options] [+] + + This is a get command which will be used to fetch the various drive config parameters + + Usage Examples: + + rclone backend get drive: [-o service_account_file] [-o chunk_size] + rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o chunk_size] + + + Options: + + - "chunk_size": show the current upload chunk size + - "service_account_file": show the current service account file + + ### set + + Set command for updating the drive config parameters + + rclone backend set remote: [options] [+] + + This is a set command which will be used to update the various drive config parameters + + Usage Examples: + + rclone backend set drive: [-o service_account_file=sa.json] [-o chunk_size=67108864] + rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json] [-o chunk_size=67108864] + + + Options: + + - "chunk_size": update the current upload chunk size + - "service_account_file": update the current service account file + + ### shortcut + + Create shortcuts from files or directories + + rclone backend shortcut remote: [options] [+] + + This command creates shortcuts from files or directories. + + Usage: + + rclone backend shortcut drive: source_item destination_shortcut + rclone backend shortcut drive: source_item -o target=drive2: destination_shortcut + + In the first example this creates a shortcut from the "source_item" + which can be a file or a directory to the "destination_shortcut". The + "source_item" and the "destination_shortcut" should be relative paths + from "drive:" + + In the second example this creates a shortcut from the "source_item" + relative to "drive:" to the "destination_shortcut" relative to + "drive2:". This may fail with a permission error if the user + authenticated with "drive2:" can't read files from "drive:". + + + Options: + + - "target": optional target remote for the shortcut destination + + ### drives + + List the Shared Drives available to this account + + rclone backend drives remote: [options] [+] + + This command lists the Shared Drives (Team Drives) available to this + account. + + Usage: + + rclone backend [-o config] drives drive: + + This will return a JSON list of objects like this + + [ + { + "id": "0ABCDEF-01234567890", + "kind": "drive#teamDrive", + "name": "My Drive" + }, + { + "id": "0ABCDEFabcdefghijkl", + "kind": "drive#teamDrive", + "name": "Test Drive" + } + ] + + With the -o config parameter it will output the list in a format + suitable for adding to a config file to make aliases for all the + drives found and a combined drive. + + [My Drive] + type = alias + remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=: + + [Test Drive] + type = alias + remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=: + + [AllDrives] + type = combine + upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:" + + Adding this to the rclone config file will cause those team drives to + be accessible with the aliases shown. Any illegal characters will be + substituted with "_" and duplicate names will have numbers suffixed. + It will also add a remote called AllDrives which shows all the shared + drives combined into one directory tree. + + + ### untrash + + Untrash files and directories + + rclone backend untrash remote: [options] [+] + + This command untrashes all the files and directories in the directory + passed in recursively. 
+ + Usage: + + This takes an optional directory to trash which make this easier to + use via the API. + + rclone backend untrash drive:directory + rclone backend --interactive untrash drive:directory subdir + + Use the --interactive/-i or --dry-run flag to see what would be restored before restoring it. + + Result: + + { + "Untrashed": 17, + "Errors": 0 + } + + + ### copyid + + Copy files by ID + + rclone backend copyid remote: [options] [+] + + This command copies files by ID + + Usage: + + rclone backend copyid drive: ID path + rclone backend copyid drive: ID1 path1 ID2 path2 + + It copies the drive file with ID given to the path (an rclone path which + will be passed internally to rclone copyto). The ID and path pairs can be + repeated. + + The path should end with a / to indicate copy the file as named to + this directory. If it doesn't end with a / then the last path + component will be used as the file name. + + If the destination is a drive backend then server-side copying will be + attempted if possible. + + Use the --interactive/-i or --dry-run flag to see what would be copied before copying. + + + ### exportformats + + Dump the export formats for debug purposes + + rclone backend exportformats remote: [options] [+] + + ### importformats + + Dump the import formats for debug purposes + + rclone backend importformats remote: [options] [+] + + + + ## Limitations + + Drive has quite a lot of rate limiting. This causes rclone to be + limited to transferring about 2 files per second only. Individual + files may be transferred much faster at 100s of MiB/s but lots of + small files can take a long time. + + Server side copies are also subject to a separate rate limit. If you + see User rate limit exceeded errors, wait at least 24 hours and retry. + You can disable server-side copies with `--disable copy` to download + and upload the files if you prefer. + + ### Limitations of Google Docs + + Google docs will appear as size -1 in `rclone ls`, `rclone ncdu` etc, + and as size 0 in anything which uses the VFS layer, e.g. `rclone mount` + and `rclone serve`. When calculating directory totals, e.g. in + `rclone size` and `rclone ncdu`, they will be counted in as empty + files. + + This is because rclone can't find out the size of the Google docs + without downloading them. + + Google docs will transfer correctly with `rclone sync`, `rclone copy` + etc as rclone knows to ignore the size when doing the transfer. + + However an unfortunate consequence of this is that you may not be able + to download Google docs using `rclone mount`. If it doesn't work you + will get a 0 sized file. If you try again the doc may gain its + correct size and be downloadable. Whether it will work on not depends + on the application accessing the mount and the OS you are running - + experiment to find out if it does work for you! + + ### Duplicated files + + Sometimes, for no reason I've been able to track down, drive will + duplicate a file that rclone uploads. Drive unlike all the other + remotes can have duplicated files. + + Duplicated files cause problems with the syncing and you will see + messages in the log about duplicates. + + Use `rclone dedupe` to fix duplicated files. + + Note that this isn't just a problem with rclone, even Google Photos on + Android duplicates files on drive sometimes. + + ### Rclone appears to be re-copying files it shouldn't + + The most likely cause of this is the duplicated file issue above - run + `rclone dedupe` and check your logs for duplicate object or directory + messages. 
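+ 
+ For example (a sketch - use --dry-run or --interactive first to see what would change), duplicates can be merged keeping the newest copy with:
+ 
+     rclone dedupe --dedupe-mode newest drive:path
+ 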
+ + This can also be caused by a delay/caching on google drive's end when + comparing directory listings. Specifically with team drives used in + combination with --fast-list. Files that were uploaded recently may + not appear on the directory list sent to rclone when using --fast-list. + + Waiting a moderate period of time between attempts (estimated to be + approximately 1 hour) and/or not using --fast-list both seem to be + effective in preventing the problem. + + ## Making your own client_id + + When you use rclone with Google drive in its default configuration you + are using rclone's client_id. This is shared between all the rclone + users. There is a global rate limit on the number of queries per + second that each client_id can do set by Google. rclone already has a + high quota and I will continue to make sure it is high enough by + contacting Google. + + It is strongly recommended to use your own client ID as the default rclone ID is heavily used. If you have multiple services running, it is recommended to use an API key for each service. The default Google quota is 10 transactions per second so it is recommended to stay under that number as if you use more than that, it will cause rclone to rate limit and make things slower. + + Here is how to create your own Google Drive client ID for rclone: + + 1. Log into the [Google API + Console](https://console.developers.google.com/) with your Google + account. It doesn't matter what Google account you use. (It need not + be the same account as the Google Drive you want to access) + + 2. Select a project or create a new project. + + 3. Under "ENABLE APIS AND SERVICES" search for "Drive", and enable the + "Google Drive API". + + 4. Click "Credentials" in the left-side panel (not "Create + credentials", which opens the wizard). + + 5. If you already configured an "Oauth Consent Screen", then skip + to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button + (near the top right corner of the right panel), then select "External" + and click on "CREATE"; on the next screen, enter an "Application name" + ("rclone" is OK); enter "User Support Email" (your own email is OK); + enter "Developer Contact Email" (your own email is OK); then click on + "Save" (all other data is optional). You will also have to add some scopes, + including `.../auth/docs` and `.../auth/drive` in order to be able to edit, + create and delete files with RClone. You may also want to include the + `../auth/drive.metadata.readonly` scope. After adding scopes, click + "Save and continue" to add test users. Be sure to add your own account to + the test users. Once you've added yourself as a test user and saved the + changes, click again on "Credentials" on the left panel to go back to + the "Credentials" screen. + + (PS: if you are a GSuite user, you could also select "Internal" instead + of "External" above, but this will restrict API use to Google Workspace + users in your organisation). + + 6. Click on the "+ CREATE CREDENTIALS" button at the top of the screen, + then select "OAuth client ID". + + 7. Choose an application type of "Desktop app" and click "Create". (the default name is fine) + + 8. It will show you a client ID and client secret. Make a note of these. + + (If you selected "External" at Step 5 continue to Step 9. + If you chose "Internal" you don't need to publish and can skip straight to + Step 10 but your destination drive must be part of the same Google Workspace.) + + 9. 
Go to "Oauth consent screen" and then click "PUBLISH APP" button and confirm. + You will also want to add yourself as a test user. + + 10. Provide the noted client ID and client secret to rclone. + + Be aware that, due to the "enhanced security" recently introduced by + Google, you are theoretically expected to "submit your app for verification" + and then wait a few weeks(!) for their response; in practice, you can go right + ahead and use the client ID and client secret with rclone, the only issue will + be a very scary confirmation screen shown when you connect via your browser + for rclone to be able to get its token-id (but as this only happens during + the remote configuration, it's not such a big deal). Keeping the application in + "Testing" will work as well, but the limitation is that any grants will expire + after a week, which can be annoying to refresh constantly. If, for whatever + reason, a short grant time is not a problem, then keeping the application in + testing mode would also be sufficient. + + (Thanks to @balazer on github for these instructions.) + + Sometimes, creation of an OAuth consent in Google API Console fails due to an error message + “The request failed because changes to one of the field of the resource is not supported”. + As a convenient workaround, the necessary Google Drive API key can be created on the + [Python Quickstart](https://developers.google.com/drive/api/v3/quickstart/python) page. + Just push the Enable the Drive API button to receive the Client ID and Secret. + Note that it will automatically create a new project in the API Console. + + # Google Photos + + The rclone backend for [Google Photos](https://www.google.com/photos/about/) is + a specialized backend for transferring photos and videos to and from + Google Photos. + + **NB** The Google Photos API which rclone uses has quite a few + limitations, so please read the [limitations section](#limitations) + carefully to make sure it is suitable for your use. + + ## Configuration + + The initial setup for google cloud storage involves getting a token from Google Photos + which you need to do in your browser. `rclone config` walks you + through it. + + Here is an example of how to make a remote called `remote`. First run: + + rclone config + + This will guide you through an interactive setup process: + +No remotes found, make a new one? n) New remote s) Set configuration +password q) Quit config n/s/q> n name> remote Type of storage to +configure. Enter a string value. Press Enter for the default (""). +Choose a number from below, or type in your own value [snip] XX / Google +Photos  "google photos" [snip] Storage> google photos ** See help for +google photos backend at: https://rclone.org/googlephotos/ ** + +Google Application Client Id Leave blank normally. Enter a string value. +Press Enter for the default (""). client_id> Google Application Client +Secret Leave blank normally. Enter a string value. Press Enter for the +default (""). client_secret> Set to make the Google Photos backend read +only. If you choose read only then rclone will only request read only access -to your photos, otherwise rclone will request full access. +to your photos, otherwise rclone will request full access. Enter a +boolean value (true or false). Press Enter for the default ("false"). +read_only> Edit advanced config? (y/n) y) Yes n) No y/n> n Remote config +Use web browser to automatically authenticate rclone with remote? 
* Say +Y if the machine running rclone has a web browser you can use * Say N if +running rclone on a (remote) machine without web browser access If not +sure try Y. If Y failed, try N. y) Yes n) No y/n> y If your browser +doesn't open automatically go to the following link: +http://127.0.0.1:53682/auth Log in and authorize rclone for access +Waiting for code... Got code -Properties: +*** IMPORTANT: All media items uploaded to Google Photos with rclone *** +are stored in full resolution at original quality. These uploads *** +will count towards storage in your Google Account. -- Config: read_only -- Env Var: RCLONE_GPHOTOS_READ_ONLY -- Type: bool -- Default: false + ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + [remote] type = google photos token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2019-06-28T17:38:04.644930156+01:00"} + ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` -Advanced options + See the remote setup docs for how to set it up on a machine with no Internet browser available. -Here are the Advanced options specific to google photos (Google Photos). + Note that rclone runs a webserver on your local machine to collect the token as returned from Google if using web browser to automatically authenticate. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this may require you to unblock it temporarily if you are running a host firewall, or use manual mode. ---gphotos-token + This remote is called remote and can now be used like this -OAuth Access Token as a JSON blob. + See all the albums in your photos -Properties: + rclone lsd remote:album -- Config: token -- Env Var: RCLONE_GPHOTOS_TOKEN -- Type: string -- Required: false + Make a new album ---gphotos-auth-url + rclone mkdir remote:album/newAlbum -Auth server URL. + List the contents of an album -Leave blank to use the provider defaults. + rclone ls remote:album/newAlbum -Properties: + Sync /home/local/images to the Google Photos, removing any excess files in the album. -- Config: auth_url -- Env Var: RCLONE_GPHOTOS_AUTH_URL -- Type: string -- Required: false + rclone sync --interactive /home/local/image remote:album/newAlbum ---gphotos-token-url + ### Layout -Token server url. + As Google Photos is not a general purpose cloud storage system, the backend is laid out to help you navigate it. -Leave blank to use the provider defaults. + The directories under media show different ways of categorizing the media. Each file will appear multiple times. So if you want to make a backup of your google photos you might choose to backup remote:media/by-month. 
(NB remote:media/by-day is rather slow at the moment so avoid for syncing.)

-Properties:

 + Note that all your photos and videos will appear somewhere under media, but they may not appear under album unless you've put them into albums.

-- Config: token_url
-- Env Var: RCLONE_GPHOTOS_TOKEN_URL
-- Type: string
-- Required: false

 + /
 + - upload
 +     - file1.jpg
 +     - file2.jpg
 +     - ...
 + - media
 +     - all
 +         - file1.jpg
 +         - file2.jpg
 +         - ...
 +     - by-year
 +         - 2000
 +             - file1.jpg
 +             - ...
 +         - 2001
 +             - file2.jpg
 +             - ...
 +         - ...
 +     - by-month
 +         - 2000
 +             - 2000-01
 +                 - file1.jpg
 +                 - ...
 +             - 2000-02
 +                 - file2.jpg
 +                 - ...
 +         - ...
 +     - by-day
 +         - 2000
 +             - 2000-01-01
 +                 - file1.jpg
 +                 - ...
 +             - 2000-01-02
 +                 - file2.jpg
 +                 - ...
 +         - ...
 + - album
 +     - album name
 +     - album name/sub
 + - shared-album
 +     - album name
 +     - album name/sub
 + - feature
 +     - favorites
 +         - file1.jpg
 +         - file2.jpg

---gphotos-read-size

 + There are two writable parts of the tree, the upload directory and sub directories of the album directory.

-Set to read the size of media items.

 + The upload directory is for uploading files you don't want to put into albums. This will be empty to start with and will contain the files you've uploaded for one rclone session only, becoming empty again when you restart rclone. The use case for this would be if you have a load of files you just want to once off dump into Google Photos. For repeated syncing, uploading to album will work better.

-Normally rclone does not read the size of media items since this takes
-another transaction. This isn't necessary for syncing. However rclone
-mount needs to know the size of files in advance of reading them, so
-setting this flag when using rclone mount is recommended if you want to
-read the media.

 + Directories within the album directory are also writeable and you may create new directories (albums) under album. If you copy files with a directory hierarchy in there then rclone will create albums with the / character in them. For example if you do

-Properties:

 + rclone copy /path/to/images remote:album/images

-- Config: read_size
-- Env Var: RCLONE_GPHOTOS_READ_SIZE
-- Type: bool
-- Default: false

 + and the images directory contains

---gphotos-start-year

 + images
 +     - file1.jpg
 +     dir
 +         file2.jpg
 +     dir2
 +         dir3
 +             file3.jpg

-Year limits the photos to be downloaded to those which are uploaded
-after the given year.

 + Then rclone will create the following albums with the following files in

-Properties:

 + - images
 +     - file1.jpg
 + - images/dir
 +     - file2.jpg
 + - images/dir2/dir3
 +     - file3.jpg

-- Config: start_year
-- Env Var: RCLONE_GPHOTOS_START_YEAR
-- Type: int
-- Default: 2000

 + This means that you can use the album path pretty much like a normal filesystem and it is a good target for repeated syncing.

---gphotos-include-archived

 + The shared-album directory shows albums shared with you or by you. This is similar to the Sharing tab in the Google Photos web interface.

-Also view and download archived media.

 + ### Standard options

-By default, rclone does not request archived media. Thus, when syncing,
-archived media is not visible in directory listings or transferred.

 + Here are the Standard options specific to google photos (Google Photos).

-Note that media in albums is always visible and synced, no matter their
-archive status.

 + #### --gphotos-client-id

-With this flag, archived media are always visible in directory listings
-and transferred.

 + OAuth Client Id.

-Without this flag, archived media will not be visible in directory
-listings and won't be transferred.

 + Leave blank normally. 
-Properties: + Properties: -- Config: include_archived -- Env Var: RCLONE_GPHOTOS_INCLUDE_ARCHIVED -- Type: bool -- Default: false + - Config: client_id - Env Var: RCLONE_GPHOTOS_CLIENT_ID - Type: string - Required: false ---gphotos-encoding + #### --gphotos-client-secret -The encoding for the backend. + OAuth Client Secret. -See the encoding section in the overview for more info. + Leave blank normally. -Properties: + Properties: -- Config: encoding -- Env Var: RCLONE_GPHOTOS_ENCODING -- Type: MultiEncoder -- Default: Slash,CrLf,InvalidUtf8,Dot + - Config: client_secret - Env Var: RCLONE_GPHOTOS_CLIENT_SECRET - Type: string - Required: false -Limitations + #### --gphotos-read-only -Only images and videos can be uploaded. If you attempt to upload non -videos or images or formats that Google Photos doesn't understand, -rclone will upload the file, then Google Photos will give an error when -it is put turned into a media item. + Set to make the Google Photos backend read only. -Note that all media items uploaded to Google Photos through the API are -stored in full resolution at "original quality" and will count towards -your storage quota in your Google Account. The API does not offer a way -to upload in "high quality" mode.. + If you choose read only then rclone will only request read only access to your photos, otherwise rclone will request full access. -rclone about is not supported by the Google Photos backend. Backends -without this capability cannot determine free space for an rclone mount -or use policy mfs (most free space) as a member of an rclone union -remote. + Properties: -See List of backends that do not support rclone about See rclone about + - Config: read_only - Env Var: RCLONE_GPHOTOS_READ_ONLY - Type: bool - Default: false -Downloading Images + ### Advanced options -When Images are downloaded this strips EXIF location (according to the -docs and my tests). This is a limitation of the Google Photos API and is -covered by bug #112096115. + Here are the Advanced options specific to google photos (Google Photos). -The current google API does not allow photos to be downloaded at -original resolution. This is very important if you are, for example, -relying on "Google Photos" as a backup of your photos. You will not be -able to use rclone to redownload original images. You could use 'google -takeout' to recover the original photos as a last resort + #### --gphotos-token -Downloading Videos + OAuth Access Token as a JSON blob. -When videos are downloaded they are downloaded in a really compressed -version of the video compared to downloading it via the Google Photos -web interface. This is covered by bug #113672044. + Properties: -Duplicates + - Config: token - Env Var: RCLONE_GPHOTOS_TOKEN - Type: string - Required: false -If a file name is duplicated in a directory then rclone will add the -file ID into its name. So two files called file.jpg would then appear as -file {123456}.jpg and file {ABCDEF}.jpg (the actual IDs are a lot longer -alas!). + #### --gphotos-auth-url -If you upload the same image (with the same binary data) twice then -Google Photos will deduplicate it. However it will retain the filename -from the first upload which may confuse rclone. For example if you -uploaded an image to upload then uploaded the same image to -album/my_album the filename of the image in album/my_album will be what -it was uploaded with initially, not what you uploaded it with to album. -In practise this shouldn't cause too many problems. + Auth server URL. 
-Modified time + Leave blank to use the provider defaults. -The date shown of media in Google Photos is the creation date as -determined by the EXIF information, or the upload date if that is not -known. + Properties: -This is not changeable by rclone and is not the modification date of the -media on local disk. This means that rclone cannot use the dates from -Google Photos for syncing purposes. + - Config: auth_url - Env Var: RCLONE_GPHOTOS_AUTH_URL - Type: string - Required: false -Size + #### --gphotos-token-url -The Google Photos API does not return the size of media. This means that -when syncing to Google Photos, rclone can only do a file existence -check. + Token server url. -It is possible to read the size of the media, but this needs an extra -HTTP HEAD request per media item so is very slow and uses up a lot of -transactions. This can be enabled with the --gphotos-read-size option or -the read_size = true config parameter. + Leave blank to use the provider defaults. -If you want to use the backend with rclone mount you may need to enable -this flag (depending on your OS and application using the photos) -otherwise you may not be able to read media off the mount. You'll need -to experiment to see if it works for you without the flag. + Properties: -Albums + - Config: token_url - Env Var: RCLONE_GPHOTOS_TOKEN_URL - Type: string - Required: false -Rclone can only upload files to albums it created. This is a limitation -of the Google Photos API. + #### --gphotos-read-size -Rclone can remove files it uploaded from albums it created only. + Set to read the size of media items. -Deleting files + Normally rclone does not read the size of media items since this takes another transaction. This isn't necessary for syncing. However rclone mount needs to know the size of files in advance of reading them, so setting this flag when using rclone mount is recommended if you want to read the media. -Rclone can remove files from albums it created, but note that the Google -Photos API does not allow media to be deleted permanently so this media -will still remain. See bug #109759781. + Properties: -Rclone cannot delete files anywhere except under album. + - Config: read_size - Env Var: RCLONE_GPHOTOS_READ_SIZE - Type: bool - Default: false -Deleting albums + #### --gphotos-start-year -The Google Photos API does not support deleting albums - see bug -#135714733. + Year limits the photos to be downloaded to those which are uploaded after the given year. -Hasher + Properties: -Hasher is a special overlay backend to create remotes which handle -checksums for other remotes. It's main functions include: - Emulate hash -types unimplemented by backends - Cache checksums to help with slow -hashing of large local or (S)FTP files - Warm up checksum cache from -external SUM files + - Config: start_year - Env Var: RCLONE_GPHOTOS_START_YEAR - Type: int - Default: 2000 -Getting started + #### --gphotos-include-archived -To use Hasher, first set up the underlying remote following the -configuration instructions for that remote. You can also use a local -pathname instead of a remote. Check that your base remote is working. + Also view and download archived media. -Let's call the base remote myRemote:path here. Note that anything inside -myRemote:path will be handled by hasher and anything outside won't. This -means that if you are using a bucket based remote (S3, B2, Swift) then -you should put the bucket in the remote s3:bucket. + By default, rclone does not request archived media. 
Thus, when syncing, archived media is not visible in directory listings or transferred.

 + Note that media in albums is always visible and synced, no matter their archive status.

 + With this flag, archived media are always visible in directory listings and transferred.

 + Without this flag, archived media will not be visible in directory listings and won't be transferred.
 + 
 + Properties:
 + 
 + - Config: include_archived
 + - Env Var: RCLONE_GPHOTOS_INCLUDE_ARCHIVED
 + - Type: bool
 + - Default: false
 + 
 + #### --gphotos-encoding
 + 
 + The encoding for the backend.
 + 
 + See the encoding section in the overview for more info.
 + 
 + Properties:
 + 
 + - Config: encoding
 + - Env Var: RCLONE_GPHOTOS_ENCODING
 + - Type: MultiEncoder
 + - Default: Slash,CrLf,InvalidUtf8,Dot
 + 
 + ## Limitations
 + 
 + Only images and videos can be uploaded. If you attempt to upload files that are not videos or images, or formats that Google Photos doesn't understand, rclone will upload the file, then Google Photos will give an error when it is turned into a media item.
 + 
 + Note that all media items uploaded to Google Photos through the API are stored in full resolution at "original quality" and will count towards your storage quota in your Google Account. The API does not offer a way to upload in "high quality" mode.
 + 
 + rclone about is not supported by the Google Photos backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.
 + 
 + See the list of backends that do not support rclone about and the rclone about documentation.
 + 
 + ### Downloading Images
 + 
 + When images are downloaded this strips EXIF location (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by bug #112096115.
 + 
 + The current google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on "Google Photos" as a backup of your photos. You will not be able to use rclone to redownload original images. You could use 'google takeout' to recover the original photos as a last resort.
 + 
 + ### Downloading Videos
 + 
 + When videos are downloaded they are downloaded in a really compressed version of the video compared to downloading it via the Google Photos web interface. This is covered by bug #113672044.
 + 
 + ### Duplicates
 + 
 + If a file name is duplicated in a directory then rclone will add the file ID into its name. So two files called file.jpg would then appear as file {123456}.jpg and file {ABCDEF}.jpg (the actual IDs are a lot longer alas!).
 + 
 + If you upload the same image (with the same binary data) twice then Google Photos will deduplicate it. However it will retain the filename from the first upload which may confuse rclone. For example if you uploaded an image to upload then uploaded the same image to album/my_album the filename of the image in album/my_album will be what it was uploaded with initially, not what you uploaded it with to album. In practice this shouldn't cause too many problems.
 + 
 + ### Modified time
 + 
 + The date shown for media in Google Photos is the creation date as determined by the EXIF information, or the upload date if that is not known.
 + 
 + This is not changeable by rclone and is not the modification date of the media on local disk. This means that rclone cannot use the dates from Google Photos for syncing purposes. 
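 + 
 + You can inspect the dates rclone will report with a simple listing, e.g. (a sketch, assuming the remote configured above):
 + 
 +     rclone lsl remote:album/newAlbum
 + 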
 + 
 + ### Size
 + 
 + The Google Photos API does not return the size of media. This means that when syncing to Google Photos, rclone can only do a file existence check.
 + 
 + It is possible to read the size of the media, but this needs an extra HTTP HEAD request per media item so is very slow and uses up a lot of transactions. This can be enabled with the --gphotos-read-size option or the read_size = true config parameter.
 + 
 + If you want to use the backend with rclone mount you may need to enable this flag (depending on your OS and application using the photos) otherwise you may not be able to read media off the mount. You'll need to experiment to see if it works for you without the flag.
 + 
 + ### Albums
 + 
 + Rclone can only upload files to albums it created. This is a limitation of the Google Photos API.
 + 
 + Rclone can only remove files it uploaded from albums it created.
 + 
 + ### Deleting files
 + 
 + Rclone can remove files from albums it created, but note that the Google Photos API does not allow media to be deleted permanently so this media will still remain. See bug #109759781.
 + 
 + Rclone cannot delete files anywhere except under album.
 + 
 + ### Deleting albums
 + 
 + The Google Photos API does not support deleting albums - see bug #135714733.
 + 
 + # Hasher
 + 
 + Hasher is a special overlay backend to create remotes which handle checksums for other remotes. Its main functions include:
 + - Emulate hash types unimplemented by backends
 + - Cache checksums to help with slow hashing of large local or (S)FTP files
 + - Warm up checksum cache from external SUM files
 + 
 + ## Getting started
 + 
 + To use Hasher, first set up the underlying remote following the configuration instructions for that remote. You can also use a local pathname instead of a remote. Check that your base remote is working.
 + 
 + Let's call the base remote myRemote:path here. Note that anything inside myRemote:path will be handled by hasher and anything outside won't. This means that if you are using a bucket based remote (S3, B2, Swift) then you should put the bucket in the remote s3:bucket.
 + 
 + Now proceed to interactive or manual configuration.
 + 
 + ### Interactive configuration
 + 
 + Run rclone config:
 + 
 + ```
 + No remotes found, make a new one?
 + n) New remote
 + s) Set configuration password
 + q) Quit config
 + n/s/q> n
 + name> Hasher1
 + Type of storage to configure.
 + Choose a number from below, or type in your own value
 + [snip]
 + XX / Handle checksums for other remotes
 +    \ "hasher"
 + [snip]
 + Storage> hasher
 + Remote to cache checksums for, like myremote:mypath.
 + Enter a string value. Press Enter for the default ("").
 + remote> myRemote:path
 + Comma separated list of supported checksum types.
 + Enter a string value. Press Enter for the default ("md5,sha1").
 + hashsums> md5
 + Maximum time to keep checksums in cache. 0 = no cache, off = cache forever.
 + max_age> off
 + Edit advanced config? (y/n)
 + y) Yes
 + n) No
 + y/n> n
 + Remote config
 + --------------------
 + [Hasher1]
 + type = hasher
 + remote = myRemote:path
 + hashsums = md5
 + max_age = off
 + --------------------
 + y) Yes this is OK
 + e) Edit this remote
 + d) Delete this remote
 + y/e/d> y
 + ```
 + 
 + ### Manual configuration
 + 
 + Run `rclone config path` to see the path of current active config file,
 + usually `YOURHOME/.config/rclone/rclone.conf`.
 + Open it in your favorite text editor, find section for the base remote
 + and create new section for hasher like in the following examples:
 + 
 +     [Hasher1]
 +     type = hasher
 +     remote = myRemote:path
 +     hashes = md5
 +     max_age = off
 + 
 +     [Hasher2]
 +     type = hasher
 +     remote = /local/path
 +     hashes = dropbox,sha1
 +     max_age = 24h
 + 
 + Hasher takes basically the following parameters:
 + - `remote` is required,
 + - `hashes` is a comma separated list of supported checksums
 +   (by default `md5,sha1`),
 + - `max_age` - maximum time to keep a checksum value in the cache,
 +   `0` will disable caching completely,
 +   `off` will cache "forever" (that is until the files get changed).
 + 
 + Make sure the `remote` has a `:` (colon) in it. If you specify the remote without
 + a colon then rclone will use a local directory of that name. So if you use
 + a remote of `/local/path` then rclone will handle hashes for that directory.
 + If you use `remote = name` literally then rclone will put files
 + **in a directory called `name` located under current directory**.
 + 
 + ## Usage
 + 
 + ### Basic operations
 + 
 + Now you can use it as `Hasher2:subdir/file` instead of base remote.
 + Hasher will transparently update cache with new checksums when a file
 + is fully read or overwritten, like:
 + 
 +     rclone copy External:path/file Hasher:dest/path
 + 
 +     rclone cat Hasher:path/to/file > /dev/null
 + 
 + The way to refresh **all** cached checksums (even unsupported by the base backend)
 + for a subtree is to **re-download** all files in the subtree. For example,
 + use `hashsum --download` using **any** supported hashsum on the command line
 + (we just care to re-read):
 + 
 +     rclone hashsum MD5 --download Hasher:path/to/subtree > /dev/null
 + 
 +     rclone backend dump Hasher:path/to/subtree
 + 
 + You can print or drop hashsum cache using custom backend commands:
 + 
 +     rclone backend dump Hasher:dir/subdir
 + 
 +     rclone backend drop Hasher:
 + 
 + ### Pre-Seed from a SUM File
 + 
 + Hasher supports two backend commands: generic SUM file `import` and faster
 + but less consistent `stickyimport`.
 + 
 +     rclone backend import Hasher:dir/subdir SHA1 /path/to/SHA1SUM [--checkers 4]
 + 
 + Instead of SHA1 it can be any hash supported by the remote. The last argument
 + can point to either a local or an `other-remote:path` text file in SUM format.
 + The command will parse the SUM file, then walk down the path given by the
 + first argument, snapshot current fingerprints and fill in the cache entries
 + correspondingly.
 + - Paths in the SUM file are treated as relative to `hasher:dir/subdir`.
 + - The command will **not** check that supplied values are correct.
 +   You **must know** what you are doing.
 + - This is a one-time action. The SUM file will not get "attached" to the
 +   remote. 
Cache entries can still be overwritten later, should the object's + fingerprint change. + - The tree walk can take long depending on the tree size. You can increase + `--checkers` to make it faster. Or use `stickyimport` if you don't care + about fingerprints and consistency. + +rclone backend stickyimport hasher:path/to/data sha1 +remote:/path/to/sum.sha1 + + + `stickyimport` is similar to `import` but works much faster because it + does not need to stat existing files and skips initial tree walk. + Instead of binding cache entries to file fingerprints it creates _sticky_ + entries bound to the file name alone ignoring size, modification time etc. + Such hash entries can be replaced only by `purge`, `delete`, `backend drop` + or by full re-read/re-write of the files. + + ## Configuration reference + + + ### Standard options + + Here are the Standard options specific to hasher (Better checksums for other remotes). + + #### --hasher-remote + + Remote to cache checksums for (e.g. myRemote:path). + + Properties: + + - Config: remote + - Env Var: RCLONE_HASHER_REMOTE + - Type: string + - Required: true + + #### --hasher-hashes - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n - name> Hasher1 - Type of storage to configure. - Choose a number from below, or type in your own value - [snip] - XX / Handle checksums for other remotes - \ "hasher" - [snip] - Storage> hasher - Remote to cache checksums for, like myremote:mypath. - Enter a string value. Press Enter for the default (""). - remote> myRemote:path Comma separated list of supported checksum types. - Enter a string value. Press Enter for the default ("md5,sha1"). - hashsums> md5 - Maximum time to keep checksums in cache. 0 = no cache, off = cache forever. - max_age> off - Edit advanced config? (y/n) - y) Yes - n) No - y/n> n - Remote config - -------------------- - [Hasher1] - type = hasher - remote = myRemote:path - hashsums = md5 - max_age = off - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y -Manual configuration + Properties: -Run rclone config path to see the path of current active config file, -usually YOURHOME/.config/rclone/rclone.conf. Open it in your favorite -text editor, find section for the base remote and create new section for -hasher like in the following examples: + - Config: hashes + - Env Var: RCLONE_HASHER_HASHES + - Type: CommaSepList + - Default: md5,sha1 - [Hasher1] - type = hasher - remote = myRemote:path - hashes = md5 - max_age = off + #### --hasher-max-age - [Hasher2] - type = hasher - remote = /local/path - hashes = dropbox,sha1 - max_age = 24h + Maximum time to keep checksums in cache (0 = no cache, off = cache forever). -Hasher takes basically the following parameters: - remote is required, - -hashes is a comma separated list of supported checksums (by default -md5,sha1), - max_age - maximum time to keep a checksum value in the -cache, 0 will disable caching completely, off will cache "forever" (that -is until the files get changed). + Properties: -Make sure the remote has : (colon) in. If you specify the remote without -a colon then rclone will use a local directory of that name. So if you -use a remote of /local/path then rclone will handle hashes for that -directory. If you use remote = name literally then rclone will put files -in a directory called name located under current directory. 
+ - Config: max_age + - Env Var: RCLONE_HASHER_MAX_AGE + - Type: Duration + - Default: off -Usage + ### Advanced options -Basic operations + Here are the Advanced options specific to hasher (Better checksums for other remotes). -Now you can use it as Hasher2:subdir/file instead of base remote. Hasher -will transparently update cache with new checksums when a file is fully -read or overwritten, like: + #### --hasher-auto-size - rclone copy External:path/file Hasher:dest/path + Auto-update checksum for files smaller than this size (disabled by default). - rclone cat Hasher:path/to/file > /dev/null + Properties: -The way to refresh all cached checksums (even unsupported by the base -backend) for a subtree is to re-download all files in the subtree. For -example, use hashsum --download using any supported hashsum on the -command line (we just care to re-read): + - Config: auto_size + - Env Var: RCLONE_HASHER_AUTO_SIZE + - Type: SizeSuffix + - Default: 0 - rclone hashsum MD5 --download Hasher:path/to/subtree > /dev/null + ### Metadata - rclone backend dump Hasher:path/to/subtree + Any metadata supported by the underlying remote is read and written. -You can print or drop hashsum cache using custom backend commands: + See the [metadata](https://rclone.org/docs/#metadata) docs for more info. - rclone backend dump Hasher:dir/subdir + ## Backend commands - rclone backend drop Hasher: + Here are the commands specific to the hasher backend. -Pre-Seed from a SUM File + Run them with -Hasher supports two backend commands: generic SUM file import and faster -but less consistent stickyimport. + rclone backend COMMAND remote: - rclone backend import Hasher:dir/subdir SHA1 /path/to/SHA1SUM [--checkers 4] + The help below will explain what arguments each command takes. -Instead of SHA1 it can be any hash supported by the remote. The last -argument can point to either a local or an other-remote:path text file -in SUM format. The command will parse the SUM file, then walk down the -path given by the first argument, snapshot current fingerprints and fill -in the cache entries correspondingly. - Paths in the SUM file are -treated as relative to hasher:dir/subdir. - The command will not check -that supplied values are correct. You must know what you are doing. - -This is a one-time action. The SUM file will not get "attached" to the -remote. Cache entries can still be overwritten later, should the -object's fingerprint change. - The tree walk can take long depending on -the tree size. You can increase --checkers to make it faster. Or use -stickyimport if you don't care about fingerprints and consistency. + See the [backend](https://rclone.org/commands/rclone_backend/) command for more + info on how to pass options and arguments. - rclone backend stickyimport hasher:path/to/data sha1 remote:/path/to/sum.sha1 + These can be run on a running backend using the rc command + [backend/command](https://rclone.org/rc/#backend-command). -stickyimport is similar to import but works much faster because it does -not need to stat existing files and skips initial tree walk. Instead of -binding cache entries to file fingerprints it creates sticky entries -bound to the file name alone ignoring size, modification time etc. Such -hash entries can be replaced only by purge, delete, backend drop or by -full re-read/re-write of the files. 
+ ### drop

-Configuration reference

+ Drop cache

-Standard options

+ rclone backend drop remote: [options] [+]

-Here are the Standard options specific to hasher (Better checksums for
-other remotes).

+ Completely drop checksum cache.
+ Usage Example:
+ rclone backend drop hasher:

---hasher-remote

-Remote to cache checksums for (e.g. myRemote:path).

+ ### dump

-Properties:

+ Dump the database

-- Config: remote
-- Env Var: RCLONE_HASHER_REMOTE
-- Type: string
-- Required: true

+ rclone backend dump remote: [options] [+]

---hasher-hashes

+ Dump cache records covered by the current remote

-Comma separated list of supported checksum types.

+ ### fulldump

-Properties:

+ Full dump of the database

-- Config: hashes
-- Env Var: RCLONE_HASHER_HASHES
-- Type: CommaSepList
-- Default: md5,sha1

+ rclone backend fulldump remote: [options] [+]

---hasher-max-age

+ Dump all cache records in the database

-Maximum time to keep checksums in cache (0 = no cache, off = cache
-forever).

+ ### import

-Properties:

+ Import a SUM file

-- Config: max_age
-- Env Var: RCLONE_HASHER_MAX_AGE
-- Type: Duration
-- Default: off

+ rclone backend import remote: [options] [+]

-Advanced options

+ Amend hash cache from a SUM file and bind checksums to files by size/time.
+ Usage Example:
+ rclone backend import hasher:subdir md5 /path/to/sum.md5

-Here are the Advanced options specific to hasher (Better checksums for
-other remotes).

---hasher-auto-size

+ ### stickyimport

-Auto-update checksum for files smaller than this size (disabled by
-default).

+ Perform fast import of a SUM file

-Properties:

+ rclone backend stickyimport remote: [options] [+]

-- Config: auto_size
-- Env Var: RCLONE_HASHER_AUTO_SIZE
-- Type: SizeSuffix
-- Default: 0

+ Fill hash cache from a SUM file without verifying file fingerprints.
+ Usage Example:
+ rclone backend stickyimport hasher:subdir md5 remote:path/to/sum.md5

-Metadata

-Any metadata supported by the underlying remote is read and written.

-See the metadata docs for more info.

-Backend commands

+ ## Implementation details (advanced)

-Here are the commands specific to the hasher backend.

+ This section explains how various rclone operations work on a hasher remote.

-Run them with

+ **Disclaimer. This section describes current implementation which can
+ change in future rclone versions!**

- rclone backend COMMAND remote:

+ ### Hashsum command

-The help below will explain what arguments each command takes.

+ The `rclone hashsum` (or `md5sum` or `sha1sum`) command will:

-See the backend command for more info on how to pass options and
-arguments.

+ 1. if requested hash is supported by lower level, just pass it.
+ 2. if object size is below `auto_size` then download object and calculate
+ _requested_ hashes on the fly.
+ 3. if unsupported and the size is big enough, build object `fingerprint`
+ (including size, modtime if supported, first-found _other_ hash if any).
+ 4. if the strict match is found in cache for the requested remote, return
+ the stored hash.
+ 5. if remote found but fingerprint mismatched, then purge the entry and
+ proceed to step 6.
+ 6. if remote not found or had no requested hash type or after step 5:
+ download object, calculate all _supported_ hashes on the fly and store
+ in cache; return requested hash.

-These can be run on a running backend using the rc command
-backend/command.
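+ As a short illustration of the steps above (assuming a hasher remote named
+ `Hasher:` whose base remote has no SHA1 support), the first command below may
+ fall through to step 6 and download the file, while the second forces a full
+ re-read and cache refresh regardless:
+
+     rclone hashsum SHA1 Hasher:path/to/file
+     rclone hashsum SHA1 --download Hasher:path/to/file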
+ ### Other operations

-drop

+ - whenever a file is uploaded or downloaded **in full**, capture the stream
+ to calculate all supported hashes on the fly and update database
+ - server-side `move` will update keys of existing cache entries
+ - `deletefile` will remove a single cache entry
+ - `purge` will remove all cache entries under the purged path

-Drop cache

+ Note that setting `max_age = 0` will disable checksum caching completely.

- rclone backend drop remote: [options] [+]

+ If you set `max_age = off`, checksums in cache will never age, unless you
+ fully rewrite or delete the file.

-Completely drop checksum cache. Usage Example: rclone backend drop
-hasher:

+ ### Cache storage

-dump

+ Cached checksums are stored as `bolt` database files under rclone cache
+ directory, usually `~/.cache/rclone/kv/`. Databases are maintained
+ one per _base_ backend, named like `BaseRemote~hasher.bolt`.
+ Checksums for multiple `alias`-es into a single base backend
+ will be stored in the single database. All local paths are treated as
+ aliases into the `local` backend (unless encrypted or chunked) and stored
+ in `~/.cache/rclone/kv/local~hasher.bolt`.
+ Databases can be shared between multiple rclone processes.

-Dump the database

+ # HDFS

- rclone backend dump remote: [options] [+]

+ [HDFS](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) is a
+ distributed file-system, part of the [Apache Hadoop](https://hadoop.apache.org/) framework.

-Dump cache records covered by the current remote

+ Paths are specified as `remote:` or `remote:path/to/dir`.

-fulldump

+ ## Configuration

-Full dump of the database

+ Here is an example of how to make a remote called `remote`. First run:

- rclone backend fulldump remote: [options] [+]

+ rclone config

-Dump all cache records in the database

+ This will guide you through an interactive setup process:

-import

+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+ name> remote
+ Type of storage to configure.
+ Enter a string value. Press Enter for the default ("").
+ Choose a number from below, or type in your own value
+ [skip]
+ XX / Hadoop distributed file system
+    \ "hdfs"
+ [skip]
+ Storage> hdfs
+ ** See help for hdfs backend at: https://rclone.org/hdfs/ **

-Import a SUM file

+ hadoop name node and port
+ Enter a string value. Press Enter for the default ("").
+ Choose a number from below, or type in your own value
+  1 / Connect to host namenode at port 8020
+    \ "namenode:8020"
+ namenode> namenode.hadoop:8020
+ hadoop user name
+ Enter a string value. Press Enter for the default ("").
+ Choose a number from below, or type in your own value
+  1 / Connect to hdfs as root
+    \ "root"
+ username> root
+ Edit advanced config? (y/n)
+ y) Yes
+ n) No (default)
+ y/n> n
+ Remote config
+ --------------------
+ [remote]
+ type = hdfs
+ namenode = namenode.hadoop:8020
+ username = root
+ --------------------
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+ Current remotes:

- rclone backend import remote: [options] [+]

+ Name   Type
+ ====   ====
+ hadoop hdfs

-Amend hash cache from a SUM file and bind checksums to files by
-size/time.
Usage Example: rclone backend import hasher:subdir md5
-/path/to/sum.md5

+ e) Edit existing remote
+ n) New remote
+ d) Delete remote
+ r) Rename remote
+ c) Copy remote
+ s) Set configuration password
+ q) Quit config
+ e/n/d/r/c/s/q> q

-stickyimport

-Perform fast import of a SUM file

+ This remote is called `remote` and can now be used like this

- rclone backend stickyimport remote: [options] [+]

+ See all the top level directories

-Fill hash cache from a SUM file without verifying file fingerprints.
-Usage Example: rclone backend stickyimport hasher:subdir md5
-remote:path/to/sum.md5

+ rclone lsd remote:

-Implementation details (advanced)

+ List the contents of a directory

-This section explains how various rclone operations work on a hasher
-remote.

+ rclone ls remote:directory

-Disclaimer. This section describes current implementation which can
-change in future rclone versions!.

+ Sync the remote `directory` to `/home/local/directory`, deleting any excess files.

-Hashsum command

+ rclone sync --interactive remote:directory /home/local/directory

-The rclone hashsum (or md5sum or sha1sum) command will:

+ ### Setting up your own HDFS instance for testing

-1. if requested hash is supported by lower level, just pass it.
-2. if object size is below auto_size then download object and calculate
-    requested hashes on the fly.
-3. if unsupported and the size is big enough, build object fingerprint
-    (including size, modtime if supported, first-found other hash if
-    any).
-4. if the strict match is found in cache for the requested remote,
-    return the stored hash.
-5. if remote found but fingerprint mismatched, then purge the entry and
-    proceed to step 6.
-6. if remote not found or had no requested hash type or after step 5:
-    download object, calculate all supported hashes on the fly and store
-    in cache; return requested hash.

+ You may start with a [manual setup](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html)
+ or use the docker image from the tests:

-Other operations

+ If you want to build the docker image

-- whenever a file is uploaded or downloaded in full, capture the
-    stream to calculate all supported hashes on the fly and update
-    database
-- server-side move will update keys of existing cache entries
-- deletefile will remove a single cache entry
-- purge will remove all cache entries under the purged path

+ git clone https://github.com/rclone/rclone.git
+ cd rclone/fstest/testserver/images/test-hdfs
+ docker build --rm -t rclone/test-hdfs .

-Note that setting max_age = 0 will disable checksum caching completely.
-If you set max_age = off, checksums in cache will never age, unless you
-fully rewrite or delete the file.

+ Or you can just use the latest one pushed

-Cache storage

+ docker run --rm --name "rclone-hdfs" -p 127.0.0.1:9866:9866 -p 127.0.0.1:8020:8020 --hostname "rclone-hdfs" rclone/test-hdfs

-Cached checksums are stored as bolt database files under rclone cache
-directory, usually ~/.cache/rclone/kv/. Databases are maintained one per
-base backend, named like BaseRemote~hasher.bolt. Checksums for multiple
-alias-es into a single base backend will be stored in the single
-database. All local paths are treated as aliases into the local backend
-(unless encrypted or chunked) and stored in
-~/.cache/rclone/kv/local~hasher.bolt. Databases can be shared between
-multiple rclone processes.

-HDFS

+ **NB** it needs a few seconds to start up.

-HDFS is a distributed file-system, part of the Apache Hadoop framework.
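+ One way to tell when the container has finished starting up is to follow its
+ logs (plain Docker, nothing rclone-specific):
+
+     docker logs -f rclone-hdfs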
+ For this docker image the remote needs to be configured like this:

-Paths are specified as remote: or remote:path/to/dir.

+ [remote]
+ type = hdfs
+ namenode = 127.0.0.1:8020
+ username = root

-Configuration

-Here is an example of how to make a remote called remote. First run:

+ You can stop this image with `docker kill rclone-hdfs` (**NB** it does not use volumes, so all data
+ uploaded will be lost.)

- rclone config

+ ### Modified time

-This will guide you through an interactive setup process:

+ Time accurate to 1 second is stored.

- No remotes found, make a new one?
- n) New remote
- s) Set configuration password
- q) Quit config
- n/s/q> n
- name> remote
- Type of storage to configure.
- Enter a string value. Press Enter for the default ("").
- Choose a number from below, or type in your own value
- [skip]
- XX / Hadoop distributed file system
-    \ "hdfs"
- [skip]
- Storage> hdfs
- ** See help for hdfs backend at: https://rclone.org/hdfs/ **

+ ### Checksum

- hadoop name node and port
- Enter a string value. Press Enter for the default ("").
- Choose a number from below, or type in your own value
- 1 / Connect to host namenode at port 8020
-    \ "namenode:8020"
- namenode> namenode.hadoop:8020
- hadoop user name
- Enter a string value. Press Enter for the default ("").
- Choose a number from below, or type in your own value
- 1 / Connect to hdfs as root
-    \ "root"
- username> root
- Edit advanced config? (y/n)
- y) Yes
- n) No (default)
- y/n> n
- Remote config
- --------------------
- [remote]
- type = hdfs
- namenode = namenode.hadoop:8020
- username = root
- --------------------
- y) Yes this is OK (default)
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- Current remotes:

+ No checksums are implemented.

- Name Type
- ==== ====
- hadoop hdfs

+ ### Usage information

- e) Edit existing remote
- n) New remote
- d) Delete remote
- r) Rename remote
- c) Copy remote
- s) Set configuration password
- q) Quit config
- e/n/d/r/c/s/q> q

+ You can use the `rclone about remote:` command which will display filesystem size and current usage.

-This remote is called remote and can now be used like this

+ ### Restricted filename characters

-See all the top level directories

+ In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
+ the following characters are also replaced:

- rclone lsd remote:

+ | Character | Value | Replacement |
+ | --------- |:-----:|:-----------:|
+ | : | 0x3A | ： |

-List the contents of a directory

+ Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8).

- rclone ls remote:directory

-Sync the remote directory to /home/local/directory, deleting any excess
-files.

+ ### Standard options

- rclone sync --interactive remote:directory /home/local/directory

+ Here are the Standard options specific to hdfs (Hadoop distributed file system).

-Setting up your own HDFS instance for testing

+ #### --hdfs-namenode

-You may start with a manual setup or use the docker image from the
-tests:

+ Hadoop name node and port.

-If you want to build the docker image

+ E.g. "namenode:8020" to connect to host namenode at port 8020.

- git clone https://github.com/rclone/rclone.git
- cd rclone/fstest/testserver/images/test-hdfs
- docker build --rm -t rclone/test-hdfs .
+ Properties: -Or you can just use the latest one pushed + - Config: namenode + - Env Var: RCLONE_HDFS_NAMENODE + - Type: string + - Required: true - docker run --rm --name "rclone-hdfs" -p 127.0.0.1:9866:9866 -p 127.0.0.1:8020:8020 --hostname "rclone-hdfs" rclone/test-hdfs + #### --hdfs-username -NB it need few seconds to startup. + Hadoop user name. -For this docker image the remote needs to be configured like this: + Properties: - [remote] - type = hdfs - namenode = 127.0.0.1:8020 - username = root + - Config: username + - Env Var: RCLONE_HDFS_USERNAME + - Type: string + - Required: false + - Examples: + - "root" + - Connect to hdfs as root. -You can stop this image with docker kill rclone-hdfs (NB it does not use -volumes, so all data uploaded will be lost.) + ### Advanced options -Modified time + Here are the Advanced options specific to hdfs (Hadoop distributed file system). -Time accurate to 1 second is stored. + #### --hdfs-service-principal-name -Checksum + Kerberos service principal name for the namenode. -No checksums are implemented. + Enables KERBEROS authentication. Specifies the Service Principal Name + (SERVICE/FQDN) for the namenode. E.g. \"hdfs/namenode.hadoop.docker\" + for namenode running as service 'hdfs' with FQDN 'namenode.hadoop.docker'. -Usage information + Properties: -You can use the rclone about remote: command which will display -filesystem size and current usage. + - Config: service_principal_name + - Env Var: RCLONE_HDFS_SERVICE_PRINCIPAL_NAME + - Type: string + - Required: false -Restricted filename characters + #### --hdfs-data-transfer-protection -In addition to the default restricted characters set the following -characters are also replaced: + Kerberos data transfer protection: authentication|integrity|privacy. - Character Value Replacement - ----------- ------- ------------- - : 0x3A : + Specifies whether or not authentication, data signature integrity + checks, and wire encryption are required when communicating with + the datanodes. Possible values are 'authentication', 'integrity' + and 'privacy'. Used only with KERBEROS enabled. -Invalid UTF-8 bytes will also be replaced. + Properties: -Standard options + - Config: data_transfer_protection + - Env Var: RCLONE_HDFS_DATA_TRANSFER_PROTECTION + - Type: string + - Required: false + - Examples: + - "privacy" + - Ensure authentication, integrity and encryption enabled. -Here are the Standard options specific to hdfs (Hadoop distributed file -system). + #### --hdfs-encoding ---hdfs-namenode + The encoding for the backend. -Hadoop name node and port. + See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. -E.g. "namenode:8020" to connect to host namenode at port 8020. + Properties: -Properties: + - Config: encoding + - Env Var: RCLONE_HDFS_ENCODING + - Type: MultiEncoder + - Default: Slash,Colon,Del,Ctl,InvalidUtf8,Dot -- Config: namenode -- Env Var: RCLONE_HDFS_NAMENODE -- Type: string -- Required: true ---hdfs-username -Hadoop user name. + ## Limitations -Properties: + - No server-side `Move` or `DirMove`. + - Checksums not implemented. -- Config: username -- Env Var: RCLONE_HDFS_USERNAME -- Type: string -- Required: false -- Examples: - - "root" - - Connect to hdfs as root. + # HiDrive -Advanced options + Paths are specified as `remote:path` -Here are the Advanced options specific to hdfs (Hadoop distributed file -system). + Paths may be as deep as required, e.g. `remote:directory/subdirectory`. 
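+ To round off the HDFS options above: a Kerberos-enabled remote would combine
+ them in `rclone.conf` roughly like this (an illustrative sketch; the namenode
+ and principal names are placeholders):
+
+     [krbhdfs]
+     type = hdfs
+     namenode = namenode.hadoop:8020
+     username = root
+     service_principal_name = hdfs/namenode.hadoop.docker
+     data_transfer_protection = privacy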
---hdfs-service-principal-name

+ The initial setup for hidrive involves getting a token from HiDrive
+ which you need to do in your browser.
+ `rclone config` walks you through it.

-Kerberos service principal name for the namenode.

+ ## Configuration

-Enables KERBEROS authentication. Specifies the Service Principal Name
-(SERVICE/FQDN) for the namenode. E.g. "hdfs/namenode.hadoop.docker" for
-namenode running as service 'hdfs' with FQDN 'namenode.hadoop.docker'.

+ Here is an example of how to make a remote called `remote`. First run:

-Properties:

+ rclone config

-- Config: service_principal_name
-- Env Var: RCLONE_HDFS_SERVICE_PRINCIPAL_NAME
-- Type: string
-- Required: false

+ This will guide you through an interactive setup process:

---hdfs-data-transfer-protection

+ No remotes found - make a new one
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+ name> remote
+ Type of storage to configure.
+ Choose a number from below, or type in your own value
+ [snip]
+ XX / HiDrive
+    \ "hidrive"
+ [snip]
+ Storage> hidrive
+ OAuth Client Id - Leave blank normally.
+ client_id>
+ OAuth Client Secret - Leave blank normally.
+ client_secret>
+ Access permissions that rclone should use when requesting access from HiDrive. Leave blank normally.
+ scope_access>
+ Edit advanced config?
+ y/n> n
+ Use web browser to automatically authenticate rclone with remote?
+  * Say Y if the machine running rclone has a web browser you can use
+  * Say N if running rclone on a (remote) machine without web browser access
+ If not sure try Y. If Y failed, try N.
+ y/n> y
+ If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=xxxxxxxxxxxxxxxxxxxxxx
+ Log in and authorize rclone for access
+ Waiting for code...
+ Got code
+ --------------------
+ [remote]
+ type = hidrive
+ token = {"access_token":"xxxxxxxxxxxxxxxxxxxx","token_type":"Bearer","refresh_token":"xxxxxxxxxxxxxxxxxxxxxxx","expiry":"xxxxxxxxxxxxxxxxxxxxxxx"}
+ --------------------
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y

-Kerberos data transfer protection: authentication|integrity|privacy.

-Specifies whether or not authentication, data signature integrity
-checks, and wire encryption are required when communicating with the
-datanodes. Possible values are 'authentication', 'integrity' and
-'privacy'. Used only with KERBEROS enabled.

+ **You should be aware that OAuth-tokens can be used to access your account
+ and hence should not be shared with other persons.**
+ See the [below section](#keeping-your-tokens-safe) for more information.

-Properties:

+ See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
+ machine with no Internet browser available.

-- Config: data_transfer_protection
-- Env Var: RCLONE_HDFS_DATA_TRANSFER_PROTECTION
-- Type: string
-- Required: false
-- Examples:
-    - "privacy"
-        - Ensure authentication, integrity and encryption enabled.

+ Note that rclone runs a webserver on your local machine to collect the
+ token as returned from HiDrive. This only runs from the moment it opens
+ your browser to the moment you get back the verification code.
+ The webserver runs on `http://127.0.0.1:53682/`.
+ If local port `53682` is protected by a firewall you may need to temporarily
+ unblock the firewall to complete authorization.

---hdfs-encoding

+ Once configured you can then use `rclone` like this,

-The encoding for the backend.

+ List directories in top level of your HiDrive root folder

-See the encoding section in the overview for more info.
+ rclone lsd remote:

-Properties:

+ List all the files in your HiDrive filesystem

-- Config: encoding
-- Env Var: RCLONE_HDFS_ENCODING
-- Type: MultiEncoder
-- Default: Slash,Colon,Del,Ctl,InvalidUtf8,Dot

+ rclone ls remote:

-Limitations

+ To copy a local directory to a HiDrive directory called backup

-- No server-side Move or DirMove.
-- Checksums not implemented.

+ rclone copy /home/source remote:backup

-HiDrive

+ ### Keeping your tokens safe

-Paths are specified as remote:path

+ Any OAuth-tokens will be stored by rclone in the remote's configuration file as unencrypted text.
+ Anyone can use a valid refresh-token to access your HiDrive filesystem without knowing your password.
+ Therefore you should make sure no one else can access your configuration.

-Paths may be as deep as required, e.g. remote:directory/subdirectory.

+ It is possible to encrypt rclone's configuration file.
+ You can find information on securing your configuration file by viewing the [configuration encryption docs](https://rclone.org/docs/#configuration-encryption).

-The initial setup for hidrive involves getting a token from HiDrive
-which you need to do in your browser. rclone config walks you through
-it.

+ ### Invalid refresh token

-Configuration

+ As can be verified [here](https://developer.hidrive.com/basics-flows/),
+ each `refresh_token` (for Native Applications) is valid for 60 days.
+ If used to access HiDrive, its validity will be automatically extended.

-Here is an example of how to make a remote called remote. First run:

+ This means that if you

- rclone config

+ * Don't use the HiDrive remote for 60 days

-This will guide you through an interactive setup process:

+ then rclone will return an error which indicates that the refresh token
+ is *invalid* or *expired*.
+
+ To fix this you will need to authorize rclone to access your HiDrive account again.
+
+ Using
+
+ rclone config reconnect remote:
+
+ the process is very similar to the process of initial setup exemplified before.
+
+ ### Modified time and hashes
+
+ HiDrive allows modification times to be set on objects accurate to 1 second.
+
+ HiDrive supports [its own hash type](https://static.hidrive.com/dev/0001)
+ which is used to verify the integrity of file contents after successful transfers.
+
+ ### Restricted filename characters
+
+ HiDrive cannot store files or folders that include
+ `/` (0x2F) or null-bytes (0x00) in their name.
+ Any other characters can be used in the names of files or folders.
+ Additionally, files or folders cannot be named either of the following: `.` or `..`
+
+ Therefore rclone will automatically replace these characters,
+ if files or folders are stored or accessed with such names.
+
+ You can read about how this filename encoding works in general
+ [here](overview/#restricted-filenames).
+
+ Keep in mind that HiDrive only supports file or folder names
+ with a length of 255 characters or less.
+
+ ### Transfers
+
+ HiDrive limits file sizes per single request to a maximum of 2 GiB.
+ To allow storage of larger files and allow for better upload performance,
+ the hidrive backend will use a chunked transfer for files larger than 96 MiB.
+ Rclone will upload multiple parts/chunks of the file at the same time.
+ Chunks in the process of being uploaded are buffered in memory,
+ so you may want to restrict this behaviour on systems with limited resources.
+ + You can customize this behaviour using the following options: + + * `chunk_size`: size of file parts + * `upload_cutoff`: files larger or equal to this in size will use a chunked transfer + * `upload_concurrency`: number of file-parts to upload at the same time + + See the below section about configuration options for more details. + + ### Root folder + + You can set the root folder for rclone. + This is the directory that rclone considers to be the root of your HiDrive. + + Usually, you will leave this blank, and rclone will use the root of the account. + + However, you can set this to restrict rclone to a specific folder hierarchy. + + This works by prepending the contents of the `root_prefix` option + to any paths accessed by rclone. + For example, the following two ways to access the home directory are equivalent: + + rclone lsd --hidrive-root-prefix="/users/test/" remote:path + + rclone lsd remote:/users/test/path + + See the below section about configuration options for more details. + + ### Directory member count + + By default, rclone will know the number of directory members contained in a directory. + For example, `rclone lsd` uses this information. + + The acquisition of this information will result in additional time costs for HiDrive's API. + When dealing with large directory structures, it may be desirable to circumvent this time cost, + especially when this information is not explicitly needed. + For this, the `disable_fetching_member_count` option can be used. + + See the below section about configuration options for more details. + + + ### Standard options + + Here are the Standard options specific to hidrive (HiDrive). + + #### --hidrive-client-id + + OAuth Client Id. - No remotes found - make a new one - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n - name> remote - Type of storage to configure. - Choose a number from below, or type in your own value - [snip] - XX / HiDrive - \ "hidrive" - [snip] - Storage> hidrive - OAuth Client Id - Leave blank normally. - client_id> - OAuth Client Secret - Leave blank normally. - client_secret> - Access permissions that rclone should use when requesting access from HiDrive. Leave blank normally. - scope_access> - Edit advanced config? - y/n> n - Use web browser to automatically authenticate rclone with remote? - * Say Y if the machine running rclone has a web browser you can use - * Say N if running rclone on a (remote) machine without web browser access - If not sure try Y. If Y failed, try N. - y/n> y - If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=xxxxxxxxxxxxxxxxxxxxxx - Log in and authorize rclone for access - Waiting for code... - Got code - -------------------- - [remote] - type = hidrive - token = {"access_token":"xxxxxxxxxxxxxxxxxxxx","token_type":"Bearer","refresh_token":"xxxxxxxxxxxxxxxxxxxxxxx","expiry":"xxxxxxxxxxxxxxxxxxxxxxx"} - -------------------- - y) Yes this is OK (default) - e) Edit this remote - d) Delete this remote - y/e/d> y -You should be aware that OAuth-tokens can be used to access your account -and hence should not be shared with other persons. See the below section -for more information. + Properties: -See the remote setup docs for how to set it up on a machine with no -Internet browser available. + - Config: client_id + - Env Var: RCLONE_HIDRIVE_CLIENT_ID + - Type: string + - Required: false -Note that rclone runs a webserver on your local machine to collect the -token as returned from HiDrive. 
This only runs from the moment it opens -your browser to the moment you get back the verification code. The -webserver runs on http://127.0.0.1:53682/. If local port 53682 is -protected by a firewall you may need to temporarily unblock the firewall -to complete authorization. + #### --hidrive-client-secret -Once configured you can then use rclone like this, + OAuth Client Secret. -List directories in top level of your HiDrive root folder + Leave blank normally. - rclone lsd remote: + Properties: -List all the files in your HiDrive filesystem + - Config: client_secret + - Env Var: RCLONE_HIDRIVE_CLIENT_SECRET + - Type: string + - Required: false - rclone ls remote: + #### --hidrive-scope-access -To copy a local directory to a HiDrive directory called backup + Access permissions that rclone should use when requesting access from HiDrive. - rclone copy /home/source remote:backup + Properties: -Keeping your tokens safe + - Config: scope_access + - Env Var: RCLONE_HIDRIVE_SCOPE_ACCESS + - Type: string + - Default: "rw" + - Examples: + - "rw" + - Read and write access to resources. + - "ro" + - Read-only access to resources. -Any OAuth-tokens will be stored by rclone in the remote's configuration -file as unencrypted text. Anyone can use a valid refresh-token to access -your HiDrive filesystem without knowing your password. Therefore you -should make sure no one else can access your configuration. + ### Advanced options -It is possible to encrypt rclone's configuration file. You can find -information on securing your configuration file by viewing the -configuration encryption docs. + Here are the Advanced options specific to hidrive (HiDrive). -Invalid refresh token + #### --hidrive-token -As can be verified here, each refresh_token (for Native Applications) is -valid for 60 days. If used to access HiDrivei, its validity will be -automatically extended. + OAuth Access Token as a JSON blob. -This means that if you + Properties: -- Don't use the HiDrive remote for 60 days + - Config: token + - Env Var: RCLONE_HIDRIVE_TOKEN + - Type: string + - Required: false -then rclone will return an error which includes a text that implies the -refresh token is invalid or expired. + #### --hidrive-auth-url -To fix this you will need to authorize rclone to access your HiDrive -account again. + Auth server URL. -Using + Leave blank to use the provider defaults. - rclone config reconnect remote: + Properties: -the process is very similar to the process of initial setup exemplified -before. + - Config: auth_url + - Env Var: RCLONE_HIDRIVE_AUTH_URL + - Type: string + - Required: false -Modified time and hashes + #### --hidrive-token-url -HiDrive allows modification times to be set on objects accurate to 1 -second. + Token server url. -HiDrive supports its own hash type which is used to verify the integrity -of file contents after successful transfers. + Leave blank to use the provider defaults. -Restricted filename characters + Properties: -HiDrive cannot store files or folders that include / (0x2F) or -null-bytes (0x00) in their name. Any other characters can be used in the -names of files or folders. Additionally, files or folders cannot be -named either of the following: . or .. + - Config: token_url + - Env Var: RCLONE_HIDRIVE_TOKEN_URL + - Type: string + - Required: false -Therefore rclone will automatically replace these characters, if files -or folders are stored or accessed with such names. + #### --hidrive-scope-role -You can read about how this filename encoding works in general here. 
+ User-level that rclone should use when requesting access from HiDrive. -Keep in mind that HiDrive only supports file or folder names with a -length of 255 characters or less. + Properties: -Transfers + - Config: scope_role + - Env Var: RCLONE_HIDRIVE_SCOPE_ROLE + - Type: string + - Default: "user" + - Examples: + - "user" + - User-level access to management permissions. + - This will be sufficient in most cases. + - "admin" + - Extensive access to management permissions. + - "owner" + - Full access to management permissions. -HiDrive limits file sizes per single request to a maximum of 2 GiB. To -allow storage of larger files and allow for better upload performance, -the hidrive backend will use a chunked transfer for files larger than 96 -MiB. Rclone will upload multiple parts/chunks of the file at the same -time. Chunks in the process of being uploaded are buffered in memory, so -you may want to restrict this behaviour on systems with limited -resources. + #### --hidrive-root-prefix -You can customize this behaviour using the following options: + The root/parent folder for all paths. -- chunk_size: size of file parts -- upload_cutoff: files larger or equal to this in size will use a - chunked transfer -- upload_concurrency: number of file-parts to upload at the same time + Fill in to use the specified folder as the parent for all paths given to the remote. + This way rclone can use any folder as its starting point. -See the below section about configuration options for more details. + Properties: -Root folder + - Config: root_prefix + - Env Var: RCLONE_HIDRIVE_ROOT_PREFIX + - Type: string + - Default: "/" + - Examples: + - "/" + - The topmost directory accessible by rclone. + - This will be equivalent with "root" if rclone uses a regular HiDrive user account. + - "root" + - The topmost directory of the HiDrive user account + - "" + - This specifies that there is no root-prefix for your paths. + - When using this you will always need to specify paths to this remote with a valid parent e.g. "remote:/path/to/dir" or "remote:root/path/to/dir". -You can set the root folder for rclone. This is the directory that -rclone considers to be the root of your HiDrive. + #### --hidrive-endpoint -Usually, you will leave this blank, and rclone will use the root of the -account. + Endpoint for the service. -However, you can set this to restrict rclone to a specific folder -hierarchy. + This is the URL that API-calls will be made to. -This works by prepending the contents of the root_prefix option to any -paths accessed by rclone. For example, the following two ways to access -the home directory are equivalent: + Properties: - rclone lsd --hidrive-root-prefix="/users/test/" remote:path + - Config: endpoint + - Env Var: RCLONE_HIDRIVE_ENDPOINT + - Type: string + - Default: "https://api.hidrive.strato.com/2.1" - rclone lsd remote:/users/test/path + #### --hidrive-disable-fetching-member-count -See the below section about configuration options for more details. + Do not fetch number of objects in directories unless it is absolutely necessary. -Directory member count + Requests may be faster if the number of objects in subdirectories is not fetched. -By default, rclone will know the number of directory members contained -in a directory. For example, rclone lsd uses this information. + Properties: -The acquisition of this information will result in additional time costs -for HiDrive's API. 
When dealing with large directory structures, it may
-be desirable to circumvent this time cost, especially when this
-information is not explicitly needed. For this, the
-disable_fetching_member_count option can be used.

+ - Config: disable_fetching_member_count
+ - Env Var: RCLONE_HIDRIVE_DISABLE_FETCHING_MEMBER_COUNT
+ - Type: bool
+ - Default: false

-See the below section about configuration options for more details.

+ #### --hidrive-chunk-size

-Standard options

+ Chunksize for chunked uploads.

-Here are the Standard options specific to hidrive (HiDrive).

+ Any files larger than the configured cutoff (or files of unknown size) will be uploaded in chunks of this size.

---hidrive-client-id

+ The upper limit for this is 2147483647 bytes (about 2.000Gi).
+ That is the maximum amount of bytes a single upload-operation will support.
+ Setting this above the upper limit or to a negative value will cause uploads to fail.

-OAuth Client Id.

+ Setting this to larger values may increase the upload speed at the cost of using more memory.
+ It can be set to smaller values to save on memory.

-Leave blank normally.

+ Properties:

-Properties:

+ - Config: chunk_size
+ - Env Var: RCLONE_HIDRIVE_CHUNK_SIZE
+ - Type: SizeSuffix
+ - Default: 48Mi

-- Config: client_id
-- Env Var: RCLONE_HIDRIVE_CLIENT_ID
-- Type: string
-- Required: false

+ #### --hidrive-upload-cutoff

---hidrive-client-secret

+ Cutoff/Threshold for chunked uploads.

-OAuth Client Secret.

+ Any files larger than this will be uploaded in chunks of the configured chunksize.

-Leave blank normally.

+ The upper limit for this is 2147483647 bytes (about 2.000Gi).
+ That is the maximum amount of bytes a single upload-operation will support.
+ Setting this above the upper limit will cause uploads to fail.

-Properties:

+ Properties:

-- Config: client_secret
-- Env Var: RCLONE_HIDRIVE_CLIENT_SECRET
-- Type: string
-- Required: false

+ - Config: upload_cutoff
+ - Env Var: RCLONE_HIDRIVE_UPLOAD_CUTOFF
+ - Type: SizeSuffix
+ - Default: 96Mi

---hidrive-scope-access

+ #### --hidrive-upload-concurrency

-Access permissions that rclone should use when requesting access from
-HiDrive.

+ Concurrency for chunked uploads.

-Properties:

+ This is the upper limit for how many transfers for the same file are running concurrently.
+ Setting this to a value smaller than 1 will cause uploads to deadlock.

-- Config: scope_access
-- Env Var: RCLONE_HIDRIVE_SCOPE_ACCESS
-- Type: string
-- Default: "rw"
-- Examples:
-    - "rw"
-        - Read and write access to resources.
-    - "ro"
-        - Read-only access to resources.

+ If you are uploading small numbers of large files over high-speed links
+ and these uploads do not fully utilize your bandwidth, then increasing
+ this may help to speed up the transfers.

-Advanced options

+ Properties:

-Here are the Advanced options specific to hidrive (HiDrive).

+ - Config: upload_concurrency
+ - Env Var: RCLONE_HIDRIVE_UPLOAD_CONCURRENCY
+ - Type: int
+ - Default: 4

---hidrive-token

+ #### --hidrive-encoding

-OAuth Access Token as a JSON blob.

+ The encoding for the backend.

-Properties:

+ See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.

-- Config: token
-- Env Var: RCLONE_HIDRIVE_TOKEN
-- Type: string
-- Required: false

+ Properties:

---hidrive-auth-url

+ - Config: encoding
+ - Env Var: RCLONE_HIDRIVE_ENCODING
+ - Type: MultiEncoder
+ - Default: Slash,Dot

-Auth server URL.

-Leave blank to use the provider defaults.
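+ For example, to trade memory for upload throughput on a fast link, the three
+ upload options above could be combined like this (an illustrative sketch, not
+ a tuned recommendation):
+
+     rclone copy /home/source remote:backup --hidrive-chunk-size 96M --hidrive-upload-cutoff 96M --hidrive-upload-concurrency 8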
-Properties:

+ ## Limitations

-- Config: auth_url
-- Env Var: RCLONE_HIDRIVE_AUTH_URL
-- Type: string
-- Required: false

+ ### Symbolic links

---hidrive-token-url

+ HiDrive is able to store symbolic links (*symlinks*) by design,
+ for example, when unpacked from a zip archive.

-Token server url.

+ There exists no direct mechanism to manage native symlinks in remotes.
+ As such this implementation has chosen to ignore any native symlinks present in the remote.
+ rclone will not be able to access or show any symlinks stored in the hidrive-remote.
+ This means symlinks cannot be individually removed, copied, or moved,
+ except when removing, copying, or moving the parent folder.

-Leave blank to use the provider defaults.

+ *This does not affect the `.rclonelink`-files
+ that rclone uses to encode and store symbolic links.*

-Properties:

+ ### Sparse files

-- Config: token_url
-- Env Var: RCLONE_HIDRIVE_TOKEN_URL
-- Type: string
-- Required: false

+ It is possible to store sparse files in HiDrive.

---hidrive-scope-role

+ Note that copying a sparse file will expand the holes
+ into null-byte (0x00) regions that will then consume disk space.
+ Likewise, when downloading a sparse file,
+ the resulting file will have null-byte regions in the place of file holes.

-User-level that rclone should use when requesting access from HiDrive.

+ # HTTP

-Properties:

+ The HTTP remote is a read only remote for reading files of a
+ webserver. The webserver should provide file listings which rclone
+ will read and turn into a remote. This has been tested with common
+ webservers such as Apache/Nginx/Caddy and will likely work with file
+ listings from most web servers. (If it doesn't then please file an
+ issue, or send a pull request!)

-- Config: scope_role
-- Env Var: RCLONE_HIDRIVE_SCOPE_ROLE
-- Type: string
-- Default: "user"
-- Examples:
-    - "user"
-        - User-level access to management permissions.
-        - This will be sufficient in most cases.
-    - "admin"
-        - Extensive access to management permissions.
-    - "owner"
-        - Full access to management permissions.

+ Paths are specified as `remote:` or `remote:path`.

---hidrive-root-prefix

+ The `remote:` represents the configured [url](#http-url), and any path following
+ it will be resolved relative to this url, according to the URL standard. This
+ means with remote url `https://beta.rclone.org/branch` and path `fix`, the
+ resolved URL will be `https://beta.rclone.org/branch/fix`, while with path
+ `/fix` the resolved URL will be `https://beta.rclone.org/fix` as the absolute
+ path is resolved from the root of the domain.

-The root/parent folder for all paths.

+ If the path following the `remote:` ends with `/` it will be assumed to point
+ to a directory. If the path does not end with `/`, then a HEAD request is sent
+ and the response used to decide if it is treated as a file or a directory
+ (run with `-vv` to see details). When [--http-no-head](#http-no-head) is
+ specified, a path without ending `/` is always assumed to be a file. If rclone
+ incorrectly assumes the path is a file, the solution is to specify the path with
+ ending `/`. When you know the path is a directory, ending it with `/` is always
+ better as it avoids the initial HEAD request.

-Fill in to use the specified folder as the parent for all paths given to
-the remote. This way rclone can use any folder as its starting point.

-Properties:

+ To just download a single file it is easier to use
+ [copyurl](https://rclone.org/commands/rclone_copyurl/).
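+ For example, a single file can be fetched without configuring a remote at all
+ (the URL is just an illustration):
+
+     rclone copyurl --auto-filename https://beta.rclone.org/version.txt /tmp/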
-Properties: + ## Configuration -- Config: root_prefix -- Env Var: RCLONE_HIDRIVE_ROOT_PREFIX -- Type: string -- Default: "/" -- Examples: - - "/" - - The topmost directory accessible by rclone. - - This will be equivalent with "root" if rclone uses a regular - HiDrive user account. - - "root" - - The topmost directory of the HiDrive user account - - "" - - This specifies that there is no root-prefix for your paths. - - When using this you will always need to specify paths to - this remote with a valid parent e.g. "remote:/path/to/dir" - or "remote:root/path/to/dir". + Here is an example of how to make a remote called `remote`. First + run: ---hidrive-endpoint + rclone config -Endpoint for the service. + This will guide you through an interactive setup process: -This is the URL that API-calls will be made to. +No remotes found, make a new one? n) New remote s) Set configuration +password q) Quit config n/s/q> n name> remote Type of storage to +configure. Choose a number from below, or type in your own value [snip] +XX / HTTP  "http" [snip] Storage> http URL of http host to connect to +Choose a number from below, or type in your own value 1 / Connect to +example.com  "https://example.com" url> https://beta.rclone.org Remote +config -------------------- [remote] url = https://beta.rclone.org +-------------------- y) Yes this is OK e) Edit this remote d) Delete +this remote y/e/d> y Current remotes: -Properties: +Name Type ==== ==== remote http -- Config: endpoint -- Env Var: RCLONE_HIDRIVE_ENDPOINT -- Type: string -- Default: "https://api.hidrive.strato.com/2.1" +e) Edit existing remote +f) New remote +g) Delete remote +h) Rename remote +i) Copy remote +j) Set configuration password +k) Quit config e/n/d/r/c/s/q> q ---hidrive-disable-fetching-member-count -Do not fetch number of objects in directories unless it is absolutely -necessary. + This remote is called `remote` and can now be used like this -Requests may be faster if the number of objects in subdirectories is not -fetched. + See all the top level directories -Properties: + rclone lsd remote: -- Config: disable_fetching_member_count -- Env Var: RCLONE_HIDRIVE_DISABLE_FETCHING_MEMBER_COUNT -- Type: bool -- Default: false + List the contents of a directory ---hidrive-chunk-size + rclone ls remote:directory -Chunksize for chunked uploads. + Sync the remote `directory` to `/home/local/directory`, deleting any excess files. -Any files larger than the configured cutoff (or files of unknown size) -will be uploaded in chunks of this size. + rclone sync --interactive remote:directory /home/local/directory -The upper limit for this is 2147483647 bytes (about 2.000Gi). That is -the maximum amount of bytes a single upload-operation will support. -Setting this above the upper limit or to a negative value will cause -uploads to fail. + ### Read only -Setting this to larger values may increase the upload speed at the cost -of using more memory. It can be set to smaller values smaller to save on -memory. + This remote is read only - you can't upload files to an HTTP server. -Properties: + ### Modified time -- Config: chunk_size -- Env Var: RCLONE_HIDRIVE_CHUNK_SIZE -- Type: SizeSuffix -- Default: 48Mi + Most HTTP servers store time accurate to 1 second. ---hidrive-upload-cutoff + ### Checksum -Cutoff/Threshold for chunked uploads. + No checksums are stored. -Any files larger than this will be uploaded in chunks of the configured -chunksize. + ### Usage without a config file -The upper limit for this is 2147483647 bytes (about 2.000Gi). 
That is -the maximum amount of bytes a single upload-operation will support. -Setting this above the upper limit will cause uploads to fail. + Since the http remote only has one config parameter it is easy to use + without a config file: -Properties: + rclone lsd --http-url https://beta.rclone.org :http: -- Config: upload_cutoff -- Env Var: RCLONE_HIDRIVE_UPLOAD_CUTOFF -- Type: SizeSuffix -- Default: 96Mi + or: ---hidrive-upload-concurrency + rclone lsd :http,url='https://beta.rclone.org': -Concurrency for chunked uploads. -This is the upper limit for how many transfers for the same file are -running concurrently. Setting this above to a value smaller than 1 will -cause uploads to deadlock. + ### Standard options -If you are uploading small numbers of large files over high-speed links -and these uploads do not fully utilize your bandwidth, then increasing -this may help to speed up the transfers. + Here are the Standard options specific to http (HTTP). -Properties: + #### --http-url -- Config: upload_concurrency -- Env Var: RCLONE_HIDRIVE_UPLOAD_CONCURRENCY -- Type: int -- Default: 4 + URL of HTTP host to connect to. ---hidrive-encoding + E.g. "https://example.com", or "https://user:pass@example.com" to use a username and password. -The encoding for the backend. + Properties: -See the encoding section in the overview for more info. + - Config: url + - Env Var: RCLONE_HTTP_URL + - Type: string + - Required: true -Properties: + ### Advanced options -- Config: encoding -- Env Var: RCLONE_HIDRIVE_ENCODING -- Type: MultiEncoder -- Default: Slash,Dot + Here are the Advanced options specific to http (HTTP). -Limitations + #### --http-headers -Symbolic links + Set HTTP headers for all transactions. -HiDrive is able to store symbolic links (symlinks) by design, for -example, when unpacked from a zip archive. + Use this to set additional HTTP headers for all transactions. -There exists no direct mechanism to manage native symlinks in remotes. -As such this implementation has chosen to ignore any native symlinks -present in the remote. rclone will not be able to access or show any -symlinks stored in the hidrive-remote. This means symlinks cannot be -individually removed, copied, or moved, except when removing, copying, -or moving the parent folder. + The input format is comma separated list of key,value pairs. Standard + [CSV encoding](https://godoc.org/encoding/csv) may be used. -This does not affect the .rclonelink-files that rclone uses to encode -and store symbolic links. + For example, to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'. -Sparse files + You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'. -It is possible to store sparse files in HiDrive. + Properties: -Note that copying a sparse file will expand the holes into null-byte -(0x00) regions that will then consume disk space. Likewise, when -downloading a sparse file, the resulting file will have null-byte -regions in the place of file holes. + - Config: headers + - Env Var: RCLONE_HTTP_HEADERS + - Type: CommaSepList + - Default: -HTTP + #### --http-no-slash -The HTTP remote is a read only remote for reading files of a webserver. -The webserver should provide file listings which rclone will read and -turn into a remote. This has been tested with common webservers such as -Apache/Nginx/Caddy and will likely work with file listings from most web -servers. (If it doesn't then please file an issue, or send a pull -request!) + Set this if the site doesn't end directories with /. 
-Paths are specified as remote: or remote:path. + Use this if your target website does not use / on the end of + directories. -The remote: represents the configured url, and any path following it -will be resolved relative to this url, according to the URL standard. -This means with remote url https://beta.rclone.org/branch and path fix, -the resolved URL will be https://beta.rclone.org/branch/fix, while with -path /fix the resolved URL will be https://beta.rclone.org/fix as the -absolute path is resolved from the root of the domain. + A / on the end of a path is how rclone normally tells the difference + between files and directories. If this flag is set, then rclone will + treat all files with Content-Type: text/html as directories and read + URLs from them rather than downloading them. -If the path following the remote: ends with / it will be assumed to -point to a directory. If the path does not end with /, then a HEAD -request is sent and the response used to decide if it it is treated as a -file or a directory (run with -vv to see details). When --http-no-head -is specified, a path without ending / is always assumed to be a file. If -rclone incorrectly assumes the path is a file, the solution is to -specify the path with ending /. When you know the path is a directory, -ending it with / is always better as it avoids the initial HEAD request. + Note that this may cause rclone to confuse genuine HTML files with + directories. -To just download a single file it is easier to use copyurl. + Properties: -Configuration + - Config: no_slash + - Env Var: RCLONE_HTTP_NO_SLASH + - Type: bool + - Default: false -Here is an example of how to make a remote called remote. First run: + #### --http-no-head - rclone config + Don't use HEAD requests. -This will guide you through an interactive setup process: + HEAD requests are mainly used to find file sizes in dir listing. + If your site is being very slow to load then you can try this option. + Normally rclone does a HEAD request for each potential file in a + directory listing to: - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n - name> remote - Type of storage to configure. - Choose a number from below, or type in your own value - [snip] - XX / HTTP - \ "http" - [snip] - Storage> http - URL of http host to connect to - Choose a number from below, or type in your own value - 1 / Connect to example.com - \ "https://example.com" - url> https://beta.rclone.org - Remote config - -------------------- - [remote] - url = https://beta.rclone.org - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y - Current remotes: + - find its size + - check it really exists + - check to see if it is a directory - Name Type - ==== ==== - remote http + If you set this option, rclone will not do the HEAD request. This will mean + that directory listings are much quicker, but rclone won't have the times or + sizes of any files, and some files that don't exist may be in the listing. 
- e) Edit existing remote - n) New remote - d) Delete remote - r) Rename remote - c) Copy remote - s) Set configuration password - q) Quit config - e/n/d/r/c/s/q> q + Properties: -This remote is called remote and can now be used like this + - Config: no_head + - Env Var: RCLONE_HTTP_NO_HEAD + - Type: bool + - Default: false -See all the top level directories - rclone lsd remote: -List the contents of a directory + ## Limitations - rclone ls remote:directory + `rclone about` is not supported by the HTTP backend. Backends without + this capability cannot determine free space for an rclone mount or + use policy `mfs` (most free space) as a member of an rclone union + remote. -Sync the remote directory to /home/local/directory, deleting any excess -files. + See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) - rclone sync --interactive remote:directory /home/local/directory + # Internet Archive -Read only + The Internet Archive backend utilizes Items on [archive.org](https://archive.org/) -This remote is read only - you can't upload files to an HTTP server. + Refer to [IAS3 API documentation](https://archive.org/services/docs/api/ias3.html) for the API this backend uses. -Modified time + Paths are specified as `remote:bucket` (or `remote:` for the `lsd` + command.) You may put subdirectories in too, e.g. `remote:item/path/to/dir`. -Most HTTP servers store time accurate to 1 second. + Unlike S3, listing up all items uploaded by you isn't supported. -Checksum + Once you have made a remote, you can use it like this: -No checksums are stored. + Make a new item -Usage without a config file + rclone mkdir remote:item -Since the http remote only has one config parameter it is easy to use -without a config file: + List the contents of a item - rclone lsd --http-url https://beta.rclone.org :http: + rclone ls remote:item -or: + Sync `/home/local/directory` to the remote item, deleting any excess + files in the item. - rclone lsd :http,url='https://beta.rclone.org': + rclone sync --interactive /home/local/directory remote:item -Standard options + ## Notes + Because of Internet Archive's architecture, it enqueues write operations (and extra post-processings) in a per-item queue. You can check item's queue at https://catalogd.archive.org/history/item-name-here . Because of that, all uploads/deletes will not show up immediately and takes some time to be available. + The per-item queue is enqueued to an another queue, Item Deriver Queue. [You can check the status of Item Deriver Queue here.](https://catalogd.archive.org/catalog.php?whereami=1) This queue has a limit, and it may block you from uploading, or even deleting. You should avoid uploading a lot of small files for better behavior. -Here are the Standard options specific to http (HTTP). + You can optionally wait for the server's processing to finish, by setting non-zero value to `wait_archive` key. + By making it wait, rclone can do normal file comparison. + Make sure to set a large enough value (e.g. `30m0s` for smaller files) as it can take a long time depending on server's queue. ---http-url + ## About metadata + This backend supports setting, updating and reading metadata of each file. + The metadata will appear as file metadata on Internet Archive. + However, some fields are reserved by both Internet Archive and rclone. -URL of HTTP host to connect to. 
+ The following are reserved by Internet Archive:
+ - `name`
+ - `source`
+ - `size`
+ - `md5`
+ - `crc32`
+ - `sha1`
+ - `format`
+ - `old_version`
+ - `viruscheck`
+ - `summation`

-E.g. "https://example.com", or "https://user:pass@example.com" to use a
-username and password.

+ Trying to set values to these keys is ignored with a warning.
+ Only setting `mtime` is an exception. Doing so makes it behave identically to setting ModTime.

-Properties:

+ rclone reserves all the keys starting with `rclone-`. Setting a value for these keys will give you warnings, but the values are set as requested.

-- Config: url
-- Env Var: RCLONE_HTTP_URL
-- Type: string
-- Required: true

+ If there are multiple values for a key, only the first one is returned.
+ This is a limitation of rclone, which supports only one value per key.
+ It can be triggered when you have done a server-side copy.

-Advanced options

+ Reading metadata will also provide custom (non-standard nor reserved) ones.

-Here are the Advanced options specific to http (HTTP).

+ ## Filtering auto generated files

---http-headers

+ The Internet Archive automatically creates metadata files after
+ upload. These can cause problems when doing an `rclone sync` as rclone
+ will try, and fail, to delete them. These metadata files are not
+ changeable, as they are created by the Internet Archive automatically.

-Set HTTP headers for all transactions.

+ These auto-created files can be excluded from the sync using [metadata
+ filtering](https://rclone.org/filtering/#metadata).

-Use this to set additional HTTP headers for all transactions.

+ rclone sync ... --metadata-exclude "source=metadata" --metadata-exclude "format=Metadata"

-The input format is comma separated list of key,value pairs. Standard
-CSV encoding may be used.

+ Which excludes from the sync any files which have the
+ `source=metadata` or `format=Metadata` flags which are added to
+ Internet Archive auto-created files.

-For example, to set a Cookie use 'Cookie,name=value', or
-'"Cookie","name=value"'.

+ ## Configuration

-You can set multiple headers, e.g.
-'"Cookie","name=value","Authorization","xxx"'.

+ Here is an example of making an internetarchive configuration.
+ Most of it applies to the other providers as well; any differences are described [below](#providers).

-Properties:

+ First run

-- Config: headers
-- Env Var: RCLONE_HTTP_HEADERS
-- Type: CommaSepList
-- Default:

+ rclone config

---http-no-slash

+ This will guide you through an interactive setup process.

-Set this if the site doesn't end directories with /.

-Use this if your target website does not use / on the end of
-directories.

-A / on the end of a path is how rclone normally tells the difference
-between files and directories. If this flag is set, then rclone will
-treat all files with Content-Type: text/html as directories and read
-URLs from them rather than downloading them.

-Note that this may cause rclone to confuse genuine HTML files with
-directories.

-Properties:

-- Config: no_slash
-- Env Var: RCLONE_HTTP_NO_SLASH
-- Type: bool
-- Default: false

+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+ name> remote
+ Option Storage.
+ Type of storage to configure.
+ Choose a number from below, or type in your own value.
+ XX / InternetArchive Items
+    \ (internetarchive)
+ Storage> internetarchive
+ Option access_key_id.
+ IAS3 Access Key.
+ Leave blank for anonymous access.
+ You can find one here: https://archive.org/account/s3.php
+ Enter a value. Press Enter to leave empty.
+ access_key_id> XXXX
+ Option secret_access_key.
+ IAS3 Secret Key (password).
+ Leave blank for anonymous access.
+ Enter a value. Press Enter to leave empty.
+ secret_access_key> XXXX
+ Edit advanced config?
+ y) Yes
+ n) No (default)
+ y/n> y
+ Option endpoint.
+ IAS3 Endpoint.
+ Leave blank for default value.
+ Enter a string value. Press Enter for the default (https://s3.us.archive.org).
+ endpoint>
+ Option front_endpoint.
+ Host of InternetArchive Frontend.
+ Leave blank for default value.
+ Enter a string value.
Press Enter for the default (https://archive.org). +front_endpoint> Option disable_checksum. Don't store MD5 checksum with +object metadata. Normally rclone will calculate the MD5 checksum of the +input before uploading it so it can ask the server to check the object +against checksum. This is great for data integrity checking but can +cause long delays for large files to start uploading. Enter a boolean +value (true or false). Press Enter for the default (true). +disable_checksum> true Option encoding. The encoding for the backend. +See the encoding section in the overview for more info. Enter a +encoder.MultiEncoder value. Press Enter for the default +(Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot). encoding> Edit +advanced config? y) Yes n) No (default) y/n> n -------------------- +[remote] type = internetarchive access_key_id = XXXX secret_access_key = +XXXX -------------------- y) Yes this is OK (default) e) Edit this +remote d) Delete this remote y/e/d> y -Use this if your target website does not use / on the end of -directories. -A / on the end of a path is how rclone normally tells the difference -between files and directories. If this flag is set, then rclone will -treat all files with Content-Type: text/html as directories and read -URLs from them rather than downloading them. -Note that this may cause rclone to confuse genuine HTML files with -directories. + ### Standard options -Properties: + Here are the Standard options specific to internetarchive (Internet Archive). -- Config: no_slash -- Env Var: RCLONE_HTTP_NO_SLASH -- Type: bool -- Default: false + #### --internetarchive-access-key-id ---http-no-head - -Don't use HEAD requests. - -HEAD requests are mainly used to find file sizes in dir listing. If your -site is being very slow to load then you can try this option. Normally -rclone does a HEAD request for each potential file in a directory -listing to: - -- find its size -- check it really exists -- check to see if it is a directory - -If you set this option, rclone will not do the HEAD request. This will -mean that directory listings are much quicker, but rclone won't have the -times or sizes of any files, and some files that don't exist may be in -the listing. - -Properties: - -- Config: no_head -- Env Var: RCLONE_HTTP_NO_HEAD -- Type: bool -- Default: false - -Limitations - -rclone about is not supported by the HTTP backend. Backends without this -capability cannot determine free space for an rclone mount or use policy -mfs (most free space) as a member of an rclone union remote. - -See List of backends that do not support rclone about and rclone about - -Internet Archive - -The Internet Archive backend utilizes Items on archive.org - -Refer to IAS3 API documentation for the API this backend uses. - -Paths are specified as remote:bucket (or remote: for the lsd command.) -You may put subdirectories in too, e.g. remote:item/path/to/dir. - -Unlike S3, listing up all items uploaded by you isn't supported. - -Once you have made a remote, you can use it like this: - -Make a new item - - rclone mkdir remote:item - -List the contents of a item - - rclone ls remote:item - -Sync /home/local/directory to the remote item, deleting any excess files -in the item. - - rclone sync --interactive /home/local/directory remote:item - -Notes - -Because of Internet Archive's architecture, it enqueues write operations -(and extra post-processings) in a per-item queue. You can check item's -queue at https://catalogd.archive.org/history/item-name-here . 
Because -of that, all uploads/deletes will not show up immediately and takes some -time to be available. The per-item queue is enqueued to an another -queue, Item Deriver Queue. You can check the status of Item Deriver -Queue here. This queue has a limit, and it may block you from uploading, -or even deleting. You should avoid uploading a lot of small files for -better behavior. - -You can optionally wait for the server's processing to finish, by -setting non-zero value to wait_archive key. By making it wait, rclone -can do normal file comparison. Make sure to set a large enough value -(e.g. 30m0s for smaller files) as it can take a long time depending on -server's queue. - -About metadata - -This backend supports setting, updating and reading metadata of each -file. The metadata will appear as file metadata on Internet Archive. -However, some fields are reserved by both Internet Archive and rclone. - -The following are reserved by Internet Archive: - name - source - size - -md5 - crc32 - sha1 - format - old_version - viruscheck - summation - -Trying to set values to these keys is ignored with a warning. Only -setting mtime is an exception. Doing so make it the identical behavior -as setting ModTime. - -rclone reserves all the keys starting with rclone-. Setting value for -these keys will give you warnings, but values are set according to -request. - -If there are multiple values for a key, only the first one is returned. -This is a limitation of rclone, that supports one value per one key. It -can be triggered when you did a server-side copy. - -Reading metadata will also provide custom (non-standard nor reserved) -ones. - -Filtering auto generated files - -The Internet Archive automatically creates metadata files after upload. -These can cause problems when doing an rclone sync as rclone will try, -and fail, to delete them. These metadata files are not changeable, as -they are created by the Internet Archive automatically. - -These auto-created files can be excluded from the sync using metadata -filtering. - - rclone sync ... --metadata-exclude "source=metadata" --metadata-exclude "format=Metadata" - -Which excludes from the sync any files which have the source=metadata or -format=Metadata flags which are added to Internet Archive auto-created -files. - -Configuration - -Here is an example of making an internetarchive configuration. Most -applies to the other providers as well, any differences are described -below. - -First run - - rclone config - -This will guide you through an interactive setup process. - - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n - name> remote - Option Storage. - Type of storage to configure. - Choose a number from below, or type in your own value. - XX / InternetArchive Items - \ (internetarchive) - Storage> internetarchive - Option access_key_id. IAS3 Access Key. + Leave blank for anonymous access. You can find one here: https://archive.org/account/s3.php - Enter a value. Press Enter to leave empty. - access_key_id> XXXX - Option secret_access_key. + + Properties: + + - Config: access_key_id + - Env Var: RCLONE_INTERNETARCHIVE_ACCESS_KEY_ID + - Type: string + - Required: false + + #### --internetarchive-secret-access-key + IAS3 Secret Key (password). + Leave blank for anonymous access. - Enter a value. Press Enter to leave empty. - secret_access_key> XXXX - Edit advanced config? - y) Yes - n) No (default) - y/n> y - Option endpoint. 
+
+ Properties:
+
+ - Config: secret_access_key
+ - Env Var: RCLONE_INTERNETARCHIVE_SECRET_ACCESS_KEY
+ - Type: string
+ - Required: false
+
+ ### Advanced options
+
+ Here are the Advanced options specific to internetarchive (Internet Archive).
+
+ #### --internetarchive-endpoint
+
IAS3 Endpoint.
+
Leave blank for default value.
- Enter a string value. Press Enter for the default (https://s3.us.archive.org).
- endpoint>
- Option front_endpoint.
+
+ Properties:
+
+ - Config: endpoint
+ - Env Var: RCLONE_INTERNETARCHIVE_ENDPOINT
+ - Type: string
+ - Default: "https://s3.us.archive.org"
+
+ #### --internetarchive-front-endpoint
+
Host of InternetArchive Frontend.
+
Leave blank for default value.
- Enter a string value. Press Enter for the default (https://archive.org).
- front_endpoint>
- Option disable_checksum.
- Don't store MD5 checksum with object metadata.
+
+ Properties:
+
+ - Config: front_endpoint
+ - Env Var: RCLONE_INTERNETARCHIVE_FRONT_ENDPOINT
+ - Type: string
+ - Default: "https://archive.org"
+
+ #### --internetarchive-disable-checksum
+
+ Don't ask the server to test against MD5 checksum calculated by rclone.
Normally rclone will calculate the MD5 checksum of the input before
uploading it so it can ask the server to check the object
against checksum. This is great for data integrity checking but can
cause long delays for large files to start uploading.
- Enter a boolean value (true or false). Press Enter for the default (true).
- disable_checksum> true
- Option encoding.
+
+ Properties:
+
+ - Config: disable_checksum
+ - Env Var: RCLONE_INTERNETARCHIVE_DISABLE_CHECKSUM
+ - Type: bool
+ - Default: true
+
+ #### --internetarchive-wait-archive
+
+ Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish.
+ Only enable if you need to be guaranteed to be reflected after write operations.
+ 0 to disable waiting. No errors to be thrown in case of timeout.
+
+ Properties:
+
+ - Config: wait_archive
+ - Env Var: RCLONE_INTERNETARCHIVE_WAIT_ARCHIVE
+ - Type: Duration
+ - Default: 0s
+
+ #### --internetarchive-encoding
+
The encoding for the backend.
+
See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
- Enter a encoder.MultiEncoder value. Press Enter for the default (Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot).
- encoding>
- Edit advanced config?
- y) Yes
- n) No (default)
- y/n> n
- --------------------
- [remote]
- type = internetarchive
- access_key_id = XXXX
- secret_access_key = XXXX
- --------------------
- y) Yes this is OK (default)
- e) Edit this remote
- d) Delete this remote
- y/e/d> y

+ Properties:

+ - Config: encoding
+ - Env Var: RCLONE_INTERNETARCHIVE_ENCODING
+ - Type: MultiEncoder
+ - Default: Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot
+
+ ### Metadata

+ Metadata fields provided by Internet Archive.
+ If there are multiple values for a key, only the first one is returned.
+ This is a limitation of Rclone, which supports only one value per key.

+ The owner is able to add custom keys. The metadata feature grabs all the keys, including custom ones.

+ Here are the possible system metadata items for the internetarchive backend. 
-Properties:

+ | Name | Help | Type | Example | Read Only |
+ |------|------|------|---------|-----------|
+ | crc32 | CRC32 calculated by Internet Archive | string | 01234567 | **Y** |
+ | format | Name of format identified by Internet Archive | string | Comma-Separated Values | **Y** |
+ | md5 | MD5 hash calculated by Internet Archive | string | 01234567012345670123456701234567 | **Y** |
+ | mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | **Y** |
+ | name | Full file path, without the bucket part | filename | backend/internetarchive/internetarchive.go | **Y** |
+ | old_version | Whether the file was replaced and moved by keep-old-version flag | boolean | true | **Y** |
+ | rclone-ia-mtime | Time of last modification, managed by Internet Archive | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
+ | rclone-mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
+ | rclone-update-track | Random value used by Rclone for tracking changes inside Internet Archive | string | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa | N |
+ | sha1 | SHA1 hash calculated by Internet Archive | string | 0123456701234567012345670123456701234567 | **Y** |
+ | size | File size in bytes | decimal number | 123456 | **Y** |
+ | source | The source of the file | string | original | **Y** |
+ | summation | Check https://forum.rclone.org/t/31922 for how it is used | string | md5 | **Y** |
+ | viruscheck | The last time viruscheck process was run for the file (?) | unixtime | 1654191352 | **Y** |

-- Config: access_key_id
-- Env Var: RCLONE_INTERNETARCHIVE_ACCESS_KEY_ID
-- Type: string
-- Required: false
+ See the [metadata](https://rclone.org/docs/#metadata) docs for more info.

---internetarchive-secret-access-key
-IAS3 Secret Key (password).
-Leave blank for anonymous access.
+ # Jottacloud

-Properties:
+ Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters
+ in Norway. In addition to the official service at [jottacloud.com](https://www.jottacloud.com/),
+ it also provides white-label solutions to different companies, such as:
+ * Telia
+   * Telia Cloud (cloud.telia.se)
+   * Telia Sky (sky.telia.no)
+ * Tele2
+   * Tele2 Cloud (mittcloud.tele2.se)
+ * Onlime
+   * Onlime Cloud Storage (onlime.dk)
+ * Elkjøp (with subsidiaries):
+   * Elkjøp Cloud (cloud.elkjop.no)
+   * Elgiganten Sweden (cloud.elgiganten.se)
+   * Elgiganten Denmark (cloud.elgiganten.dk)
+   * Giganti Cloud (cloud.gigantti.fi)
+   * ELKO Cloud (cloud.elko.is)

-- Config: secret_access_key
-- Env Var: RCLONE_INTERNETARCHIVE_SECRET_ACCESS_KEY
-- Type: string
-- Required: false
+ Most of the white-label versions are supported by this backend, although they may require a different
+ authentication setup - described below.

-Advanced options
+ Paths are specified as `remote:path`

+ Paths may be as deep as required, e.g. `remote:directory/subdirectory`.

+ ## Authentication types

+ Some of the whitelabel versions use a different authentication method than the official service,
+ and you have to choose the correct one when setting up the remote.

+ ### Standard authentication

+ The standard authentication method used by the official service (jottacloud.com), as well as
+ some of the whitelabel services, requires you to generate a single-use personal login token
+ from the account security settings in the service's web interface. Log in to your account,
+ go to "Settings" and then "Security", or use the direct link presented to you by rclone when
+ configuring the remote: <https://www.jottacloud.com/web/secure>. Scroll down to the section
+ "Personal login token", and click the "Generate" button. 
Note that if you are using a
+ whitelabel service you probably can't use the direct link, you need to find the same page in
+ their dedicated web interface, and also it may be in a different location than described above.
+
+ To access your account from multiple instances of rclone, you need to configure each of them
+ with a separate personal login token. E.g. you create a Jottacloud remote with rclone in one
+ location, and copy the configuration file to a second location where you also want to run
+ rclone and access the same remote. Then you need to replace the token for one of them, using
+ the [config reconnect](https://rclone.org/commands/rclone_config_reconnect/) command, which
+ requires you to generate a new personal login token and supply it as input. If you do not
+ do this, the token may easily end up being invalidated, resulting in both instances failing
+ with an error message something along the lines of:
+
+     oauth2: cannot fetch token: 400 Bad Request
+     Response: {"error":"invalid_grant","error_description":"Stale token"}
+
+ When this happens, you need to replace the token as described above to be able to use your
+ remote again.
+
+ All personal login tokens you have taken into use will be listed in the web interface under
+ "My logged in devices", and from the right side of that list you can click the "X" button to
+ revoke individual tokens.
+
+ ### Legacy authentication
+
+ If you are using one of the whitelabel versions (e.g. from Elkjøp) you may not have the option
+ to generate a CLI token. In this case you'll have to use the legacy authentication. To do this select
+ yes when the setup asks for legacy authentication and enter your username and password.
+ The rest of the setup is identical to the default setup.
+
+ ### Telia Cloud authentication
+
+ Similar to other whitelabel versions, Telia Cloud doesn't offer the option of creating a CLI token, and
+ additionally uses a separate authentication flow where the username is generated internally. To set up
+ rclone to use Telia Cloud, choose Telia Cloud authentication in the setup. The rest of the setup is
+ identical to the default setup.
+
+ ### Tele2 Cloud authentication
+
+ As the Tele2-Com Hem merger was completed, this authentication can be used for former Com Hem Cloud and
+ Tele2 Cloud customers, as no support for creating a CLI token exists, and it additionally uses a separate
+ authentication flow where the username is generated internally. To set up rclone to use Tele2 Cloud,
+ choose Tele2 Cloud authentication in the setup. The rest of the setup is identical to the default setup.
+
+ ### Onlime Cloud Storage authentication
+
+ Onlime has sold access to Jottacloud proper, while providing localized support to Danish customers, but
+ has recently set up its own hosting, transferring its customers from Jottacloud servers to its
+ own ones.
+
+ This, of course, necessitates using their servers for authentication, but otherwise functionality and
+ architecture seem equivalent to Jottacloud.
+
+ To set up rclone to use Onlime Cloud Storage, choose Onlime Cloud authentication in the setup. The rest
+ of the setup is identical to the default setup.
+
+ ## Configuration

+ Here is an example of how to make a remote called `remote` with the default setup. First run:
+
+     rclone config
+
+ This will guide you through an interactive setup process:
+
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / Jottacloud
+   \ (jottacloud)
+[snip]
+Storage> jottacloud
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+Option config_type.
+Select authentication type.
+Choose a number from below, or type in an existing string value.
+Press Enter for the default (standard).
+ / Standard authentication.
+ 1 | Use this if you're a normal Jottacloud user.
+   \ (standard)
+ / Legacy authentication.
+ 2 | This is only required for certain whitelabel versions of Jottacloud and not recommended for normal users.
+   \ (legacy)
+ / Telia Cloud authentication.
+ 3 | Use this if you are using Telia Cloud.
+   \ (telia)
+ / Tele2 Cloud authentication.
+ 4 | Use this if you are using Tele2 Cloud.
+   \ (tele2)
+ / Onlime Cloud authentication.
+ 5 | Use this if you are using Onlime Cloud.
+   \ (onlime)
+config_type> 1
+Personal login token.
+Generate here: https://www.jottacloud.com/web/secure
+Login Token>
+Use a non-standard device/mountpoint?
+Choosing no, the default, will let you access the storage used for the archive
+section of the official Jottacloud client. If you instead want to access the
+sync or the backup section, for example, you must choose yes.
+y) Yes
+n) No (default)
+y/n> y
+Option config_device.
+The device to use. In standard setup the built-in Jotta device is used,
+which contains predefined mountpoints for archive, sync etc. All other devices
+are treated as backup devices by the official Jottacloud client. You may create
+a new by entering a unique name.
+Choose a number from below, or type in your own string value.
+Press Enter for the default (DESKTOP-3H31129).
+1 > DESKTOP-3H31129
+2 > Jotta
+config_device> 2
+Option config_mountpoint.
+The mountpoint to use for the built-in device Jotta.
+The standard setup is to use the Archive mountpoint. Most other mountpoints
+have very limited support in rclone and should generally be avoided.
+Choose a number from below, or type in an existing string value.
+Press Enter for the default (Archive).
+1 > Archive
+2 > Shared
+3 > Sync
+config_mountpoint> 1
+--------------------
+[remote]
+type = jottacloud
+configVersion = 1
+client_id = jottacli
+client_secret =
+tokenURL = https://id.jottacloud.com/auth/realms/jottacloud/protocol/openid-connect/token
+token = {........}
+username = 2940e57271a93d987d6f8a21
+device = Jotta
+mountpoint = Archive
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y


+ Once configured you can then use `rclone` like this,
+
+ List directories in top level of your Jottacloud
+
+     rclone lsd remote:
+
+ List all the files in your Jottacloud

-Advanced options

+     rclone ls remote:
+
+ To copy a local directory to a Jottacloud directory called backup

-Here are the Advanced options specific to internetarchive (Internet
-Archive).

+     rclone copy /home/source remote:backup

---internetarchive-endpoint

+ ### Devices and Mountpoints

-IAS3 Endpoint.

+ The official Jottacloud client registers a device for each computer you install
+ it on, and shows them in the backup section of the user interface. For each
+ folder you select for backup it will create a mountpoint within this device.
+ A built-in device called Jotta is special, and contains mountpoints Archive,
+ Sync and some others, used for corresponding features in official clients.

-Leave blank for default value. 
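+ Assuming a remote configured as in the transcript above (whose config stores `device` and
+ `mountpoint` keys), rclone's standard connection string syntax can point a single command at a
+ different device or mountpoint without re-running the config wizard. A sketch, for example to
+ list the built-in Sync mountpoint:
+
+     rclone lsd "remote,device=Jotta,mountpoint=Sync:"
+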
+ With rclone you'll want to use the standard Jotta/Archive device/mountpoint in
+ most cases. However, you may for example want to access files from the sync or
+ backup functionality provided by the official clients, and rclone therefore
+ provides the option to select other devices and mountpoints during config.

+ You are allowed to create new devices and mountpoints. All devices except the
+ built-in Jotta device are treated as backup devices by official Jottacloud
+ clients, and the mountpoints on them are individual backup sets.

+ With the built-in Jotta device, only existing, built-in mountpoints can be
+ selected. In addition to the mentioned Archive and Sync, it may contain
+ several other mountpoints such as: Latest, Links, Shared and Trash. All of
+ these are special mountpoints with a different internal representation than
+ the "regular" mountpoints. Rclone will only support them to a very limited
+ degree. Generally you should avoid these, unless you know what you are doing.

+ ### --fast-list

+ This remote supports `--fast-list` which allows you to use fewer
+ transactions in exchange for more memory. See the [rclone
+ docs](https://rclone.org/docs/#fast-list) for more details.

+ Note that the implementation in Jottacloud always uses only a single
+ API request to get the entire list, so for large folders this could
+ lead to a long wait time before the first results are shown.

+ Note also that with rclone version 1.58 and newer, information about
+ [MIME types](https://rclone.org/overview/#mime-type) is not available when using `--fast-list`.

+ ### Modified time and hashes

+ Jottacloud allows modification times to be set on objects accurate to 1
+ second. These will be used to detect whether objects need syncing or
+ not.

+ Jottacloud supports MD5 type hashes, so you can use the `--checksum`
+ flag.

+ Note that Jottacloud requires the MD5 hash before upload so if the
+ source does not have an MD5 checksum then the file will be cached
+ temporarily on disk (in location given by
+ [--temp-dir](https://rclone.org/docs/#temp-dir-dir)) before it is uploaded.
+ Small files will be cached in memory - see the
+ [--jottacloud-md5-memory-limit](#jottacloud-md5-memory-limit) flag.
+ When uploading from local disk the source checksum is always available,
+ so this does not apply. Starting with rclone version 1.52 the same is
+ true for encrypted remotes (in older versions the crypt backend would not
+ calculate hashes for uploads from local disk, so the Jottacloud
+ backend had to do it as described above). 
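+ For example, since MD5 hashes are available, a sync can be told to compare checksums instead
+ of modification times and sizes, using the standard `--checksum` flag (the paths here are
+ illustrative):
+
+     rclone sync --checksum /home/local/directory remote:backup
+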
-- Config: disable_checksum
-- Env Var: RCLONE_INTERNETARCHIVE_DISABLE_CHECKSUM
-- Type: bool
-- Default: true
+ ### Restricted filename characters

---internetarchive-wait-archive
+ In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
+ the following characters are also replaced:

-Timeout for waiting the server's processing tasks (specifically archive
-and book_op) to finish. Only enable if you need to be guaranteed to be
-reflected after write operations. 0 to disable waiting. No errors to be
-thrown in case of timeout.
+ | Character | Value | Replacement |
+ | --------- |:-----:|:-----------:|
+ | " | 0x22 | " |
+ | * | 0x2A | * |
+ | : | 0x3A | : |
+ | < | 0x3C | < |
+ | > | 0x3E | > |
+ | ? | 0x3F | ? |
+ | \| | 0x7C | | |

-Properties:
+ Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+ as they can't be used in XML strings.

-- Config: wait_archive
-- Env Var: RCLONE_INTERNETARCHIVE_WAIT_ARCHIVE
-- Type: Duration
-- Default: 0s
+ ### Deleting files

---internetarchive-encoding
+ By default, rclone will send all files to the trash when deleting files. They will be permanently
+ deleted automatically after 30 days. You may bypass the trash and permanently delete files immediately
+ by using the [--jottacloud-hard-delete](#jottacloud-hard-delete) flag, or set the equivalent environment variable.
+ Emptying the trash is supported by the [cleanup](https://rclone.org/commands/rclone_cleanup/) command.

-The encoding for the backend.
+ ### Versions

-See the encoding section in the overview for more info.
+ Jottacloud supports file versioning. When rclone uploads a new version of a file it creates a new version of it.
+ Currently rclone only supports retrieving the current version but older versions can be accessed via the Jottacloud Website.

-Properties:
+ Versioning can be disabled with the `--jottacloud-no-versions` option. This is achieved by deleting the remote file prior to uploading
+ a new version. If the upload fails, no version of the file will be available in the remote.

-- Config: encoding
-- Env Var: RCLONE_INTERNETARCHIVE_ENCODING
-- Type: MultiEncoder
-- Default: Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot
+ ### Quota information

-Metadata
+ To view your current quota you can use the `rclone about remote:`
+ command which will display your usage limit (unless it is unlimited)
+ and the current usage.

-Metadata fields provided by Internet Archive. If there are multiple
-values for a key, only the first one is returned. This is a limitation
-of Rclone, that supports one value per one key.
-Owner is able to add custom keys. Metadata feature grabs all the keys
-including them.

-Here are the possible system metadata items for the internetarchive
-backend.
+ ### Standard options

 --------------------------------------------------------------------------------------------------------------------------------------
 Name                  Help                               Type        Example                                      Read Only
 --------------------- ---------------------------------- ----------- -------------------------------------------- --------------------
 crc32                 CRC32 calculated by Internet       string      01234567                                     Y
                       Archive
+ Here are the Standard options specific to jottacloud (Jottacloud).

 format                Name of format identified by       string      Comma-Separated Values                       Y
                       Internet Archive
+ #### --jottacloud-client-id

 md5                   MD5 hash calculated by Internet    string      01234567012345670123456701234567             Y
                       Archive
+ OAuth Client Id.

+ Leave blank normally. 
 mtime                 Time of last modification, managed RFC 3339    2006-01-02T15:04:05.999999999Z               Y
                       by Rclone
+ Properties:

 name                  Full file path, without the bucket filename    backend/internetarchive/internetarchive.go   Y
                       part
+ - Config: client_id
+ - Env Var: RCLONE_JOTTACLOUD_CLIENT_ID
+ - Type: string
+ - Required: false

 old_version           Whether the file was replaced and  boolean     true                                         Y
                       moved by keep-old-version flag
+ #### --jottacloud-client-secret

 rclone-ia-mtime       Time of last modification, managed RFC 3339    2006-01-02T15:04:05.999999999Z               N
                       by Internet Archive
+ OAuth Client Secret.

 rclone-mtime          Time of last modification, managed RFC 3339    2006-01-02T15:04:05.999999999Z               N
                       by Rclone
+ Leave blank normally.

 rclone-update-track   Random value used by Rclone for    string      aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa             N
                       tracking changes inside Internet
                       Archive
+ Properties:

 sha1                  SHA1 hash calculated by Internet   string      0123456701234567012345670123456701234567     Y
                       Archive
+ - Config: client_secret
+ - Env Var: RCLONE_JOTTACLOUD_CLIENT_SECRET
+ - Type: string
+ - Required: false

 size                  File size in bytes                 decimal     123456                                       Y
                                                          number
+ ### Advanced options

 source                The source of the file             string      original                                     Y

+ Here are the Advanced options specific to jottacloud (Jottacloud).

 summation             Check                              string      md5                                          Y
                       https://forum.rclone.org/t/31922
                       for how it is used
+ #### --jottacloud-token

 viruscheck            The last time viruscheck process   unixtime    1654191352                                   Y
                       was run for the file (?)
 --------------------------------------------------------------------------------------------------------------------------------------
+ OAuth Access Token as a JSON blob.

-See the metadata docs for more info.
+ Properties:

-Jottacloud
+ - Config: token
+ - Env Var: RCLONE_JOTTACLOUD_TOKEN
+ - Type: string
+ - Required: false

-Jottacloud is a cloud storage service provider from a Norwegian company,
-using its own datacenters in Norway. In addition to the official service
-at jottacloud.com, it also provides white-label solutions to different
-companies, such as: * Telia * Telia Cloud (cloud.telia.se) * Telia Sky
-(sky.telia.no) * Tele2 * Tele2 Cloud (mittcloud.tele2.se) * Elkjøp (with
-subsidiaries): * Elkjøp Cloud (cloud.elkjop.no) * Elgiganten Sweden
-(cloud.elgiganten.se) * Elgiganten Denmark (cloud.elgiganten.dk) *
-Giganti Cloud (cloud.gigantti.fi) * ELKO Cloud (cloud.elko.is)
+ #### --jottacloud-auth-url

-Most of the white-label versions are supported by this backend, although
-may require different authentication setup - described below.
+ Auth server URL.

-Paths are specified as remote:path
+ Leave blank to use the provider defaults.

-Paths may be as deep as required, e.g. remote:directory/subdirectory.
+ Properties:

-Authentication types
+ - Config: auth_url
+ - Env Var: RCLONE_JOTTACLOUD_AUTH_URL
+ - Type: string
+ - Required: false

-Some of the whitelabel versions uses a different authentication method
-than the official service, and you have to choose the correct one when
-setting up the remote.
+ #### --jottacloud-token-url

-Standard authentication
+ Token server url.

-The standard authentication method used by the official service
-(jottacloud.com), as well as some of the whitelabel services, requires
-you to generate a single-use personal login token from the account
-security settings in the service's web interface. Log in to your
-account, go to "Settings" and then "Security", or use the direct link
-presented to you by rclone when configuring the remote:
-https://www.jottacloud.com/web/secure. Scroll down to the section
-"Personal login token", and click the "Generate" button. 
Note that if -you are using a whitelabel service you probably can't use the direct -link, you need to find the same page in their dedicated web interface, -and also it may be in a different location than described above. - -To access your account from multiple instances of rclone, you need to -configure each of them with a separate personal login token. E.g. you -create a Jottacloud remote with rclone in one location, and copy the -configuration file to a second location where you also want to run -rclone and access the same remote. Then you need to replace the token -for one of them, using the config reconnect command, which requires you -to generate a new personal login token and supply as input. If you do -not do this, the token may easily end up being invalidated, resulting in -both instances failing with an error message something along the lines -of: - - oauth2: cannot fetch token: 400 Bad Request - Response: {"error":"invalid_grant","error_description":"Stale token"} - -When this happens, you need to replace the token as described above to -be able to use your remote again. - -All personal login tokens you have taken into use will be listed in the -web interface under "My logged in devices", and from the right side of -that list you can click the "X" button to revoke individual tokens. - -Legacy authentication - -If you are using one of the whitelabel versions (e.g. from Elkjøp) you -may not have the option to generate a CLI token. In this case you'll -have to use the legacy authentication. To do this select yes when the -setup asks for legacy authentication and enter your username and -password. The rest of the setup is identical to the default setup. - -Telia Cloud authentication - -Similar to other whitelabel versions Telia Cloud doesn't offer the -option of creating a CLI token, and additionally uses a separate -authentication flow where the username is generated internally. To setup -rclone to use Telia Cloud, choose Telia Cloud authentication in the -setup. The rest of the setup is identical to the default setup. - -Tele2 Cloud authentication - -As Tele2-Com Hem merger was completed this authentication can be used -for former Com Hem Cloud and Tele2 Cloud customers as no support for -creating a CLI token exists, and additionally uses a separate -authentication flow where the username is generated internally. To setup -rclone to use Tele2 Cloud, choose Tele2 Cloud authentication in the -setup. The rest of the setup is identical to the default setup. - -Configuration - -Here is an example of how to make a remote called remote with the -default setup. First run: - - rclone config - -This will guide you through an interactive setup process: - - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n - name> remote - Option Storage. - Type of storage to configure. - Choose a number from below, or type in your own value. - [snip] - XX / Jottacloud - \ (jottacloud) - [snip] - Storage> jottacloud - Edit advanced config? - y) Yes - n) No (default) - y/n> n - Option config_type. - Select authentication type. - Choose a number from below, or type in an existing string value. - Press Enter for the default (standard). - / Standard authentication. - 1 | Use this if you're a normal Jottacloud user. - \ (standard) - / Legacy authentication. - 2 | This is only required for certain whitelabel versions of Jottacloud and not recommended for normal users. - \ (legacy) - / Telia Cloud authentication. - 3 | Use this if you are using Telia Cloud. 
- \ (telia) - / Tele2 Cloud authentication. - 4 | Use this if you are using Tele2 Cloud. - \ (tele2) - config_type> 1 - Personal login token. - Generate here: https://www.jottacloud.com/web/secure - Login Token> - Use a non-standard device/mountpoint? - Choosing no, the default, will let you access the storage used for the archive - section of the official Jottacloud client. If you instead want to access the - sync or the backup section, for example, you must choose yes. - y) Yes - n) No (default) - y/n> y - Option config_device. - The device to use. In standard setup the built-in Jotta device is used, - which contains predefined mountpoints for archive, sync etc. All other devices - are treated as backup devices by the official Jottacloud client. You may create - a new by entering a unique name. - Choose a number from below, or type in your own string value. - Press Enter for the default (DESKTOP-3H31129). - 1 > DESKTOP-3H31129 - 2 > Jotta - config_device> 2 - Option config_mountpoint. - The mountpoint to use for the built-in device Jotta. - The standard setup is to use the Archive mountpoint. Most other mountpoints - have very limited support in rclone and should generally be avoided. - Choose a number from below, or type in an existing string value. - Press Enter for the default (Archive). - 1 > Archive - 2 > Shared - 3 > Sync - config_mountpoint> 1 - -------------------- - [remote] - type = jottacloud - configVersion = 1 - client_id = jottacli - client_secret = - tokenURL = https://id.jottacloud.com/auth/realms/jottacloud/protocol/openid-connect/token - token = {........} - username = 2940e57271a93d987d6f8a21 - device = Jotta - mountpoint = Archive - -------------------- - y) Yes this is OK (default) - e) Edit this remote - d) Delete this remote - y/e/d> y - -Once configured you can then use rclone like this, - -List directories in top level of your Jottacloud - - rclone lsd remote: - -List all the files in your Jottacloud - - rclone ls remote: - -To copy a local directory to an Jottacloud directory called backup - - rclone copy /home/source remote:backup - -Devices and Mountpoints - -The official Jottacloud client registers a device for each computer you -install it on, and shows them in the backup section of the user -interface. For each folder you select for backup it will create a -mountpoint within this device. A built-in device called Jotta is -special, and contains mountpoints Archive, Sync and some others, used -for corresponding features in official clients. - -With rclone you'll want to use the standard Jotta/Archive -device/mountpoint in most cases. However, you may for example want to -access files from the sync or backup functionality provided by the -official clients, and rclone therefore provides the option to select -other devices and mountpoints during config. - -You are allowed to create new devices and mountpoints. All devices -except the built-in Jotta device are treated as backup devices by -official Jottacloud clients, and the mountpoints on them are individual -backup sets. - -With the built-in Jotta device, only existing, built-in, mountpoints can -be selected. In addition to the mentioned Archive and Sync, it may -contain several other mountpoints such as: Latest, Links, Shared and -Trash. All of these are special mountpoints with a different internal -representation than the "regular" mountpoints. Rclone will only to a -very limited degree support them. Generally you should avoid these, -unless you know what you are doing. 
- ---fast-list - -This remote supports --fast-list which allows you to use fewer -transactions in exchange for more memory. See the rclone docs for more -details. - -Note that the implementation in Jottacloud always uses only a single API -request to get the entire list, so for large folders this could lead to -long wait time before the first results are shown. - -Note also that with rclone version 1.58 and newer information about MIME -types are not available when using --fast-list. - -Modified time and hashes - -Jottacloud allows modification times to be set on objects accurate to 1 -second. These will be used to detect whether objects need syncing or -not. - -Jottacloud supports MD5 type hashes, so you can use the --checksum flag. - -Note that Jottacloud requires the MD5 hash before upload so if the -source does not have an MD5 checksum then the file will be cached -temporarily on disk (in location given by --temp-dir) before it is -uploaded. Small files will be cached in memory - see the ---jottacloud-md5-memory-limit flag. When uploading from local disk the -source checksum is always available, so this does not apply. Starting -with rclone version 1.52 the same is true for encrypted remotes (in -older versions the crypt backend would not calculate hashes for uploads -from local disk, so the Jottacloud backend had to do it as described -above). + - Config: auth_url + - Env Var: RCLONE_JOTTACLOUD_AUTH_URL + - Type: string + - Required: false -Restricted filename characters + #### --jottacloud-token-url -In addition to the default restricted characters set the following -characters are also replaced: + Token server url. - Character Value Replacement - ----------- ------- ------------- - " 0x22 " - * 0x2A * - : 0x3A : - < 0x3C < - > 0x3E > - ? 0x3F ? - | 0x7C | + Leave blank to use the provider defaults. -Invalid UTF-8 bytes will also be replaced, as they can't be used in XML -strings. + Properties: -Deleting files + - Config: token_url + - Env Var: RCLONE_JOTTACLOUD_TOKEN_URL + - Type: string + - Required: false -By default, rclone will send all files to the trash when deleting files. -They will be permanently deleted automatically after 30 days. You may -bypass the trash and permanently delete files immediately by using the ---jottacloud-hard-delete flag, or set the equivalent environment -variable. Emptying the trash is supported by the cleanup command. + #### --jottacloud-md5-memory-limit -Versions + Files bigger than this will be cached on disk to calculate the MD5 if required. -Jottacloud supports file versioning. When rclone uploads a new version -of a file it creates a new version of it. Currently rclone only supports -retrieving the current version but older versions can be accessed via -the Jottacloud Website. + Properties: -Versioning can be disabled by --jottacloud-no-versions option. This is -achieved by deleting the remote file prior to uploading a new version. -If the upload the fails no version of the file will be available in the -remote. + - Config: md5_memory_limit + - Env Var: RCLONE_JOTTACLOUD_MD5_MEMORY_LIMIT + - Type: SizeSuffix + - Default: 10Mi -Quota information + #### --jottacloud-trashed-only -To view your current quota you can use the rclone about remote: command -which will display your usage limit (unless it is unlimited) and the -current usage. + Only show files that are in the trash. -Advanced options + This will show trashed files in their original directory structure. -Here are the Advanced options specific to jottacloud (Jottacloud). 
+ Properties: ---jottacloud-md5-memory-limit + - Config: trashed_only + - Env Var: RCLONE_JOTTACLOUD_TRASHED_ONLY + - Type: bool + - Default: false -Files bigger than this will be cached on disk to calculate the MD5 if -required. + #### --jottacloud-hard-delete -Properties: + Delete files permanently rather than putting them into the trash. -- Config: md5_memory_limit -- Env Var: RCLONE_JOTTACLOUD_MD5_MEMORY_LIMIT -- Type: SizeSuffix -- Default: 10Mi + Properties: ---jottacloud-trashed-only + - Config: hard_delete + - Env Var: RCLONE_JOTTACLOUD_HARD_DELETE + - Type: bool + - Default: false -Only show files that are in the trash. + #### --jottacloud-upload-resume-limit -This will show trashed files in their original directory structure. + Files bigger than this can be resumed if the upload fail's. -Properties: + Properties: -- Config: trashed_only -- Env Var: RCLONE_JOTTACLOUD_TRASHED_ONLY -- Type: bool -- Default: false + - Config: upload_resume_limit + - Env Var: RCLONE_JOTTACLOUD_UPLOAD_RESUME_LIMIT + - Type: SizeSuffix + - Default: 10Mi ---jottacloud-hard-delete + #### --jottacloud-no-versions -Delete files permanently rather than putting them into the trash. + Avoid server side versioning by deleting files and recreating files instead of overwriting them. -Properties: + Properties: -- Config: hard_delete -- Env Var: RCLONE_JOTTACLOUD_HARD_DELETE -- Type: bool -- Default: false + - Config: no_versions + - Env Var: RCLONE_JOTTACLOUD_NO_VERSIONS + - Type: bool + - Default: false ---jottacloud-upload-resume-limit + #### --jottacloud-encoding -Files bigger than this can be resumed if the upload fail's. + The encoding for the backend. -Properties: + See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. -- Config: upload_resume_limit -- Env Var: RCLONE_JOTTACLOUD_UPLOAD_RESUME_LIMIT -- Type: SizeSuffix -- Default: 10Mi + Properties: ---jottacloud-no-versions + - Config: encoding + - Env Var: RCLONE_JOTTACLOUD_ENCODING + - Type: MultiEncoder + - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot -Avoid server side versioning by deleting files and recreating files -instead of overwriting them. -Properties: -- Config: no_versions -- Env Var: RCLONE_JOTTACLOUD_NO_VERSIONS -- Type: bool -- Default: false + ## Limitations ---jottacloud-encoding + Note that Jottacloud is case insensitive so you can't have a file called + "Hello.doc" and one called "hello.doc". -The encoding for the backend. + There are quite a few characters that can't be in Jottacloud file names. Rclone will map these names to and from an identical + looking unicode equivalent. For example if a file has a ? in it will be mapped to ? instead. -See the encoding section in the overview for more info. + Jottacloud only supports filenames up to 255 characters in length. -Properties: + ## Troubleshooting -- Config: encoding -- Env Var: RCLONE_JOTTACLOUD_ENCODING -- Type: MultiEncoder -- Default: - Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot + Jottacloud exhibits some inconsistent behaviours regarding deleted files and folders which may cause Copy, Move and DirMove + operations to previously deleted paths to fail. Emptying the trash should help in such cases. -Limitations + # Koofr -Note that Jottacloud is case insensitive so you can't have a file called -"Hello.doc" and one called "hello.doc". + Paths are specified as `remote:path` -There are quite a few characters that can't be in Jottacloud file names. 
-Rclone will map these names to and from an identical looking unicode
-equivalent. For example if a file has a ? in it will be mapped to ?
-instead.

-Jottacloud only supports filenames up to 255 characters in length.
+ ## Configuration

-Troubleshooting
+ The initial setup for Koofr involves creating an application password for
+ rclone. You can do that by opening the Koofr
+ [web application](https://app.koofr.net/app/admin/preferences/password),
+ giving the password a nice name like `rclone` and clicking on generate.

-Jottacloud exhibits some inconsistent behaviours regarding deleted files
-and folders which may cause Copy, Move and DirMove operations to
-previously deleted paths to fail. Emptying the trash should help in such
-cases.
+ Here is an example of how to make a remote called `koofr`. First run:

-Koofr
+     rclone config

-Paths are specified as remote:path
+ This will guide you through an interactive setup process:

-Paths may be as deep as required, e.g. remote:directory/subdirectory.
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> koofr
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+22 / Koofr, Digi Storage and other Koofr-compatible storage providers
+   \ (koofr)
+[snip]
+Storage> koofr
+Option provider.
+Choose your storage provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+1 / Koofr, https://app.koofr.net/
+  \ (koofr)
+2 / Digi Storage, https://storage.rcs-rds.ro/
+  \ (digistorage)
+3 / Any other Koofr API compatible storage service
+  \ (other)
+provider> 1
+Option user.
+Your user name.
+Enter a value.
+user> USERNAME
+Option password.
+Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password).
+Choose an alternative below.
+y) Yes, type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+Remote config
+--------------------
+[koofr]
+type = koofr
+provider = koofr
+user = USERNAME
+password = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y

-Configuration

-The initial setup for Koofr involves creating an application password
-for rclone. You can do that by opening the Koofr web application, giving
-the password a nice name like rclone and clicking on generate.

-Here is an example of how to make a remote called koofr. 
First run:

- rclone config

-This will guide you through an interactive setup process:

- No remotes found, make a new one?
- n) New remote
- s) Set configuration password
- q) Quit config
- n/s/q> n
- name> koofr
- Option Storage.
- Type of storage to configure.
- Choose a number from below, or type in your own value.
- [snip]
- 22 / Koofr, Digi Storage and other Koofr-compatible storage providers
- \ (koofr)
- [snip]
- Storage> koofr
- Option provider. Choose your storage provider.
- Choose a number from below, or type in your own value.
- Press Enter to leave empty.
- 1 / Koofr, https://app.koofr.net/
- \ (koofr)
- 2 / Digi Storage, https://storage.rcs-rds.ro/
- \ (digistorage)
- 3 / Any other Koofr API compatible storage service
- \ (other)
- provider> 1
- Option user.
- Your user name.
- Enter a value.
- user> USERNAME
- Option password.
- Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password).
- Choose an alternative below.
- y) Yes, type in my own password
- g) Generate random password
- y/g> y
- Enter the password:
- password:
- Confirm the password:
- password:
- Edit advanced config?
- y) Yes
- n) No (default)
- y/n> n
- Remote config
- --------------------
- [koofr]
- type = koofr
- provider = koofr
- user = USERNAME
- password = *** ENCRYPTED ***
- --------------------
- y) Yes this is OK (default)
- e) Edit this remote
- d) Delete this remote
- y/e/d> y

-You can choose to edit advanced config in order to enter your own
-service URL if you use an on-premise or white label Koofr instance, or
-choose an alternative mount instead of your primary storage.
+ You can choose to edit advanced config in order to enter your own service URL
+ if you use an on-premise or white label Koofr instance, or choose an alternative
+ mount instead of your primary storage.

-Once configured you can then use rclone like this,
+ Once configured you can then use `rclone` like this,

-List directories in top level of your Koofr
+ List directories in top level of your Koofr

- rclone lsd koofr:
+     rclone lsd koofr:

-List all the files in your Koofr
+ List all the files in your Koofr

- rclone ls koofr:
+     rclone ls koofr:

-To copy a local directory to an Koofr directory called backup
+ To copy a local directory to a Koofr directory called backup

- rclone copy /home/source koofr:backup
+     rclone copy /home/source koofr:backup

-Restricted filename characters
+ ### Restricted filename characters

-In addition to the default restricted characters set the following
-characters are also replaced:
+ In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
+ the following characters are also replaced:

- Character Value Replacement
- ----------- ------- -------------
- \ 0x5C \
+ | Character | Value | Replacement |
+ | --------- |:-----:|:-----------:|
+ | \ | 0x5C | \ |

-Invalid UTF-8 bytes will also be replaced, as they can't be used in XML
-strings.
+ Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+ as they can't be used in XML strings.

-Standard options
+ ### Standard options

-Here are the Standard options specific to koofr (Koofr, Digi Storage and
-other Koofr-compatible storage providers).
+ Here are the Standard options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).

+ #### --koofr-provider
- ---koofr-provider - -Choose your storage provider. - -Properties: - -- Config: provider -- Env Var: RCLONE_KOOFR_PROVIDER -- Type: string -- Required: false -- Examples: - - "koofr" - - Koofr, https://app.koofr.net/ - - "digistorage" - - Digi Storage, https://storage.rcs-rds.ro/ - - "other" - - Any other Koofr API compatible storage service - ---koofr-endpoint - -The Koofr API endpoint to use. - -Properties: - -- Config: endpoint -- Env Var: RCLONE_KOOFR_ENDPOINT -- Provider: other -- Type: string -- Required: true - ---koofr-user - -Your user name. - -Properties: - -- Config: user -- Env Var: RCLONE_KOOFR_USER -- Type: string -- Required: true - ---koofr-password - -Your password for rclone (generate one at -https://app.koofr.net/app/admin/preferences/password). - -NB Input to this must be obscured - see rclone obscure. - -Properties: - -- Config: password -- Env Var: RCLONE_KOOFR_PASSWORD -- Provider: koofr -- Type: string -- Required: true - ---koofr-password - -Your password for rclone (generate one at -https://storage.rcs-rds.ro/app/admin/preferences/password). - -NB Input to this must be obscured - see rclone obscure. - -Properties: - -- Config: password -- Env Var: RCLONE_KOOFR_PASSWORD -- Provider: digistorage -- Type: string -- Required: true - ---koofr-password - -Your password for rclone (generate one at your service's settings page). - -NB Input to this must be obscured - see rclone obscure. - -Properties: - -- Config: password -- Env Var: RCLONE_KOOFR_PASSWORD -- Provider: other -- Type: string -- Required: true - -Advanced options - -Here are the Advanced options specific to koofr (Koofr, Digi Storage and -other Koofr-compatible storage providers). - ---koofr-mountid - -Mount ID of the mount to use. - -If omitted, the primary mount is used. - -Properties: - -- Config: mountid -- Env Var: RCLONE_KOOFR_MOUNTID -- Type: string -- Required: false - ---koofr-setmtime - -Does the backend support setting modification time. - -Set this to false if you use a mount ID that points to a Dropbox or -Amazon Drive backend. - -Properties: - -- Config: setmtime -- Env Var: RCLONE_KOOFR_SETMTIME -- Type: bool -- Default: true - ---koofr-encoding - -The encoding for the backend. - -See the encoding section in the overview for more info. - -Properties: - -- Config: encoding -- Env Var: RCLONE_KOOFR_ENCODING -- Type: MultiEncoder -- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot - -Limitations - -Note that Koofr is case insensitive so you can't have a file called -"Hello.doc" and one called "hello.doc". - -Providers - -Koofr - -This is the original Koofr storage provider used as main example and -described in the configuration section above. - -Digi Storage - -Digi Storage is a cloud storage service run by Digi.ro that provides a -Koofr API. - -Here is an example of how to make a remote called ds. First run: - - rclone config - -This will guide you through an interactive setup process: - - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n - name> ds - Option Storage. - Type of storage to configure. - Choose a number from below, or type in your own value. - [snip] - 22 / Koofr, Digi Storage and other Koofr-compatible storage providers - \ (koofr) - [snip] - Storage> koofr - Option provider. - Choose your storage provider. - Choose a number from below, or type in your own value. - Press Enter to leave empty. 
- 1 / Koofr, https://app.koofr.net/ - \ (koofr) - 2 / Digi Storage, https://storage.rcs-rds.ro/ - \ (digistorage) - 3 / Any other Koofr API compatible storage service - \ (other) - provider> 2 - Option user. - Your user name. - Enter a value. - user> USERNAME - Option password. - Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password). - Choose an alternative below. - y) Yes, type in my own password - g) Generate random password - y/g> y - Enter the password: - password: - Confirm the password: - password: - Edit advanced config? - y) Yes - n) No (default) - y/n> n - -------------------- - [ds] - type = koofr - provider = digistorage - user = USERNAME - password = *** ENCRYPTED *** - -------------------- - y) Yes this is OK (default) - e) Edit this remote - d) Delete this remote - y/e/d> y - -Other - -You may also want to use another, public or private storage provider -that runs a Koofr API compatible service, by simply providing the base -URL to connect to. - -Here is an example of how to make a remote called other. First run: - - rclone config - -This will guide you through an interactive setup process: - - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n - name> other - Option Storage. - Type of storage to configure. - Choose a number from below, or type in your own value. - [snip] - 22 / Koofr, Digi Storage and other Koofr-compatible storage providers - \ (koofr) - [snip] - Storage> koofr - Option provider. - Choose your storage provider. - Choose a number from below, or type in your own value. - Press Enter to leave empty. - 1 / Koofr, https://app.koofr.net/ - \ (koofr) - 2 / Digi Storage, https://storage.rcs-rds.ro/ - \ (digistorage) - 3 / Any other Koofr API compatible storage service - \ (other) - provider> 3 - Option endpoint. The Koofr API endpoint to use. - Enter a value. - endpoint> https://koofr.other.org - Option user. + + Properties: + + - Config: endpoint + - Env Var: RCLONE_KOOFR_ENDPOINT + - Provider: other + - Type: string + - Required: true + + #### --koofr-user + Your user name. - Enter a value. - user> USERNAME - Option password. + + Properties: + + - Config: user + - Env Var: RCLONE_KOOFR_USER + - Type: string + - Required: true + + #### --koofr-password + + Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password). + + **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + + Properties: + + - Config: password + - Env Var: RCLONE_KOOFR_PASSWORD + - Provider: koofr + - Type: string + - Required: true + + #### --koofr-password + + Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password). + + **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + + Properties: + + - Config: password + - Env Var: RCLONE_KOOFR_PASSWORD + - Provider: digistorage + - Type: string + - Required: true + + #### --koofr-password + Your password for rclone (generate one at your service's settings page). - Choose an alternative below. - y) Yes, type in my own password - g) Generate random password - y/g> y - Enter the password: - password: - Confirm the password: - password: - Edit advanced config? 
- y) Yes - n) No (default) - y/n> n - -------------------- - [other] - type = koofr - provider = other - endpoint = https://koofr.other.org - user = USERNAME - password = *** ENCRYPTED *** - -------------------- - y) Yes this is OK (default) - e) Edit this remote - d) Delete this remote - y/e/d> y -Mail.ru Cloud + **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). -Mail.ru Cloud is a cloud storage provided by a Russian internet company -Mail.Ru Group. The official desktop client is Disk-O:, available on -Windows and Mac OS. + Properties: -Currently it is recommended to disable 2FA on Mail.ru accounts intended -for rclone until it gets eventually implemented. + - Config: password + - Env Var: RCLONE_KOOFR_PASSWORD + - Provider: other + - Type: string + - Required: true -Features highlights + ### Advanced options -- Paths may be as deep as required, e.g. remote:directory/subdirectory -- Files have a last modified time property, directories don't -- Deleted files are by default moved to the trash -- Files and directories can be shared via public links -- Partial uploads or streaming are not supported, file size must be - known before upload -- Maximum file size is limited to 2G for a free account, unlimited for - paid accounts -- Storage keeps hash for all files and performs transparent - deduplication, the hash algorithm is a modified SHA1 -- If a particular file is already present in storage, one can quickly - submit file hash instead of long file upload (this optimization is - supported by rclone) + Here are the Advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers). -Configuration + #### --koofr-mountid -Here is an example of making a mailru configuration. + Mount ID of the mount to use. -First create a Mail.ru Cloud account and choose a tariff. + If omitted, the primary mount is used. -You will need to log in and create an app password for rclone. Rclone -will not work with your normal username and password - it will give an -error like oauth2: server response missing access_token. + Properties: -- Click on your user icon in the top right -- Go to Security / "Пароль и безопасность" -- Click password for apps / "Пароли для внешних приложений" -- Add the password - give it a name - eg "rclone" -- Copy the password and use this password below - your normal login - password won't work. + - Config: mountid + - Env Var: RCLONE_KOOFR_MOUNTID + - Type: string + - Required: false -Now run + #### --koofr-setmtime - rclone config + Does the backend support setting modification time. -This will guide you through an interactive setup process: + Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend. - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n - name> remote - Type of storage to configure. - Type of storage to configure. - Enter a string value. Press Enter for the default (""). - Choose a number from below, or type in your own value - [snip] - XX / Mail.ru Cloud - \ "mailru" - [snip] - Storage> mailru - User name (usually email) - Enter a string value. Press Enter for the default (""). - user> username@mail.ru - Password + Properties: + + - Config: setmtime + - Env Var: RCLONE_KOOFR_SETMTIME + - Type: bool + - Default: true + + #### --koofr-encoding + + The encoding for the backend. + + See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. 
+
+ Properties:
+
+ - Config: encoding
+ - Env Var: RCLONE_KOOFR_ENCODING
+ - Type: MultiEncoder
+ - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+
+
+ ## Limitations
+
+ Note that Koofr is case insensitive so you can't have a file called
+ "Hello.doc" and one called "hello.doc".
+
+ ## Providers
+
+ ### Koofr
+
+ This is the original [Koofr](https://koofr.eu) storage provider used as main example and described in the [configuration](#configuration) section above.
+
+ ### Digi Storage
+
+ [Digi Storage](https://www.digi.ro/servicii/online/digi-storage) is a cloud storage service run by [Digi.ro](https://www.digi.ro/) that
+ provides a Koofr API.
+
+ Here is an example of how to make a remote called `ds`. First run:
+
+     rclone config
+
+ This will guide you through an interactive setup process:
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> ds
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+22 / Koofr, Digi Storage and other Koofr-compatible storage providers
+   \ (koofr)
+[snip]
+Storage> koofr
+Option provider.
+Choose your storage provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Koofr, https://app.koofr.net/
+   \ (koofr)
+ 2 / Digi Storage, https://storage.rcs-rds.ro/
+   \ (digistorage)
+ 3 / Any other Koofr API compatible storage service
+   \ (other)
+provider> 2
+Option user.
+Your user name.
+Enter a value.
+user> USERNAME
+Option password.
+Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password).
+Choose an alternative below.
+y) Yes, type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+--------------------
+[ds]
+type = koofr
+provider = digistorage
+user = USERNAME
+password = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+ ### Other
+
+ You may also want to use another, public or private storage provider that runs a Koofr API compatible service, by simply providing the base URL to connect to.
+
+ Here is an example of how to make a remote called `other`. First run:
+
+     rclone config
+
+ This will guide you through an interactive setup process:
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> other
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+22 / Koofr, Digi Storage and other Koofr-compatible storage providers
+   \ (koofr)
+[snip]
+Storage> koofr
+Option provider.
+Choose your storage provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Koofr, https://app.koofr.net/
+   \ (koofr)
+ 2 / Digi Storage, https://storage.rcs-rds.ro/
+   \ (digistorage)
+ 3 / Any other Koofr API compatible storage service
+   \ (other)
+provider> 3
+Option endpoint.
+The Koofr API endpoint to use.
+Enter a value.
+endpoint> https://koofr.other.org
+Option user.
+Your user name.
+Enter a value.
+user> USERNAME
+Option password.
+Your password for rclone (generate one at your service's settings page).
+Choose an alternative below.
+y) Yes, type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+--------------------
+[other]
+type = koofr
+provider = other
+endpoint = https://koofr.other.org
+user = USERNAME
+password = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
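+
+ If you would rather script the setup, the same remote can be created
+ non-interactively with `rclone config create`. This is a minimal sketch -
+ the endpoint URL, user name and password are placeholders for your own
+ service's details, and `--obscure` makes rclone obscure the password
+ before storing it in the config file:
+
+     rclone config create other koofr \
+         provider=other \
+         endpoint=https://koofr.other.org \
+         user=USERNAME \
+         password=YOUR_PASSWORD --obscure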
+
+ # Mail.ru Cloud
+
+ [Mail.ru Cloud](https://cloud.mail.ru/) is a cloud storage provided by a Russian internet company [Mail.Ru Group](https://mail.ru). The official desktop client is [Disk-O:](https://disk-o.cloud/en), available on Windows and Mac OS.
+
+ ## Features highlights
+
+ - Paths may be as deep as required, e.g. `remote:directory/subdirectory`
+ - Files have a `last modified time` property, directories don't
+ - Deleted files are by default moved to the trash
+ - Files and directories can be shared via public links
+ - Partial uploads or streaming are not supported, file size must be known before upload
+ - Maximum file size is limited to 2G for a free account, unlimited for paid accounts
+ - Storage keeps hash for all files and performs transparent deduplication,
+   the hash algorithm is a modified SHA1
+ - If a particular file is already present in storage, one can quickly submit file hash
+   instead of long file upload (this optimization is supported by rclone)
+
+ ## Configuration
+
+ Here is an example of making a mailru configuration.
+
+ First create a Mail.ru Cloud account and choose a tariff.
+
+ You will need to log in and create an app password for rclone. Rclone
+ **will not work** with your normal username and password - it will
+ give an error like `oauth2: server response missing access_token`.
+
+ - Click on your user icon in the top right
+ - Go to Security / "Пароль и безопасность"
+ - Click password for apps / "Пароли для внешних приложений"
+ - Add the password - give it a name - eg "rclone"
+ - Copy the password and use this password below - your normal login password won't work.
+
+ Now run
+
+     rclone config
+
+ This will guide you through an interactive setup process:
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / Mail.ru Cloud
+   \ "mailru"
+[snip]
+Storage> mailru
+User name (usually email)
+Enter a string value. Press Enter for the default ("").
+user> username@mail.ru
+Password
+
+This must be an app password - rclone will not work with your normal
+password. See the Configuration section in the docs for how to make an
+app password.
+y) Yes type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Skip full upload if there is another file with same data hash.
+This feature is called "speedup" or "put by hash". It is especially efficient
+in case of generally available files like popular books, video or audio clips
+[snip]
+Enter a boolean value (true or false). Press Enter for the default ("true").
+Choose a number from below, or type in your own value
+ 1 / Enable
+   \ "true"
+ 2 / Disable
+   \ "false"
+speedup_enable> 1
+Edit advanced config? (y/n)
+y) Yes
+n) No
+y/n> n
+Remote config
+--------------------
+[remote]
+type = mailru
+user = username@mail.ru
+pass = *** ENCRYPTED ***
+speedup_enable = true
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+ Configuration of this backend does not require a local web browser.
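+
+ The same configuration can also be supplied without touching the config
+ file at all, via the environment variables listed with each option
+ below. A minimal sketch, with the app password as a placeholder:
+
+     export RCLONE_MAILRU_USER=username@mail.ru
+     export RCLONE_MAILRU_PASS=$(rclone obscure 'APP_PASSWORD')
+     rclone lsd :mailru:
+
+ The `:mailru:` syntax runs the backend on the fly instead of looking up
+ a named remote in the config file.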
+ You can use the configured backend as shown below: + + See top level directories + + rclone lsd remote: + + Make a new directory + + rclone mkdir remote:directory + + List the contents of a directory + + rclone ls remote:directory + + Sync `/home/local/directory` to the remote path, deleting any + excess files in the path. + + rclone sync --interactive /home/local/directory remote:directory + + ### Modified time + + Files support a modification time attribute with up to 1 second precision. + Directories do not have a modification time, which is shown as "Jan 1 1970". + + ### Hash checksums + + Hash sums use a custom Mail.ru algorithm based on SHA1. + If file size is less than or equal to the SHA1 block size (20 bytes), + its hash is simply its data right-padded with zero bytes. + Hash sum of a larger file is computed as a SHA1 sum of the file data + bytes concatenated with a decimal representation of the data length. + + ### Emptying Trash + + Removing a file or directory actually moves it to the trash, which is not + visible to rclone but can be seen in a web browser. The trashed file + still occupies part of total quota. If you wish to empty your trash + and free some quota, you can use the `rclone cleanup remote:` command, + which will permanently delete all your trashed files. + This command does not take any path arguments. + + ### Quota information + + To view your current quota you can use the `rclone about remote:` + command which will display your usage limit (quota) and the current usage. + + ### Restricted filename characters + + In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) + the following characters are also replaced: + + | Character | Value | Replacement | + | --------- |:-----:|:-----------:| + | " | 0x22 | " | + | * | 0x2A | * | + | : | 0x3A | : | + | < | 0x3C | < | + | > | 0x3E | > | + | ? | 0x3F | ? | + | \ | 0x5C | \ | + | \| | 0x7C | | | + + Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), + as they can't be used in JSON strings. + + + ### Standard options + + Here are the Standard options specific to mailru (Mail.ru Cloud). + + #### --mailru-client-id + + OAuth Client Id. + + Leave blank normally. + + Properties: + + - Config: client_id + - Env Var: RCLONE_MAILRU_CLIENT_ID + - Type: string + - Required: false + + #### --mailru-client-secret + + OAuth Client Secret. + + Leave blank normally. + + Properties: + + - Config: client_secret + - Env Var: RCLONE_MAILRU_CLIENT_SECRET + - Type: string + - Required: false + + #### --mailru-user + + User name (usually email). + + Properties: + + - Config: user + - Env Var: RCLONE_MAILRU_USER + - Type: string + - Required: true + + #### --mailru-pass + + Password. This must be an app password - rclone will not work with your normal password. See the Configuration section in the docs for how to make an app password. - y) Yes type in my own password - g) Generate random password - y/g> y - Enter the password: - password: - Confirm the password: - password: + + + **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + + Properties: + + - Config: pass + - Env Var: RCLONE_MAILRU_PASS + - Type: string + - Required: true + + #### --mailru-speedup-enable + Skip full upload if there is another file with same data hash. + This feature is called "speedup" or "put by hash". 
It is especially efficient - in case of generally available files like popular books, video or audio clips - [snip] - Enter a boolean value (true or false). Press Enter for the default ("true"). - Choose a number from below, or type in your own value - 1 / Enable - \ "true" - 2 / Disable - \ "false" - speedup_enable> 1 - Edit advanced config? (y/n) - y) Yes - n) No - y/n> n - Remote config - -------------------- - [remote] - type = mailru - user = username@mail.ru - pass = *** ENCRYPTED *** - speedup_enable = true - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y + in case of generally available files like popular books, video or audio clips, + because files are searched by hash in all accounts of all mailru users. + It is meaningless and ineffective if source file is unique or encrypted. + Please note that rclone may need local memory and disk space to calculate + content hash in advance and decide whether full upload is required. + Also, if rclone does not know file size in advance (e.g. in case of + streaming or partial uploads), it will not even try this optimization. -Configuration of this backend does not require a local web browser. You -can use the configured backend as shown below: + Properties: -See top level directories + - Config: speedup_enable + - Env Var: RCLONE_MAILRU_SPEEDUP_ENABLE + - Type: bool + - Default: true + - Examples: + - "true" + - Enable + - "false" + - Disable - rclone lsd remote: + ### Advanced options -Make a new directory + Here are the Advanced options specific to mailru (Mail.ru Cloud). - rclone mkdir remote:directory + #### --mailru-token -List the contents of a directory + OAuth Access Token as a JSON blob. - rclone ls remote:directory + Properties: -Sync /home/local/directory to the remote path, deleting any excess files -in the path. + - Config: token + - Env Var: RCLONE_MAILRU_TOKEN + - Type: string + - Required: false - rclone sync --interactive /home/local/directory remote:directory + #### --mailru-auth-url -Modified time + Auth server URL. -Files support a modification time attribute with up to 1 second -precision. Directories do not have a modification time, which is shown -as "Jan 1 1970". + Leave blank to use the provider defaults. -Hash checksums + Properties: -Hash sums use a custom Mail.ru algorithm based on SHA1. If file size is -less than or equal to the SHA1 block size (20 bytes), its hash is simply -its data right-padded with zero bytes. Hash sum of a larger file is -computed as a SHA1 sum of the file data bytes concatenated with a -decimal representation of the data length. + - Config: auth_url + - Env Var: RCLONE_MAILRU_AUTH_URL + - Type: string + - Required: false -Emptying Trash + #### --mailru-token-url -Removing a file or directory actually moves it to the trash, which is -not visible to rclone but can be seen in a web browser. The trashed file -still occupies part of total quota. If you wish to empty your trash and -free some quota, you can use the rclone cleanup remote: command, which -will permanently delete all your trashed files. This command does not -take any path arguments. + Token server url. -Quota information + Leave blank to use the provider defaults. -To view your current quota you can use the rclone about remote: command -which will display your usage limit (quota) and the current usage. 
+ Properties: -Restricted filename characters + - Config: token_url + - Env Var: RCLONE_MAILRU_TOKEN_URL + - Type: string + - Required: false -In addition to the default restricted characters set the following -characters are also replaced: + #### --mailru-speedup-file-patterns - Character Value Replacement - ----------- ------- ------------- - " 0x22 " - * 0x2A * - : 0x3A : - < 0x3C < - > 0x3E > - ? 0x3F ? - \ 0x5C \ - | 0x7C | + Comma separated list of file name patterns eligible for speedup (put by hash). -Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON -strings. + Patterns are case insensitive and can contain '*' or '?' meta characters. -Standard options + Properties: -Here are the Standard options specific to mailru (Mail.ru Cloud). + - Config: speedup_file_patterns + - Env Var: RCLONE_MAILRU_SPEEDUP_FILE_PATTERNS + - Type: string + - Default: "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf" + - Examples: + - "" + - Empty list completely disables speedup (put by hash). + - "*" + - All files will be attempted for speedup. + - "*.mkv,*.avi,*.mp4,*.mp3" + - Only common audio/video files will be tried for put by hash. + - "*.zip,*.gz,*.rar,*.pdf" + - Only common archives or PDF books will be tried for speedup. ---mailru-user + #### --mailru-speedup-max-disk -User name (usually email). + This option allows you to disable speedup (put by hash) for large files. -Properties: + Reason is that preliminary hashing can exhaust your RAM or disk space. -- Config: user -- Env Var: RCLONE_MAILRU_USER -- Type: string -- Required: true + Properties: ---mailru-pass + - Config: speedup_max_disk + - Env Var: RCLONE_MAILRU_SPEEDUP_MAX_DISK + - Type: SizeSuffix + - Default: 3Gi + - Examples: + - "0" + - Completely disable speedup (put by hash). + - "1G" + - Files larger than 1Gb will be uploaded directly. + - "3G" + - Choose this option if you have less than 3Gb free on local disk. -Password. + #### --mailru-speedup-max-memory -This must be an app password - rclone will not work with your normal -password. See the Configuration section in the docs for how to make an -app password. + Files larger than the size given below will always be hashed on disk. -NB Input to this must be obscured - see rclone obscure. + Properties: -Properties: + - Config: speedup_max_memory + - Env Var: RCLONE_MAILRU_SPEEDUP_MAX_MEMORY + - Type: SizeSuffix + - Default: 32Mi + - Examples: + - "0" + - Preliminary hashing will always be done in a temporary disk location. + - "32M" + - Do not dedicate more than 32Mb RAM for preliminary hashing. + - "256M" + - You have at most 256Mb RAM free for hash calculations. -- Config: pass -- Env Var: RCLONE_MAILRU_PASS -- Type: string -- Required: true + #### --mailru-check-hash ---mailru-speedup-enable + What should copy do if file checksum is mismatched or invalid. -Skip full upload if there is another file with same data hash. + Properties: -This feature is called "speedup" or "put by hash". It is especially -efficient in case of generally available files like popular books, video -or audio clips, because files are searched by hash in all accounts of -all mailru users. It is meaningless and ineffective if source file is -unique or encrypted. Please note that rclone may need local memory and -disk space to calculate content hash in advance and decide whether full -upload is required. Also, if rclone does not know file size in advance -(e.g. in case of streaming or partial uploads), it will not even try -this optimization. 
+ - Config: check_hash + - Env Var: RCLONE_MAILRU_CHECK_HASH + - Type: bool + - Default: true + - Examples: + - "true" + - Fail with error. + - "false" + - Ignore and continue. -Properties: + #### --mailru-user-agent -- Config: speedup_enable -- Env Var: RCLONE_MAILRU_SPEEDUP_ENABLE -- Type: bool -- Default: true -- Examples: - - "true" - - Enable - - "false" - - Disable + HTTP user agent used internally by client. -Advanced options + Defaults to "rclone/VERSION" or "--user-agent" provided on command line. -Here are the Advanced options specific to mailru (Mail.ru Cloud). + Properties: ---mailru-speedup-file-patterns + - Config: user_agent + - Env Var: RCLONE_MAILRU_USER_AGENT + - Type: string + - Required: false -Comma separated list of file name patterns eligible for speedup (put by -hash). + #### --mailru-quirks -Patterns are case insensitive and can contain '*' or '?' meta -characters. + Comma separated list of internal maintenance flags. -Properties: + This option must not be used by an ordinary user. It is intended only to + facilitate remote troubleshooting of backend issues. Strict meaning of + flags is not documented and not guaranteed to persist between releases. + Quirks will be removed when the backend grows stable. + Supported quirks: atomicmkdir binlist unknowndirs -- Config: speedup_file_patterns -- Env Var: RCLONE_MAILRU_SPEEDUP_FILE_PATTERNS -- Type: string -- Default: ".mkv,.avi,.mp4,.mp3,.zip,.gz,.rar,.pdf" -- Examples: - - "" - - Empty list completely disables speedup (put by hash). - - "*" - - All files will be attempted for speedup. - - ".mkv,.avi,.mp4,.mp3" - - Only common audio/video files will be tried for put by hash. - - ".zip,.gz,.rar,.pdf" - - Only common archives or PDF books will be tried for speedup. + Properties: ---mailru-speedup-max-disk + - Config: quirks + - Env Var: RCLONE_MAILRU_QUIRKS + - Type: string + - Required: false -This option allows you to disable speedup (put by hash) for large files. + #### --mailru-encoding -Reason is that preliminary hashing can exhaust your RAM or disk space. + The encoding for the backend. -Properties: + See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. -- Config: speedup_max_disk -- Env Var: RCLONE_MAILRU_SPEEDUP_MAX_DISK -- Type: SizeSuffix -- Default: 3Gi -- Examples: - - "0" - - Completely disable speedup (put by hash). - - "1G" - - Files larger than 1Gb will be uploaded directly. - - "3G" - - Choose this option if you have less than 3Gb free on local - disk. + Properties: ---mailru-speedup-max-memory + - Config: encoding + - Env Var: RCLONE_MAILRU_ENCODING + - Type: MultiEncoder + - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot -Files larger than the size given below will always be hashed on disk. -Properties: -- Config: speedup_max_memory -- Env Var: RCLONE_MAILRU_SPEEDUP_MAX_MEMORY -- Type: SizeSuffix -- Default: 32Mi -- Examples: - - "0" - - Preliminary hashing will always be done in a temporary disk - location. - - "32M" - - Do not dedicate more than 32Mb RAM for preliminary hashing. - - "256M" - - You have at most 256Mb RAM free for hash calculations. + ## Limitations ---mailru-check-hash + File size limits depend on your account. A single file size is limited by 2G + for a free account and unlimited for paid tariffs. Please refer to the Mail.ru + site for the total uploaded size limits. -What should copy do if file checksum is mismatched or invalid. 
+
+ Note that Mailru is case insensitive so you can't have a file called
+ "Hello.doc" and one called "hello.doc".

-Properties:

-- Config: check_hash
-- Env Var: RCLONE_MAILRU_CHECK_HASH
-- Type: bool
-- Default: true
-- Examples:
-    - "true"
-        - Fail with error.
-    - "false"
-        - Ignore and continue.

---mailru-user-agent

-HTTP user agent used internally by client.

-Defaults to "rclone/VERSION" or "--user-agent" provided on command line.

-Properties:

-- Config: user_agent
-- Env Var: RCLONE_MAILRU_USER_AGENT
-- Type: string
-- Required: false

---mailru-quirks

-Comma separated list of internal maintenance flags.

-This option must not be used by an ordinary user. It is intended only to
-facilitate remote troubleshooting of backend issues. Strict meaning of
-flags is not documented and not guaranteed to persist between releases.
-Quirks will be removed when the backend grows stable. Supported quirks:
-atomicmkdir binlist unknowndirs

-Properties:

-- Config: quirks
-- Env Var: RCLONE_MAILRU_QUIRKS
-- Type: string
-- Required: false

---mailru-encoding

-The encoding for the backend.

-See the encoding section in the overview for more info.

-Properties:

-- Config: encoding
-- Env Var: RCLONE_MAILRU_ENCODING
-- Type: MultiEncoder
-- Default:
-    Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot

-Limitations

-File size limits depend on your account. A single file size is limited
-by 2G for a free account and unlimited for paid tariffs. Please refer to
-the Mail.ru site for the total uploaded size limits.

-Note that Mailru is case insensitive so you can't have a file called
-"Hello.doc" and one called "hello.doc".

+ # Mega
+
+ [Mega](https://mega.nz/) is a cloud storage and file hosting service
+ known for its security feature where all files are encrypted locally
+ before they are uploaded. This prevents anyone (including employees of
+ Mega) from accessing the files without knowledge of the key used for
+ encryption.
+
+ This is an rclone backend for Mega which supports the file transfer
+ features of Mega using the same client side encryption.
+
+ Paths are specified as `remote:path`
+
+ Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
+
+ ## Configuration
+
+ Here is an example of how to make a remote called `remote`. First run:
+
+     rclone config
+
+ This will guide you through an interactive setup process:
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Mega
+   \ "mega"
+[snip]
+Storage> mega
+User name
+user> you@example.com
+Password.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank
+y/g/n> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Remote config
+--------------------
+[remote]
+type = mega
+user = you@example.com
+pass = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+ **NOTE:** The encryption keys need to have been already generated after a regular login
+ via the browser, otherwise attempting to use the credentials in `rclone` will fail.
+
+ Once configured you can then use `rclone` like this,
+
+ List directories in top level of your Mega
+
+     rclone lsd remote:
+
+ List all the files in your Mega
+
+     rclone ls remote:
+
+ To copy a local directory to a Mega directory called backup
+
+     rclone copy /home/source remote:backup

-Mega

-Mega is a cloud storage and file hosting service known for its security
-feature where all files are encrypted locally before they are uploaded.
-This prevents anyone (including employees of Mega) from accessing the
-files without knowledge of the key used for encryption.

-This is an rclone backend for Mega which supports the file transfer
-features of Mega using the same client side encryption.

-Paths are specified as remote:path

-Paths may be as deep as required, e.g. remote:directory/subdirectory.

-Configuration

-Here is an example of how to make a remote called remote. First run:

-    rclone config

-This will guide you through an interactive setup process:

+ ### Modified time and hashes
+
+ Mega does not support modification times or hashes yet.
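+
+ Without either attribute, rclone effectively falls back to comparing
+ file sizes when deciding what to transfer. If you want that behaviour
+ to be explicit, for example in scripts, a minimal sketch:
+
+     rclone sync --size-only /home/source remote:backup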
+
+ ### Restricted filename characters
+
+ | Character | Value | Replacement |
+ | --------- |:-----:|:-----------:|
+ | NUL | 0x00 | ␀ |
+ | / | 0x2F | ／ |
+
+ Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+ as they can't be used in JSON strings.
+
+ ### Duplicated files
+
+ Mega can have two files with exactly the same name and path (unlike a
+ normal file system).
+
+ Duplicated files cause problems with the syncing and you will see
+ messages in the log about duplicates.
+
+ Use `rclone dedupe` to fix duplicated files.
+
+ ### Failure to log-in
+
+ #### Object not found
+
+ If you are connecting to your Mega remote for the first time,
+ to test access and synchronization, you may receive an error such as
+
+```
+Failed to create file system for "my-mega-remote:":
+couldn't login: Object (typically, node or user) not found
+```
+
+ The diagnostic steps often recommended in the [rclone forum](https://forum.rclone.org/search?q=mega)
+ start with the **MEGAcmd** utility. Note that this refers to
+ the official C++ command from https://github.com/meganz/MEGAcmd
+ and not the Go language built command from t3rm1n4l/megacmd
+ that is no longer maintained.
+
+ Follow the instructions for installing MEGAcmd and try accessing
+ your remote as they recommend. You can establish whether or not
+ you can log in using MEGAcmd, and obtain diagnostic information
+ to help you, and search or work with others in the forum.
+
+```
+MEGA CMD> login me@example.com
+Password:
+Fetching nodes ...
+Loading transfers from local cache
+Login complete as me@example.com
+me@example.com:/$
+```
+
+ Note that some have found issues with passwords containing special
+ characters. If you cannot log on with rclone, but MEGAcmd logs on
+ just fine, then consider changing your password temporarily to
+ pure alphanumeric characters, in case that helps.
+
+
+ #### Repeated commands block access
+
+ Mega remotes seem to get blocked (reject logins) under "heavy use".
+ We haven't worked out the exact blocking rules but it seems to be
+ related to fast paced, successive rclone commands.
+
+ For example, executing this command 90 times in a row `rclone link
+ remote:file` will cause the remote to become "blocked". This is not an
+ abnormal situation, for example if you wish to get the public links of
+ a directory with hundreds of files... After more or less a week, the
+ remote will accept rclone logins normally again.
+
+ You can mitigate this issue by mounting the remote with `rclone
+ mount`. This will log in when mounting and log out when unmounting
+ only. You can also run `rclone rcd` and then use `rclone rc` to run
+ the commands over the API to avoid logging in each time.
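+
+ A minimal sketch of the rc approach (the flags and paths here are
+ illustrative - see the [rc docs](https://rclone.org/rc/) for
+ authentication options): start the daemon once, then issue commands
+ against it so that only a single login takes place:
+
+     rclone rcd --rc-no-auth &
+     rclone rc sync/copy srcFs=/home/source dstFs=remote:backup
+     rclone rc operations/list fs=remote: remote=backup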
+
+ Rclone does not currently close mega sessions (you can see them in the
+ web interface), however closing the sessions does not solve the issue.
+
+ If you space rclone commands by 3 seconds it will avoid blocking the
+ remote. We haven't identified the exact blocking rules, so perhaps one
+ could execute the command 80 times without waiting and avoid blocking
+ by waiting 3 seconds, then continuing...
+
+ Note that this has been observed by trial and error and might not be
+ set in stone.
+
+ Other tools seem not to produce this blocking effect, as they use a
+ different working approach (state-based, using sessionIDs instead of
+ log-in) which isn't compatible with the current stateless rclone
+ approach.
+
+ Note that once blocked, the use of other tools (such as megacmd) is
+ not a sure workaround: the following megacmd login times have been
+ observed in succession for a blocked remote: 7 min, 20 min, 30 min,
+ 30 min, 30 min. Web access looks unaffected though.
+
+ Investigation is continuing in relation to workarounds based on
+ timeouts, pacers, retries and tpslimits - if you discover something
+ relevant, please post on the forum.
+
+ So, if rclone was working fine and you are suddenly unable to log in,
+ and you are sure the user and the password are correct, it is likely
+ that your remote has been blocked for a while.
+
+
+ ### Standard options
+
+ Here are the Standard options specific to mega (Mega).
+
+ #### --mega-user
+
+ User name.
+
+ Properties:
+
+ - Config: user
+ - Env Var: RCLONE_MEGA_USER
+ - Type: string
+ - Required: true
+
+ #### --mega-pass

-    No remotes found, make a new one?
-    n) New remote
-    s) Set configuration password
-    q) Quit config
-    n/s/q> n
-    name> remote
-    Type of storage to configure.
-    Choose a number from below, or type in your own value
-    [snip]
-    XX / Mega
-       \ "mega"
-    [snip]
-    Storage> mega
-    User name
-    user> you@example.com Password.
-    y) Yes type in my own password
-    g) Generate random password
-    n) No leave this optional password blank
-    y/g/n> y
-    Enter the password:
-    password:
-    Confirm the password:
-    password:
-    Remote config
-    --------------------
-    [remote]
-    type = mega
-    user = you@example.com
-    pass = *** ENCRYPTED ***
-    --------------------
-    y) Yes this is OK
-    e) Edit this remote
-    d) Delete this remote
-    y/e/d> y

-NOTE: The encryption keys need to have been already generated after a
-regular login via the browser, otherwise attempting to use the
-credentials in rclone will fail.

-Once configured you can then use rclone like this,

-List directories in top level of your Mega

-    rclone lsd remote:

-List all the files in your Mega

-    rclone ls remote:

-To copy a local directory to an Mega directory called backup

-    rclone copy /home/source remote:backup

-Modified time and hashes

-Mega does not support modification times or hashes yet.

+ Password.
+
+ **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+ Properties:
+
+ - Config: pass
+ - Env Var: RCLONE_MEGA_PASS
+ - Type: string
+ - Required: true
+
+ ### Advanced options
+
+ Here are the Advanced options specific to mega (Mega).
+
+ #### --mega-debug
+
+ Output more debug from Mega.
+
+ If this flag is set (along with -vv) it will print further debugging
+ information from the mega backend.
+
+ Properties:
+ - Config: debug + - Env Var: RCLONE_MEGA_DEBUG + - Type: bool + - Default: false -Restricted filename characters + #### --mega-hard-delete - Character Value Replacement - ----------- ------- ------------- - NUL 0x00 ␀ - / 0x2F / + Delete files permanently rather than putting them into the trash. -Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON -strings. + Normally the mega backend will put all deletions into the trash rather + than permanently deleting them. If you specify this then rclone will + permanently delete objects instead. -Duplicated files + Properties: -Mega can have two files with exactly the same name and path (unlike a -normal file system). + - Config: hard_delete + - Env Var: RCLONE_MEGA_HARD_DELETE + - Type: bool + - Default: false -Duplicated files cause problems with the syncing and you will see -messages in the log about duplicates. + #### --mega-use-https -Use rclone dedupe to fix duplicated files. + Use HTTPS for transfers. -Failure to log-in + MEGA uses plain text HTTP connections by default. + Some ISPs throttle HTTP connections, this causes transfers to become very slow. + Enabling this will force MEGA to use HTTPS for all transfers. + HTTPS is normally not necessary since all data is already encrypted anyway. + Enabling it will increase CPU usage and add network overhead. -Object not found + Properties: -If you are connecting to your Mega remote for the first time, to test -access and synchronization, you may receive an error such as + - Config: use_https + - Env Var: RCLONE_MEGA_USE_HTTPS + - Type: bool + - Default: false - Failed to create file system for "my-mega-remote:": - couldn't login: Object (typically, node or user) not found + #### --mega-encoding -The diagnostic steps often recommended in the rclone forum start with -the MEGAcmd utility. Note that this refers to the official C++ command -from https://github.com/meganz/MEGAcmd and not the go language built -command from t3rm1n4l/megacmd that is no longer maintained. + The encoding for the backend. -Follow the instructions for installing MEGAcmd and try accessing your -remote as they recommend. You can establish whether or not you can log -in using MEGAcmd, and obtain diagnostic information to help you, and -search or work with others in the forum. + See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - MEGA CMD> login me@example.com - Password: - Fetching nodes ... - Loading transfers from local cache - Login complete as me@example.com - me@example.com:/$ + Properties: -Note that some have found issues with passwords containing special -characters. If you can not log on with rclone, but MEGAcmd logs on just -fine, then consider changing your password temporarily to pure -alphanumeric characters, in case that helps. + - Config: encoding + - Env Var: RCLONE_MEGA_ENCODING + - Type: MultiEncoder + - Default: Slash,InvalidUtf8,Dot -Repeated commands blocks access -Mega remotes seem to get blocked (reject logins) under "heavy use". We -haven't worked out the exact blocking rules but it seems to be related -to fast paced, successive rclone commands. -For example, executing this command 90 times in a row -rclone link remote:file will cause the remote to become "blocked". This -is not an abnormal situation, for example if you wish to get the public -links of a directory with hundred of files... After more or less a week, -the remote will remote accept rclone logins normally again. 
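+
+ As with any backend option, these can also be set per-invocation with
+ flags or the matching environment variables shown above - an
+ illustrative sketch:
+
+     rclone delete --mega-hard-delete remote:backup/old
+     rclone copy --mega-use-https /home/source remote:backup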
+
+ ### Process `killed`
+
+ On accounts with many or very large files, memory usage can increase significantly when executing list/sync commands. When running on cloud providers (like AWS with EC2), check that the instance type has sufficient memory/CPU to execute the commands, and use the provider's resource monitoring tools to inspect usage while the commands run. See [this issue](https://forum.rclone.org/t/rclone-with-mega-appears-to-work-only-in-some-accounts/40233/4).

-You can mitigate this issue by mounting the remote it with rclone mount.
-This will log-in when mounting and a log-out when unmounting only. You
-can also run rclone rcd and then use rclone rc to run the commands over
-the API to avoid logging in each time.

-Rclone does not currently close mega sessions (you can see them in the
-web interface), however closing the sessions does not solve the issue.

-If you space rclone commands by 3 seconds it will avoid blocking the
-remote. We haven't identified the exact blocking rules, so perhaps one
-could execute the command 80 times without waiting and avoid blocking by
-waiting 3 seconds, then continuing...

-Note that this has been observed by trial and error and might not be set
-in stone.

-Other tools seem not to produce this blocking effect, as they use a
-different working approach (state-based, using sessionIDs instead of
-log-in) which isn't compatible with the current stateless rclone
-approach.

-Note that once blocked, the use of other tools (such as megacmd) is not
-a sure workaround: following megacmd login times have been observed in
-succession for blocked remote: 7 minutes, 20 min, 30min, 30 min, 30min.
-Web access looks unaffected though.

-Investigation is continuing in relation to workarounds based on
-timeouts, pacers, retrials and tpslimits - if you discover something
-relevant, please post on the forum.

-So, if rclone was working nicely and suddenly you are unable to log-in
-and you are sure the user and the password are correct, likely you have
-got the remote blocked for a while.

-Standard options

-Here are the Standard options specific to mega (Mega).

---mega-user

-User name.

+ ## Limitations
+
+ This backend uses the [go-mega go library](https://github.com/t3rm1n4l/go-mega) which is an open-source
+ Go library implementing the Mega API. There doesn't appear to be any
+ documentation for the mega protocol beyond the [mega C++ SDK](https://github.com/meganz/sdk) source code,
+ so there are likely quite a few errors still remaining in this library.
+
+ Mega allows duplicate files which may confuse rclone.
+
+ # Memory
+
+ The memory backend is an in RAM backend. It does not persist its
+ data - use the local backend for that.
+
+ The memory backend behaves like a bucket-based remote (e.g. like
+ s3). Because it has no parameters you can just use it with the
+ `:memory:` remote name.
+
+ ## Configuration
+
+ You can configure it as a remote like this with `rclone config` too if
+ you want to:
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / Memory
+   \ "memory"
+[snip]
+Storage> memory
+** See help for memory backend at: https://rclone.org/memory/ **
+
+Remote config
+--------------------
+[remote]
+type = memory
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
-Properties:

-- Config: user
-- Env Var: RCLONE_MEGA_USER
-- Type: string
-- Required: true

---mega-pass

-Password.

-NB Input to this must be obscured - see rclone obscure.

-Properties:

-- Config: pass
-- Env Var: RCLONE_MEGA_PASS
-- Type: string
-- Required: true

-Advanced options

-Here are the Advanced options specific to mega (Mega).

---mega-debug

-Output more debug from Mega.

-If this flag is set (along with -vv) it will print further debugging
-information from the mega backend.

-Properties:

-- Config: debug
-- Env Var: RCLONE_MEGA_DEBUG
-- Type: bool
-- Default: false

---mega-hard-delete

-Delete files permanently rather than putting them into the trash.

-Normally the mega backend will put all deletions into the trash rather
-than permanently deleting them. If you specify this then rclone will
-permanently delete objects instead.

-Properties:

-- Config: hard_delete
-- Env Var: RCLONE_MEGA_HARD_DELETE
-- Type: bool
-- Default: false

---mega-use-https

-Use HTTPS for transfers.

-MEGA uses plain text HTTP connections by default. Some ISPs throttle
-HTTP connections, this causes transfers to become very slow. Enabling
-this will force MEGA to use HTTPS for all transfers. HTTPS is normally
-not necessary since all data is already encrypted anyway. Enabling it
-will increase CPU usage and add network overhead.

-Properties:

-- Config: use_https
-- Env Var: RCLONE_MEGA_USE_HTTPS
-- Type: bool
-- Default: false

---mega-encoding

-The encoding for the backend.

-See the encoding section in the overview for more info.

-Properties:

-- Config: encoding
-- Env Var: RCLONE_MEGA_ENCODING
-- Type: MultiEncoder
-- Default: Slash,InvalidUtf8,Dot

+ Because the memory backend isn't persistent it is most useful for
+ testing or with an rclone server or rclone mount, e.g.
+
+     rclone mount :memory: /mnt/tmp
+     rclone serve webdav :memory:
+     rclone serve sftp :memory:
+
+ ### Modified time and hashes
+
+ The memory backend supports MD5 hashes and modification times accurate to 1 nS.
+
+ ### Restricted filename characters
+
+ The memory backend replaces the [default restricted characters
+ set](https://rclone.org/overview/#restricted-characters).
+
+
+ # Akamai NetStorage
+
+ Paths are specified as `remote:`
+ You may put subdirectories in too, e.g. `remote:/path/to/dir`.
+ If you have a CP code you can use that as the folder after the domain such as `<domain>/<cpcode>/<content>/`.
+
+ For example, this is commonly configured with or without a CP code:
+ * **With a CP code**. `[your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/`
+ * **Without a CP code**. `[your-domain-prefix]-nsu.akamaihd.net`
+
+ See all buckets
+
+     rclone lsd remote:
+
+ The initial setup for Netstorage involves getting an account and secret. Use `rclone config` to walk you through the setup process.
+
+ ## Configuration
+
+ Here's an example of how to make a remote called `ns1`.
+
+ 1. To begin the interactive configuration process, enter this command:
+
+```
+rclone config
+```
+
+ 2. Type `n` to create a new remote.
+
+```
+n) New remote
+d) Delete remote
+q) Quit config
+e/n/d/q> n
+```
+
+ 3. For this example, enter `ns1` when you reach the name> prompt.
+
+```
+name> ns1
+```
+
+ 4. Enter `netstorage` as the type of storage to configure.
+
+```
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+XX / NetStorage
+   \ "netstorage"
+Storage> netstorage
+```
+
+ 5. Select between the HTTP or HTTPS protocol. Most users should choose HTTPS, which is the default. HTTP is provided primarily for debugging purposes.
+
+```
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+ 1 / HTTP protocol
+   \ "http"
+ 2 / HTTPS protocol
+   \ "https"
+protocol> 1
+```
+
+ 6. Specify your NetStorage host, CP code, and any necessary content paths using this format: `<domain>/<cpcode>/<content>/`
+
+```
+Enter a string value. Press Enter for the default ("").
+host> baseball-nsu.akamaihd.net/123456/content/
+```
+
+ 7. Set the netstorage account name
+
+```
+Enter a string value. Press Enter for the default ("").
+account> username
+```
+
+ 8. Set the Netstorage account secret/G2O key which will be used for authentication purposes. Select the `y` option to set your own password then enter your secret.
+ Note: The secret is stored in the `rclone.conf` file with hex-encoded encryption.
+
+```
+y) Yes type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+```
+
+ 9. View the summary and confirm your remote configuration.
+
+```
+[ns1]
+type = netstorage
+protocol = http
+host = baseball-nsu.akamaihd.net/123456/content/
+account = username
+secret = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```

-Limitations

-This backend uses the go-mega go library which is an opensource go
-library implementing the Mega API. There doesn't appear to be any
-documentation for the mega protocol beyond the mega C++ SDK source code
-so there are likely quite a few errors still remaining in this library.

-Mega allows duplicate files which may confuse rclone.

-Memory

-The memory backend is an in RAM backend. It does not persist its data -
-use the local backend for that.

-The memory backend behaves like a bucket-based remote (e.g. like s3).
-Because it has no parameters you can just use it with the :memory:
-remote name.

-Configuration

-You can configure it as a remote like this with rclone config too if you
-want to:

-    No remotes found, make a new one?
-    n) New remote
-    s) Set configuration password
-    q) Quit config
-    n/s/q> n
-    name> remote
-    Type of storage to configure.
-    Enter a string value. Press Enter for the default ("").
-    Choose a number from below, or type in your own value
-    [snip]
-    XX / Memory
-    \ "memory"
-    [snip]
-    Storage> memory
-    ** See help for memory backend at: https://rclone.org/memory/ **

-    Remote config
-    --------------------
-    [remote]
-    type = memory
-    --------------------
-    y) Yes this is OK (default)
-    e) Edit this remote
-    d) Delete this remote
-    y/e/d> y

-Because the memory backend isn't persistent it is most useful for
-testing or with an rclone server or rclone mount, e.g.

-    rclone mount :memory: /mnt/tmp
-    rclone serve webdav :memory:
-    rclone serve sftp :memory:

-Modified time and hashes

-The memory backend supports MD5 hashes and modification times accurate
-to 1 nS.

-Restricted filename characters

-The memory backend replaces the default restricted characters set.

-Akamai NetStorage

-Paths are specified as remote: You may put subdirectories in too, e.g.
-remote:/path/to/dir. If you have a CP code you can use that as the
-folder after the domain such as //.

-For example, this is commonly configured with or without a CP code: *
-With a CP code.
-[your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/ * Without a
-CP code. [your-domain-prefix]-nsu.akamaihd.net

-See all buckets rclone lsd remote: The initial setup for Netstorage
-involves getting an account and secret. Use rclone config to walk you
-through the setup process.

-Configuration

-Here's an example of how to make a remote called ns1.

-1. To begin the interactive configuration process, enter this command:

-    rclone config

-2. Type n to create a new remote.

-    n) New remote
-    d) Delete remote
-    q) Quit config
-    e/n/d/q> n

-3. For this example, enter ns1 when you reach the name> prompt.

-    name> ns1

-4. Enter netstorage as the type of storage to configure.

-    Type of storage to configure.
-    Enter a string value. Press Enter for the default ("").
-    Choose a number from below, or type in your own value
-    XX / NetStorage
-    \ "netstorage"
-    Storage> netstorage

-5. Select between the HTTP or HTTPS protocol. Most users should choose
-   HTTPS, which is the default. HTTP is provided primarily for
-   debugging purposes.

-    Enter a string value. Press Enter for the default (""). 
-    Choose a number from below, or type in your own value
-    1 / HTTP protocol
-    \ "http"
-    2 / HTTPS protocol
-    \ "https"
-    protocol> 1

-6. Specify your NetStorage host, CP code, and any necessary content
-   paths using this format: ///

-    Enter a string value. Press Enter for the default ("").

+ This remote is called `ns1` and can now be used.
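+
+ If you are scripting the setup, the same remote can be created
+ non-interactively with `rclone config create` - a sketch with
+ placeholder credentials (`--obscure` makes rclone obscure the secret
+ before it is written to the config file):
+
+     rclone config create ns1 netstorage \
+         protocol=https \
+         host=baseball-nsu.akamaihd.net/123456/content/ \
+         account=username \
+         secret=YOUR_G2O_SECRET --obscure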
+
+ ## Example operations
+
+ Get started with rclone and NetStorage with these examples. For additional rclone commands, visit https://rclone.org/commands/.
+
+ ### See contents of a directory in your project
+
+     rclone lsd ns1:/974012/testing/
+
+ ### Sync the contents local with remote
+
+     rclone sync . ns1:/974012/testing/
+
+ ### Upload local content to remote
+
+     rclone copy notes.txt ns1:/974012/testing/
+
+ ### Delete content on remote
+
+     rclone delete ns1:/974012/testing/notes.txt
+
+ ### Move or copy content between CP codes
+
+ Your credentials must have access to two CP codes on the same remote. You can't perform operations between different remotes.
+
+     rclone move ns1:/974012/testing/notes.txt ns1:/974450/testing2/
+
+ ## Features
+
+ ### Symlink Support
+
+ The Netstorage backend changes the rclone `--links, -l` behavior. When uploading, instead of creating the .rclonelink file, use the "symlink" API in order to create the corresponding symlink on the remote. The .rclonelink file will not be created, the upload will be intercepted and only the symlink file that matches the source file name with no suffix will be created on the remote.
+
+ This will effectively allow commands like copy/copyto, move/moveto and sync to upload from local to remote and download from remote to local directories with symlinks. Due to internal rclone limitations, it is not possible to upload an individual symlink file to any remote backend. You can always use the "backend symlink" command to create a symlink on the NetStorage server, refer to the "symlink" section below.
+
+ Individual symlink files on the remote can be used with commands like "cat" to print the destination name, "delete" to delete the symlink, or copy, copyto and move/moveto to download from the remote to local. Note: individual symlink files on the remote should be specified including the suffix .rclonelink.
+
+ **Note**: No file with the suffix .rclonelink should ever exist on the server since it is not possible to actually upload/create a file with .rclonelink suffix with rclone, it can only exist if it is manually created through a non-rclone method on the remote.
+
+ ### Implicit vs. Explicit Directories
+
+ With NetStorage, directories can exist in one of two forms:
+
+ 1. **Explicit Directory**. This is an actual, physical directory that you have created in a storage group.
+ 2. **Implicit Directory**. This refers to a directory within a path that has not been physically created. For example, during upload of a file, nonexistent subdirectories can be specified in the target path. NetStorage creates these as "implicit." While the directories aren't physically created, they exist implicitly and the noted path is connected with the uploaded file.
+
+ Rclone will intercept all file uploads and mkdir commands for the NetStorage remote and will explicitly issue the mkdir command for each directory in the uploading path. This will help with the interoperability with the other Akamai services such as SFTP and the Content Management Shell (CMShell). Rclone will not guarantee correctness of operations with implicit directories which might have been created as a result of using an upload API directly.
+
+ ### `--fast-list` / ListR support
+
+ NetStorage remote supports the ListR feature by using the "list" NetStorage API action to return a lexicographical list of all objects within the specified CP code, recursing into subdirectories as they're encountered.
+
+ * **Rclone will use the ListR method for some commands by default**. Commands such as `lsf -R` will use ListR by default. To disable this, include the `--disable listR` option to use the non-recursive method of listing objects.
+
+ * **Rclone will not use the ListR method for some commands**. Commands such as `sync` don't use ListR by default. To force using the ListR method, include the `--fast-list` option.
+
+ There are pros and cons of using the ListR method, refer to [rclone documentation](https://rclone.org/docs/#fast-list). In general, the sync command over an existing deep tree on the remote will run faster with the "--fast-list" flag but with extra memory usage as a side effect. It might also result in higher CPU utilization but the whole task can be completed faster (see the example below).
+
+ **Note**: There is a known limitation that "lsf -R" will display number of files in the directory and directory size as -1 when ListR method is used. The workaround is to pass "--disable listR" flag if these numbers are important in the output.
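+
+ For example, with the paths used above (illustrative):
+
+     # force the recursive "list" API action for a sync
+     rclone sync --fast-list . ns1:/974012/testing/
+
+     # disable ListR so that "lsf -R" reports real file counts and sizes
+     rclone lsf -R --disable listR ns1:/974012/testing/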
- host> baseball-nsu.akamaihd.net/123456/content/ + ### Implicit vs. Explicit Directories -7. Set the netstorage account name + With NetStorage, directories can exist in one of two forms: - Enter a string value. Press Enter for the default (""). - account> username + 1. **Explicit Directory**. This is an actual, physical directory that you have created in a storage group. + 2. **Implicit Directory**. This refers to a directory within a path that has not been physically created. For example, during upload of a file, nonexistent subdirectories can be specified in the target path. NetStorage creates these as "implicit." While the directories aren't physically created, they exist implicitly and the noted path is connected with the uploaded file. -8. Set the Netstorage account secret/G2O key which will be used for - authentication purposes. Select the y option to set your own - password then enter your secret. Note: The secret is stored in the - rclone.conf file with hex-encoded encryption. + Rclone will intercept all file uploads and mkdir commands for the NetStorage remote and will explicitly issue the mkdir command for each directory in the uploading path. This will help with the interoperability with the other Akamai services such as SFTP and the Content Management Shell (CMShell). Rclone will not guarantee correctness of operations with implicit directories which might have been created as a result of using an upload API directly. - y) Yes type in my own password - g) Generate random password - y/g> y - Enter the password: - password: - Confirm the password: - password: + ### `--fast-list` / ListR support -9. View the summary and confirm your remote configuration. + NetStorage remote supports the ListR feature by using the "list" NetStorage API action to return a lexicographical list of all objects within the specified CP code, recursing into subdirectories as they're encountered. - [ns1] - type = netstorage - protocol = http - host = baseball-nsu.akamaihd.net/123456/content/ - account = username - secret = *** ENCRYPTED *** - -------------------- - y) Yes this is OK (default) - e) Edit this remote - d) Delete this remote - y/e/d> y + * **Rclone will use the ListR method for some commands by default**. Commands such as `lsf -R` will use ListR by default. To disable this, include the `--disable listR` option to use the non-recursive method of listing objects. -This remote is called ns1 and can now be used. + * **Rclone will not use the ListR method for some commands**. Commands such as `sync` don't use ListR by default. To force using the ListR method, include the `--fast-list` option. -Example operations + There are pros and cons of using the ListR method, refer to [rclone documentation](https://rclone.org/docs/#fast-list). In general, the sync command over an existing deep tree on the remote will run faster with the "--fast-list" flag but with extra memory usage as a side effect. It might also result in higher CPU utilization but the whole task can be completed faster. -Get started with rclone and NetStorage with these examples. For -additional rclone commands, visit https://rclone.org/commands/. + **Note**: There is a known limitation that "lsf -R" will display number of files in the directory and directory size as -1 when ListR method is used. The workaround is to pass "--disable listR" flag if these numbers are important in the output. 
-See contents of a directory in your project + ### Purge - rclone lsd ns1:/974012/testing/ + NetStorage remote supports the purge feature by using the "quick-delete" NetStorage API action. The quick-delete action is disabled by default for security reasons and can be enabled for the account through the Akamai portal. Rclone will first try to use quick-delete action for the purge command and if this functionality is disabled then will fall back to a standard delete method. -Sync the contents local with remote + **Note**: Read the [NetStorage Usage API](https://learn.akamai.com/en-us/webhelp/netstorage/netstorage-http-api-developer-guide/GUID-15836617-9F50-405A-833C-EA2556756A30.html) for considerations when using "quick-delete". In general, using quick-delete method will not delete the tree immediately and objects targeted for quick-delete may still be accessible. - rclone sync . ns1:/974012/testing/ -Upload local content to remote + ### Standard options - rclone copy notes.txt ns1:/974012/testing/ + Here are the Standard options specific to netstorage (Akamai NetStorage). -Delete content on remote + #### --netstorage-host - rclone delete ns1:/974012/testing/notes.txt + Domain+path of NetStorage host to connect to. -Move or copy content between CP codes. + Format should be `/` -Your credentials must have access to two CP codes on the same remote. -You can't perform operations between different remotes. + Properties: - rclone move ns1:/974012/testing/notes.txt ns1:/974450/testing2/ + - Config: host + - Env Var: RCLONE_NETSTORAGE_HOST + - Type: string + - Required: true -Features + #### --netstorage-account -Symlink Support + Set the NetStorage account name -The Netstorage backend changes the rclone --links, -l behavior. When -uploading, instead of creating the .rclonelink file, use the "symlink" -API in order to create the corresponding symlink on the remote. The -.rclonelink file will not be created, the upload will be intercepted and -only the symlink file that matches the source file name with no suffix -will be created on the remote. + Properties: -This will effectively allow commands like copy/copyto, move/moveto and -sync to upload from local to remote and download from remote to local -directories with symlinks. Due to internal rclone limitations, it is not -possible to upload an individual symlink file to any remote backend. You -can always use the "backend symlink" command to create a symlink on the -NetStorage server, refer to "symlink" section below. + - Config: account + - Env Var: RCLONE_NETSTORAGE_ACCOUNT + - Type: string + - Required: true -Individual symlink files on the remote can be used with the commands -like "cat" to print the destination name, or "delete" to delete symlink, -or copy, copy/to and move/moveto to download from the remote to local. -Note: individual symlink files on the remote should be specified -including the suffix .rclonelink. + #### --netstorage-secret -Note: No file with the suffix .rclonelink should ever exist on the -server since it is not possible to actually upload/create a file with -.rclonelink suffix with rclone, it can only exist if it is manually -created through a non-rclone method on the remote. + Set the NetStorage account secret/G2O key for authentication. -Implicit vs. Explicit Directories + Please choose the 'y' option to set your own password then enter your secret. -With NetStorage, directories can exist in one of two forms: + **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). 
-1. Explicit Directory. This is an actual, physical directory that you - have created in a storage group. -2. Implicit Directory. This refers to a directory within a path that - has not been physically created. For example, during upload of a - file, nonexistent subdirectories can be specified in the target - path. NetStorage creates these as "implicit." While the directories - aren't physically created, they exist implicitly and the noted path - is connected with the uploaded file. + Properties: -Rclone will intercept all file uploads and mkdir commands for the -NetStorage remote and will explicitly issue the mkdir command for each -directory in the uploading path. This will help with the -interoperability with the other Akamai services such as SFTP and the -Content Management Shell (CMShell). Rclone will not guarantee -correctness of operations with implicit directories which might have -been created as a result of using an upload API directly. + - Config: secret + - Env Var: RCLONE_NETSTORAGE_SECRET + - Type: string + - Required: true ---fast-list / ListR support + ### Advanced options -NetStorage remote supports the ListR feature by using the "list" -NetStorage API action to return a lexicographical list of all objects -within the specified CP code, recursing into subdirectories as they're -encountered. + Here are the Advanced options specific to netstorage (Akamai NetStorage). -- Rclone will use the ListR method for some commands by default. - Commands such as lsf -R will use ListR by default. To disable this, - include the --disable listR option to use the non-recursive method - of listing objects. + #### --netstorage-protocol -- Rclone will not use the ListR method for some commands. Commands - such as sync don't use ListR by default. To force using the ListR - method, include the --fast-list option. + Select between HTTP or HTTPS protocol. -There are pros and cons of using the ListR method, refer to rclone -documentation. In general, the sync command over an existing deep tree -on the remote will run faster with the "--fast-list" flag but with extra -memory usage as a side effect. It might also result in higher CPU -utilization but the whole task can be completed faster. + Most users should choose HTTPS, which is the default. + HTTP is provided primarily for debugging purposes. -Note: There is a known limitation that "lsf -R" will display number of -files in the directory and directory size as -1 when ListR method is -used. The workaround is to pass "--disable listR" flag if these numbers -are important in the output. + Properties: -Purge + - Config: protocol + - Env Var: RCLONE_NETSTORAGE_PROTOCOL + - Type: string + - Default: "https" + - Examples: + - "http" + - HTTP protocol + - "https" + - HTTPS protocol -NetStorage remote supports the purge feature by using the "quick-delete" -NetStorage API action. The quick-delete action is disabled by default -for security reasons and can be enabled for the account through the -Akamai portal. Rclone will first try to use quick-delete action for the -purge command and if this functionality is disabled then will fall back -to a standard delete method. + ## Backend commands -Note: Read the NetStorage Usage API for considerations when using -"quick-delete". In general, using quick-delete method will not delete -the tree immediately and objects targeted for quick-delete may still be -accessible. + Here are the commands specific to the netstorage backend. 
-Standard options
+ Run them with

-Here are the Standard options specific to netstorage (Akamai
-NetStorage).
+     rclone backend COMMAND remote:

---netstorage-host
+ The help below will explain what arguments each command takes.

-Domain+path of NetStorage host to connect to.
+ See the [backend](https://rclone.org/commands/rclone_backend/) command for more
+ info on how to pass options and arguments.

-Format should be <domain>/<internal folders>
+ These can be run on a running backend using the rc command
+ [backend/command](https://rclone.org/rc/#backend-command).

-Properties:
+ ### du

-- Config: host
-- Env Var: RCLONE_NETSTORAGE_HOST
-- Type: string
-- Required: true
+ Return disk usage information for a specified directory

---netstorage-account
+     rclone backend du remote: [options] [<arguments>+]

-Set the NetStorage account name
+ The usage information returned includes the targeted directory as well as all
+ files stored in any sub-directories that may exist.

-Properties:
+ ### symlink

-- Config: account
-- Env Var: RCLONE_NETSTORAGE_ACCOUNT
-- Type: string
-- Required: true
+ You can create a symbolic link in ObjectStore with the symlink action.

---netstorage-secret
+     rclone backend symlink remote: [options] [<arguments>+]

-Set the NetStorage account secret/G2O key for authentication.
+ Provide the desired path location (including applicable sub-directories) ending in
+ the object that will be the target of the symlink (for example, /links/mylink).
+ Include the file extension for the object, if applicable.
+ `rclone backend symlink <src> <path>`

-Please choose the 'y' option to set your own password then enter your
-secret.

-NB Input to this must be obscured - see rclone obscure.

-Properties:
+ # Microsoft Azure Blob Storage

-- Config: secret
-- Env Var: RCLONE_NETSTORAGE_SECRET
-- Type: string
-- Required: true
+ Paths are specified as `remote:container` (or `remote:` for the `lsd`
+ command.) You may put subdirectories in too, e.g.
+ `remote:container/path/to/dir`.

-Advanced options
+ ## Configuration

-Here are the Advanced options specific to netstorage (Akamai
-NetStorage).
+ Here is an example of making a Microsoft Azure Blob Storage
+ configuration for a remote called `remote`. First run:

---netstorage-protocol
+     rclone config

-Select between HTTP or HTTPS protocol.
+ This will guide you through an interactive setup process:

-Most users should choose HTTPS, which is the default. HTTP is provided
-primarily for debugging purposes.
+     No remotes found, make a new one?
+     n) New remote
+     s) Set configuration password
+     q) Quit config
+     n/s/q> n
+     name> remote
+     Type of storage to configure.
+     Choose a number from below, or type in your own value
+     [snip]
+     XX / Microsoft Azure Blob Storage
+        \ "azureblob"
+     [snip]
+     Storage> azureblob
+     Storage Account Name
+     account> account_name
+     Storage Account Key
+     key> base64encodedkey==
+     Endpoint for the service - leave blank normally.
+     endpoint>
+     Remote config
+     --------------------
+     [remote]
+     account = account_name
+     key = base64encodedkey==
+     endpoint =
+     --------------------
+     y) Yes this is OK
+     e) Edit this remote
+     d) Delete this remote
+     y/e/d> y

-Properties:

-- Config: protocol
-- Env Var: RCLONE_NETSTORAGE_PROTOCOL
-- Type: string
-- Default: "https"
-- Examples:
- - "http"
- - HTTP protocol
- - "https"
- - HTTPS protocol
+ See all containers

-Backend commands
+     rclone lsd remote:

-Here are the commands specific to the netstorage backend.
+ Make a new container

-Run them with
+     rclone mkdir remote:container

- rclone backend COMMAND remote:
+ List the contents of a container

-The help below will explain what arguments each command takes.
+     rclone ls remote:container

-See the backend command for more info on how to pass options and
-arguments.
+ Sync `/home/local/directory` to the remote container, deleting any excess
+ files in the container.

-These can be run on a running backend using the rc command
-backend/command.
+     rclone sync --interactive /home/local/directory remote:container

-du
+ ### --fast-list

-Return disk usage information for a specified directory
+ This remote supports `--fast-list` which allows you to use fewer
+ transactions in exchange for more memory. See the [rclone
+ docs](https://rclone.org/docs/#fast-list) for more details.

- rclone backend du remote: [options] [<arguments>+]
+ ### Modified time

-The usage information returned, includes the targeted directory as well
-as all files stored in any sub-directories that may exist.
+ The modified time is stored as metadata on the object with the `mtime`
+ key. It is stored using RFC3339 format time with nanosecond
+ precision. The metadata is supplied during directory listings so
+ there is no performance overhead to using it.

-symlink
+ If you wish to use the Azure standard `LastModified` time stored on
+ the object as the modified time, then use the `--use-server-modtime`
+ flag. Note that rclone can't set `LastModified`, so using the
+ `--update` flag when syncing is recommended if using
+ `--use-server-modtime`.

-You can create a symbolic link in ObjectStore with the symlink action.
+ ### Performance

- rclone backend symlink remote: [options] [<arguments>+]
+ When uploading large files, increasing the value of
+ `--azureblob-upload-concurrency` will increase performance at the cost
+ of using more memory. The default of 16 is set quite conservatively to
+ use less memory. It may be necessary to raise it to 64 or higher to
+ fully utilize a 1 GBit/s link with a single file transfer.

-The desired path location (including applicable sub-directories) ending
-in the object that will be the target of the symlink (for example,
-/links/mylink). Include the file extension for the object, if
-applicable. rclone backend symlink <src> <path>
+ ### Restricted filename characters

-Microsoft Azure Blob Storage
+ In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
+ the following characters are also replaced:

-Paths are specified as remote:container (or remote: for the lsd
-command.) You may put subdirectories in too, e.g.
-remote:container/path/to/dir.
+ | Character | Value | Replacement |
+ | --------- |:-----:|:-----------:|
+ | / | 0x2F | ／ |
+ | \ | 0x5C | ＼ |

-Configuration
+ File names can also not end with the following characters.
+ These only get replaced if they are the last character in the name:

-Here is an example of making a Microsoft Azure Blob Storage
-configuration. For a remote called remote. First run:
+ | Character | Value | Replacement |
+ | --------- |:-----:|:-----------:|
+ | . | 0x2E | ． |

- rclone config
+ Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+ as they can't be used in JSON strings.

-This will guide you through an interactive setup process:
+ ### Hashes

- No remotes found, make a new one?
- n) New remote
- s) Set configuration password
- q) Quit config
- n/s/q> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- [snip]
- XX / Microsoft Azure Blob Storage
- \ "azureblob"
- [snip]
- Storage> azureblob
- Storage Account Name
- account> account_name
- Storage Account Key
- key> base64encodedkey==
- Endpoint for the service - leave blank normally.
- endpoint>
- Remote config
- --------------------
- [remote]
- account = account_name
- key = base64encodedkey==
- endpoint =
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
+ MD5 hashes are stored with blobs. However blobs that were uploaded in
+ chunks only have an MD5 if the source remote was capable of MD5
+ hashes, e.g. the local disk.

-See all containers
+ ### Authentication {#authentication}

- rclone lsd remote:
+ There are a number of ways of supplying credentials for Azure Blob
+ Storage. Rclone tries them in the order of the sections below.

-Make a new container
+ #### Env Auth

- rclone mkdir remote:container
+ If the `env_auth` config parameter is `true` then rclone will pull
+ credentials from the environment or runtime.

-List the contents of a container
+ It tries these authentication methods in this order:

- rclone ls remote:container
+ 1. Environment Variables
+ 2. Managed Service Identity Credentials
+ 3. Azure CLI credentials (as used by the az tool)

-Sync /home/local/directory to the remote container, deleting any excess
-files in the container.
+ These are described in the following sections.

- rclone sync --interactive /home/local/directory remote:container
+ ##### Env Auth: 1. Environment Variables

---fast-list
+ If `env_auth` is set and environment variables are present, rclone
+ authenticates a service principal with a secret or certificate, or a
+ user with a password, depending on which environment variables are set.
+ It reads configuration from these variables, in the following order:

-This remote supports --fast-list which allows you to use fewer
-transactions in exchange for more memory. See the rclone docs for more
-details.
+ 1. Service principal with client secret
+ - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its "directory" ID.
+ - `AZURE_CLIENT_ID`: the service principal's client ID
+ - `AZURE_CLIENT_SECRET`: one of the service principal's client secrets
+ 2. Service principal with certificate
+ - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its "directory" ID.
+ - `AZURE_CLIENT_ID`: the service principal's client ID
+ - `AZURE_CLIENT_CERTIFICATE_PATH`: path to a PEM or PKCS12 certificate file including the private key.
+ - `AZURE_CLIENT_CERTIFICATE_PASSWORD`: (optional) password for the certificate file.
+ - `AZURE_CLIENT_SEND_CERTIFICATE_CHAIN`: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to "true" or "1", authentication requests include the x5c header.
+ 3. User with username and password
+ - `AZURE_TENANT_ID`: (optional) tenant to authenticate in. Defaults to "organizations".
+ - `AZURE_CLIENT_ID`: client ID of the application the user will authenticate to
+ - `AZURE_USERNAME`: a username (usually an email address)
+ - `AZURE_PASSWORD`: the user's password
+ 4. Workload Identity
+ - `AZURE_TENANT_ID`: Tenant to authenticate in.
+ - `AZURE_CLIENT_ID`: Client ID of the application the user will authenticate to.
+ - `AZURE_FEDERATED_TOKEN_FILE`: Path to projected service account token file.
+ - `AZURE_AUTHORITY_HOST`: Authority of an Azure Active Directory endpoint (default: login.microsoftonline.com). -Modified time -The modified time is stored as metadata on the object with the mtime -key. It is stored using RFC3339 Format time with nanosecond precision. -The metadata is supplied during directory listings so there is no -performance overhead to using it. + ##### Env Auth: 2. Managed Service Identity Credentials -If you wish to use the Azure standard LastModified time stored on the -object as the modified time, then use the --use-server-modtime flag. -Note that rclone can't set LastModified, so using the --update flag when -syncing is recommended if using --use-server-modtime. + When using Managed Service Identity if the VM(SS) on which this + program is running has a system-assigned identity, it will be used by + default. If the resource has no system-assigned but exactly one + user-assigned identity, the user-assigned identity will be used by + default. -Performance - -When uploading large files, increasing the value of ---azureblob-upload-concurrency will increase performance at the cost of -using more memory. The default of 16 is set quite conservatively to use -less memory. It maybe be necessary raise it to 64 or higher to fully -utilize a 1 GBit/s link with a single file transfer. + If the resource has multiple user-assigned identities you will need to + unset `env_auth` and set `use_msi` instead. See the [`use_msi` + section](#use_msi). -Restricted filename characters + ##### Env Auth: 3. Azure CLI credentials (as used by the az tool) -In addition to the default restricted characters set the following -characters are also replaced: + Credentials created with the `az` tool can be picked up using `env_auth`. - Character Value Replacement - ----------- ------- ------------- - / 0x2F / - \ 0x5C \ + For example if you were to login with a service principal like this: -File names can also not end with the following characters. These only -get replaced if they are the last character in the name: + az login --service-principal -u XXX -p XXX --tenant XXX - Character Value Replacement - ----------- ------- ------------- - . 0x2E . + Then you could access rclone resources like this: -Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON -strings. + rclone lsf :azureblob,env_auth,account=ACCOUNT:CONTAINER -Hashes + Or -MD5 hashes are stored with blobs. However blobs that were uploaded in -chunks only have an MD5 if the source remote was capable of MD5 hashes, -e.g. the local disk. + rclone lsf --azureblob-env-auth --azureblob-account=ACCOUNT :azureblob:CONTAINER -Authentication + Which is analogous to using the `az` tool: -There are a number of ways of supplying credentials for Azure Blob -Storage. Rclone tries them in the order of the sections below. + az storage blob list --container-name CONTAINER --account-name ACCOUNT --auth-mode login -Env Auth + #### Account and Shared Key -If the env_auth config parameter is true then rclone will pull -credentials from the environment or runtime. + This is the most straight forward and least flexible way. Just fill + in the `account` and `key` lines and leave the rest blank. -It tries these authentication methods in this order: + #### SAS URL -1. Environment Variables -2. Managed Service Identity Credentials -3. Azure CLI credentials (as used by the az tool) + This can be an account level SAS URL or container level SAS URL. 
-These are described in the following sections + To use it leave `account` and `key` blank and fill in `sas_url`. -Env Auth: 1. Environment Variables + An account level SAS URL or container level SAS URL can be obtained + from the Azure portal or the Azure Storage Explorer. To get a + container level SAS URL right click on a container in the Azure Blob + explorer in the Azure portal. -If env_auth is set and environment variables are present rclone -authenticates a service principal with a secret or certificate, or a -user with a password, depending on which environment variable are set. -It reads configuration from these variables, in the following order: + If you use a container level SAS URL, rclone operations are permitted + only on a particular container, e.g. -1. Service principal with client secret - - AZURE_TENANT_ID: ID of the service principal's tenant. Also - called its "directory" ID. - - AZURE_CLIENT_ID: the service principal's client ID - - AZURE_CLIENT_SECRET: one of the service principal's client - secrets -2. Service principal with certificate - - AZURE_TENANT_ID: ID of the service principal's tenant. Also - called its "directory" ID. - - AZURE_CLIENT_ID: the service principal's client ID - - AZURE_CLIENT_CERTIFICATE_PATH: path to a PEM or PKCS12 - certificate file including the private key. - - AZURE_CLIENT_CERTIFICATE_PASSWORD: (optional) password for the - certificate file. - - AZURE_CLIENT_SEND_CERTIFICATE_CHAIN: (optional) Specifies - whether an authentication request will include an x5c header to - support subject name / issuer based authentication. When set to - "true" or "1", authentication requests include the x5c header. -3. User with username and password - - AZURE_TENANT_ID: (optional) tenant to authenticate in. Defaults - to "organizations". - - AZURE_CLIENT_ID: client ID of the application the user will - authenticate to - - AZURE_USERNAME: a username (usually an email address) - - AZURE_PASSWORD: the user's password -4. Workload Identity - - AZURE_TENANT_ID: Tenant to authenticate in. - - AZURE_CLIENT_ID: Client ID of the application the user will - authenticate to. - - AZURE_FEDERATED_TOKEN_FILE: Path to projected service account - token file. - - AZURE_AUTHORITY_HOST: Authority of an Azure Active Directory - endpoint (default: login.microsoftonline.com). + rclone ls azureblob:container -Env Auth: 2. Managed Service Identity Credentials + You can also list the single container from the root. This will only + show the container specified by the SAS URL. -When using Managed Service Identity if the VM(SS) on which this program -is running has a system-assigned identity, it will be used by default. -If the resource has no system-assigned but exactly one user-assigned -identity, the user-assigned identity will be used by default. + $ rclone lsd azureblob: + container/ -If the resource has multiple user-assigned identities you will need to -unset env_auth and set use_msi instead. See the use_msi section. + Note that you can't see or access any other containers - this will + fail -Env Auth: 3. Azure CLI credentials (as used by the az tool) + rclone ls azureblob:othercontainer -Credentials created with the az tool can be picked up using env_auth. + Container level SAS URLs are useful for temporarily allowing third + parties access to a single container or putting credentials into an + untrusted environment such as a CI build server. 
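+
+ For example, a minimal config section using a container level SAS URL might look like this (a sketch - the URL is a placeholder obtained from the Azure portal):
+
+     [azsas]
+     type = azureblob
+     sas_url = https://ACCOUNT.blob.core.windows.net/CONTAINER?SAS_TOKEN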
-For example if you were to login with a service principal like this:
+ #### Service principal with client secret

- az login --service-principal -u XXX -p XXX --tenant XXX
+ If these variables are set, rclone will authenticate with a service principal with a client secret.

-Then you could access rclone resources like this:
+ - `tenant`: ID of the service principal's tenant. Also called its "directory" ID.
+ - `client_id`: the service principal's client ID
+ - `client_secret`: one of the service principal's client secrets

- rclone lsf :azureblob,env_auth,account=ACCOUNT:CONTAINER
+ The credentials can also be placed in a file using the
+ `service_principal_file` configuration option.

-Or
+ #### Service principal with certificate

- rclone lsf --azureblob-env-auth --azureblob-account=ACCOUNT :azureblob:CONTAINER
+ If these variables are set, rclone will authenticate with a service principal with a certificate.

-Which is analogous to using the az tool:
+ - `tenant`: ID of the service principal's tenant. Also called its "directory" ID.
+ - `client_id`: the service principal's client ID
+ - `client_certificate_path`: path to a PEM or PKCS12 certificate file including the private key.
+ - `client_certificate_password`: (optional) password for the certificate file.
+ - `client_send_certificate_chain`: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to "true" or "1", authentication requests include the x5c header.

- az storage blob list --container-name CONTAINER --account-name ACCOUNT --auth-mode login
+ **NB** `client_certificate_password` must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).

-Account and Shared Key
+ #### User with username and password

-This is the most straight forward and least flexible way. Just fill in
-the account and key lines and leave the rest blank.
+ If these variables are set, rclone will authenticate with username and password.

-SAS URL
+ - `tenant`: (optional) tenant to authenticate in. Defaults to "organizations".
+ - `client_id`: client ID of the application the user will authenticate to
+ - `username`: a username (usually an email address)
+ - `password`: the user's password

-This can be an account level SAS URL or container level SAS URL.
+ Microsoft doesn't recommend this kind of authentication, because it's
+ less secure than other authentication flows. This method is not
+ interactive, so it isn't compatible with any form of multi-factor
+ authentication, and the application must already have user or admin
+ consent. This credential can only authenticate work and school
+ accounts; it can't authenticate Microsoft accounts.

-To use it leave account and key blank and fill in sas_url.
+ **NB** `password` must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).

-An account level SAS URL or container level SAS URL can be obtained from
-the Azure portal or the Azure Storage Explorer. To get a container level
-SAS URL right click on a container in the Azure Blob explorer in the
-Azure portal.
+ #### Managed Service Identity Credentials {#use_msi}

-If you use a container level SAS URL, rclone operations are permitted
-only on a particular container, e.g.
+ If `use_msi` is set then managed service identity credentials are
+ used. This authentication only works when running in an Azure service.
+ `env_auth` needs to be unset to use this.
- rclone ls azureblob:container + However if you have multiple user identities to choose from these must + be explicitly specified using exactly one of the `msi_object_id`, + `msi_client_id`, or `msi_mi_res_id` parameters. -You can also list the single container from the root. This will only -show the container specified by the SAS URL. + If none of `msi_object_id`, `msi_client_id`, or `msi_mi_res_id` is + set, this is is equivalent to using `env_auth`. - $ rclone lsd azureblob: - container/ -Note that you can't see or access any other containers - this will fail + ### Standard options - rclone ls azureblob:othercontainer + Here are the Standard options specific to azureblob (Microsoft Azure Blob Storage). -Container level SAS URLs are useful for temporarily allowing third -parties access to a single container or putting credentials into an -untrusted environment such as a CI build server. + #### --azureblob-account -Service principal with client secret + Azure Storage Account Name. -If these variables are set, rclone will authenticate with a service -principal with a client secret. + Set this to the Azure Storage Account Name in use. -- tenant: ID of the service principal's tenant. Also called its - "directory" ID. -- client_id: the service principal's client ID -- client_secret: one of the service principal's client secrets + Leave blank to use SAS URL or Emulator, otherwise it needs to be set. -The credentials can also be placed in a file using the -service_principal_file configuration option. + If this is blank and if env_auth is set it will be read from the + environment variable `AZURE_STORAGE_ACCOUNT_NAME` if possible. -Service principal with certificate -If these variables are set, rclone will authenticate with a service -principal with certificate. + Properties: -- tenant: ID of the service principal's tenant. Also called its - "directory" ID. -- client_id: the service principal's client ID -- client_certificate_path: path to a PEM or PKCS12 certificate file - including the private key. -- client_certificate_password: (optional) password for the certificate + - Config: account + - Env Var: RCLONE_AZUREBLOB_ACCOUNT + - Type: string + - Required: false + + #### --azureblob-env-auth + + Read credentials from runtime (environment variables, CLI or MSI). + + See the [authentication docs](/azureblob#authentication) for full info. + + Properties: + + - Config: env_auth + - Env Var: RCLONE_AZUREBLOB_ENV_AUTH + - Type: bool + - Default: false + + #### --azureblob-key + + Storage Account Shared Key. + + Leave blank to use SAS URL or Emulator. + + Properties: + + - Config: key + - Env Var: RCLONE_AZUREBLOB_KEY + - Type: string + - Required: false + + #### --azureblob-sas-url + + SAS URL for container level access only. + + Leave blank if using account/key or Emulator. + + Properties: + + - Config: sas_url + - Env Var: RCLONE_AZUREBLOB_SAS_URL + - Type: string + - Required: false + + #### --azureblob-tenant + + ID of the service principal's tenant. Also called its directory ID. + + Set this if using + - Service principal with client secret + - Service principal with certificate + - User with username and password + + + Properties: + + - Config: tenant + - Env Var: RCLONE_AZUREBLOB_TENANT + - Type: string + - Required: false + + #### --azureblob-client-id + + The ID of the client in use. 
+ + Set this if using + - Service principal with client secret + - Service principal with certificate + - User with username and password + + + Properties: + + - Config: client_id + - Env Var: RCLONE_AZUREBLOB_CLIENT_ID + - Type: string + - Required: false + + #### --azureblob-client-secret + + One of the service principal's client secrets + + Set this if using + - Service principal with client secret + + + Properties: + + - Config: client_secret + - Env Var: RCLONE_AZUREBLOB_CLIENT_SECRET + - Type: string + - Required: false + + #### --azureblob-client-certificate-path + + Path to a PEM or PKCS12 certificate file including the private key. + + Set this if using + - Service principal with certificate + + + Properties: + + - Config: client_certificate_path + - Env Var: RCLONE_AZUREBLOB_CLIENT_CERTIFICATE_PATH + - Type: string + - Required: false + + #### --azureblob-client-certificate-password + + Password for the certificate file (optional). + + Optionally set this if using + - Service principal with certificate + + And the certificate has a password. + + + **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + + Properties: + + - Config: client_certificate_password + - Env Var: RCLONE_AZUREBLOB_CLIENT_CERTIFICATE_PASSWORD + - Type: string + - Required: false + + ### Advanced options + + Here are the Advanced options specific to azureblob (Microsoft Azure Blob Storage). + + #### --azureblob-client-send-certificate-chain + + Send the certificate chain when using certificate auth. + + Specifies whether an authentication request will include an x5c header + to support subject name / issuer based authentication. When set to + true, authentication requests include the x5c header. + + Optionally set this if using + - Service principal with certificate + + + Properties: + + - Config: client_send_certificate_chain + - Env Var: RCLONE_AZUREBLOB_CLIENT_SEND_CERTIFICATE_CHAIN + - Type: bool + - Default: false + + #### --azureblob-username + + User name (usually an email address) + + Set this if using + - User with username and password + + + Properties: + + - Config: username + - Env Var: RCLONE_AZUREBLOB_USERNAME + - Type: string + - Required: false + + #### --azureblob-password + + The user's password + + Set this if using + - User with username and password + + + **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + + Properties: + + - Config: password + - Env Var: RCLONE_AZUREBLOB_PASSWORD + - Type: string + - Required: false + + #### --azureblob-service-principal-file + + Path to file containing credentials for use with a service principal. + + Leave blank normally. Needed only if you want to use a service principal instead of interactive login. + + $ az ad sp create-for-rbac --name "" \ + --role "Storage Blob Data Owner" \ + --scopes "/subscriptions//resourceGroups//providers/Microsoft.Storage/storageAccounts//blobServices/default/containers/" \ + > azure-principal.json + + See ["Create an Azure service principal"](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli) and ["Assign an Azure role for access to blob data"](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli) pages for more details. + + It may be more convenient to put the credentials directly into the + rclone config file under the `client_id`, `tenant` and `client_secret` + keys instead of setting `service_principal_file`. 
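+
+ For example, a config section with the credentials inline might look like this (a sketch - all values are placeholders):
+
+     [azsp]
+     type = azureblob
+     account = ACCOUNT_NAME
+     tenant = TENANT_ID
+     client_id = CLIENT_ID
+     client_secret = CLIENT_SECRET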
+ + + Properties: + + - Config: service_principal_file + - Env Var: RCLONE_AZUREBLOB_SERVICE_PRINCIPAL_FILE + - Type: string + - Required: false + + #### --azureblob-use-msi + + Use a managed service identity to authenticate (only works in Azure). + + When true, use a [managed service identity](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/) + to authenticate to Azure Storage instead of a SAS token or account key. + + If the VM(SS) on which this program is running has a system-assigned identity, it will + be used by default. If the resource has no system-assigned but exactly one user-assigned identity, + the user-assigned identity will be used by default. If the resource has multiple user-assigned + identities, the identity to use must be explicitly specified using exactly one of the msi_object_id, + msi_client_id, or msi_mi_res_id parameters. + + Properties: + + - Config: use_msi + - Env Var: RCLONE_AZUREBLOB_USE_MSI + - Type: bool + - Default: false + + #### --azureblob-msi-object-id + + Object ID of the user-assigned MSI to use, if any. + + Leave blank if msi_client_id or msi_mi_res_id specified. + + Properties: + + - Config: msi_object_id + - Env Var: RCLONE_AZUREBLOB_MSI_OBJECT_ID + - Type: string + - Required: false + + #### --azureblob-msi-client-id + + Object ID of the user-assigned MSI to use, if any. + + Leave blank if msi_object_id or msi_mi_res_id specified. + + Properties: + + - Config: msi_client_id + - Env Var: RCLONE_AZUREBLOB_MSI_CLIENT_ID + - Type: string + - Required: false + + #### --azureblob-msi-mi-res-id + + Azure resource ID of the user-assigned MSI to use, if any. + + Leave blank if msi_client_id or msi_object_id specified. + + Properties: + + - Config: msi_mi_res_id + - Env Var: RCLONE_AZUREBLOB_MSI_MI_RES_ID + - Type: string + - Required: false + + #### --azureblob-use-emulator + + Uses local storage emulator if provided as 'true'. + + Leave blank if using real azure storage endpoint. + + Properties: + + - Config: use_emulator + - Env Var: RCLONE_AZUREBLOB_USE_EMULATOR + - Type: bool + - Default: false + + #### --azureblob-endpoint + + Endpoint for the service. + + Leave blank normally. + + Properties: + + - Config: endpoint + - Env Var: RCLONE_AZUREBLOB_ENDPOINT + - Type: string + - Required: false + + #### --azureblob-upload-cutoff + + Cutoff for switching to chunked upload (<= 256 MiB) (deprecated). + + Properties: + + - Config: upload_cutoff + - Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF + - Type: string + - Required: false + + #### --azureblob-chunk-size + + Upload chunk size. + + Note that this is stored in memory and there may be up to + "--transfers" * "--azureblob-upload-concurrency" chunks stored at once + in memory. + + Properties: + + - Config: chunk_size + - Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE + - Type: SizeSuffix + - Default: 4Mi + + #### --azureblob-upload-concurrency + + Concurrency for multipart uploads. + + This is the number of chunks of the same file that are uploaded + concurrently. + + If you are uploading small numbers of large files over high-speed + links and these uploads do not fully utilize your bandwidth, then + increasing this may help to speed up the transfers. + + In tests, upload speed increases almost linearly with upload + concurrency. For example to fill a gigabit pipe it may be necessary to + raise this to 64. Note that this will use more memory. 
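+
+ For example, a single large upload over a fast link might be run as (a sketch):
+
+     rclone copy ./bigfile.iso remote:container --azureblob-upload-concurrency 64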
+ + Note that chunks are stored in memory and there may be up to + "--transfers" * "--azureblob-upload-concurrency" chunks stored at once + in memory. + + Properties: + + - Config: upload_concurrency + - Env Var: RCLONE_AZUREBLOB_UPLOAD_CONCURRENCY + - Type: int + - Default: 16 + + #### --azureblob-list-chunk + + Size of blob list. + + This sets the number of blobs requested in each listing chunk. Default + is the maximum, 5000. "List blobs" requests are permitted 2 minutes + per megabyte to complete. If an operation is taking longer than 2 + minutes per megabyte on average, it will time out ( + [source](https://docs.microsoft.com/en-us/rest/api/storageservices/setting-timeouts-for-blob-service-operations#exceptions-to-default-timeout-interval) + ). This can be used to limit the number of blobs items to return, to + avoid the time out. + + Properties: + + - Config: list_chunk + - Env Var: RCLONE_AZUREBLOB_LIST_CHUNK + - Type: int + - Default: 5000 + + #### --azureblob-access-tier + + Access tier of blob: hot, cool or archive. + + Archived blobs can be restored by setting access tier to hot or + cool. Leave blank if you intend to use default access tier, which is + set at account level + + If there is no "access tier" specified, rclone doesn't apply any tier. + rclone performs "Set Tier" operation on blobs while uploading, if objects + are not modified, specifying "access tier" to new one will have no effect. + If blobs are in "archive tier" at remote, trying to perform data transfer + operations from remote will not be allowed. User should first restore by + tiering blob to "Hot" or "Cool". + + Properties: + + - Config: access_tier + - Env Var: RCLONE_AZUREBLOB_ACCESS_TIER + - Type: string + - Required: false + + #### --azureblob-archive-tier-delete + + Delete archive tier blobs before overwriting. + + Archive tier blobs cannot be updated. So without this flag, if you + attempt to update an archive tier blob, then rclone will produce the + error: + + can't update archive tier blob without --azureblob-archive-tier-delete + + With this flag set then before rclone attempts to overwrite an archive + tier blob, it will delete the existing blob before uploading its + replacement. This has the potential for data loss if the upload fails + (unlike updating a normal blob) and also may cost more since deleting + archive tier blobs early may be chargable. + + + Properties: + + - Config: archive_tier_delete + - Env Var: RCLONE_AZUREBLOB_ARCHIVE_TIER_DELETE + - Type: bool + - Default: false + + #### --azureblob-disable-checksum + + Don't store MD5 checksum with object metadata. + + Normally rclone will calculate the MD5 checksum of the input before + uploading it so it can add it to metadata on the object. This is great + for data integrity checking but can cause long delays for large files + to start uploading. + + Properties: + + - Config: disable_checksum + - Env Var: RCLONE_AZUREBLOB_DISABLE_CHECKSUM + - Type: bool + - Default: false + + #### --azureblob-memory-pool-flush-time + + How often internal memory buffer pools will be flushed. (no longer used) + + Properties: + + - Config: memory_pool_flush_time + - Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_FLUSH_TIME + - Type: Duration + - Default: 1m0s + + #### --azureblob-memory-pool-use-mmap + + Whether to use mmap buffers in internal memory pool. 
(no longer used) + + Properties: + + - Config: memory_pool_use_mmap + - Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_USE_MMAP + - Type: bool + - Default: false + + #### --azureblob-encoding + + The encoding for the backend. + + See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + + Properties: + + - Config: encoding + - Env Var: RCLONE_AZUREBLOB_ENCODING + - Type: MultiEncoder + - Default: Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8 + + #### --azureblob-public-access + + Public access level of a container: blob or container. + + Properties: + + - Config: public_access + - Env Var: RCLONE_AZUREBLOB_PUBLIC_ACCESS + - Type: string + - Required: false + - Examples: + - "" + - The container and its blobs can be accessed only with an authorized request. + - It's a default value. + - "blob" + - Blob data within this container can be read via anonymous request. + - "container" + - Allow full public read access for container and blob data. + + #### --azureblob-directory-markers + + Upload an empty object with a trailing slash when a new directory is created + + Empty folders are unsupported for bucket based remotes, this option + creates an empty object ending with "/", to persist the folder. + + This object also has the metadata "hdi_isfolder = true" to conform to + the Microsoft standard. + + + Properties: + + - Config: directory_markers + - Env Var: RCLONE_AZUREBLOB_DIRECTORY_MARKERS + - Type: bool + - Default: false + + #### --azureblob-no-check-container + + If set, don't attempt to check the container exists or create it. + + This can be useful when trying to minimise the number of transactions + rclone does if you know the container exists already. + + + Properties: + + - Config: no_check_container + - Env Var: RCLONE_AZUREBLOB_NO_CHECK_CONTAINER + - Type: bool + - Default: false + + #### --azureblob-no-head-object + + If set, do not do HEAD before GET when getting objects. + + Properties: + + - Config: no_head_object + - Env Var: RCLONE_AZUREBLOB_NO_HEAD_OBJECT + - Type: bool + - Default: false + + + + ### Custom upload headers + + You can set custom upload headers with the `--header-upload` flag. + + - Cache-Control + - Content-Disposition + - Content-Encoding + - Content-Language + - Content-Type + + Eg `--header-upload "Content-Type: text/potato"` + + ## Limitations + + MD5 sums are only uploaded with chunked files if the source has an MD5 + sum. This will always be the case for a local to azure copy. + + `rclone about` is not supported by the Microsoft Azure Blob storage backend. Backends without + this capability cannot determine free space for an rclone mount or + use policy `mfs` (most free space) as a member of an rclone union + remote. + + See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) + + ## Azure Storage Emulator Support + + You can run rclone with the storage emulator (usually _azurite_). + + To do this, just set up a new remote with `rclone config` following + the instructions in the introduction and set `use_emulator` in the + advanced settings as `true`. You do not need to provide a default + account name nor an account key. But you can override them in the + `account` and `key` options. (Prior to v1.61 they were hard coded to + _azurite_'s `devstoreaccount1`.) 
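+
+ A minimal emulator remote might therefore look like this (a sketch - with azurite running locally on its default port, no account or key is needed):
+
+     [azemu]
+     type = azureblob
+     use_emulator = true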
+
+ Also, if you want to access a storage emulator instance running on a
+ different machine, you can override the `endpoint` parameter in the
+ advanced settings, setting it to
+ `http(s)://<machine address>:<port>/devstoreaccount1`
+ (e.g. `http://10.254.2.5:10000/devstoreaccount1`).
+
+ # Microsoft OneDrive
+
+ Paths are specified as `remote:path`
+
+ Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
+
+ ## Configuration
+
+ The initial setup for OneDrive involves getting a token from
+ Microsoft which you need to do in your browser. `rclone config` walks
+ you through it.
+
+ Here is an example of how to make a remote called `remote`. First run:
+
+     rclone config
+
+ This will guide you through an interactive setup process:
+
+     e) Edit existing remote
+     n) New remote
+     d) Delete remote
+     r) Rename remote
+     c) Copy remote
+     s) Set configuration password
+     q) Quit config
+     e/n/d/r/c/s/q> n
+     name> remote
+     Type of storage to configure.
+     Enter a string value. Press Enter for the default ("").
+     Choose a number from below, or type in your own value
+     [snip]
+     XX / Microsoft OneDrive
+        \ "onedrive"
+     [snip]
+     Storage> onedrive
+     Microsoft App Client Id
+     Leave blank normally.
+     Enter a string value. Press Enter for the default ("").
+     client_id>
+     Microsoft App Client Secret
+     Leave blank normally.
+     Enter a string value. Press Enter for the default ("").
+     client_secret>
+     Edit advanced config? (y/n)
+     y) Yes
+     n) No
+     y/n> n
+     Remote config
+     Use web browser to automatically authenticate rclone with remote?
+      * Say Y if the machine running rclone has a web browser you can use
+      * Say N if running rclone on a (remote) machine without web browser access
+     If not sure try Y. If Y failed, try N.
+     y) Yes
+     n) No
+     y/n> y
+     If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+     Log in and authorize rclone for access
+     Waiting for code...
+     Got code
+     Choose a number from below, or type in an existing value
+      1 / OneDrive Personal or Business
+        \ "onedrive"
+      2 / Sharepoint site
+        \ "sharepoint"
+      3 / Type in driveID
+        \ "driveid"
+      4 / Type in SiteID
+        \ "siteid"
+      5 / Search a Sharepoint site
+        \ "search"
+     Your choice> 1
+     Found 1 drives, please select the one you want to use:
+     0: OneDrive (business) id=b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
+     Chose drive to use:> 0
+     Found drive 'root' of type 'business', URL: https://org-my.sharepoint.com/personal/you/Documents
+     Is that okay?
+     y) Yes
+     n) No
+     y/n> y
+     --------------------
+     [remote]
+     type = onedrive
+     token = {"access_token":"youraccesstoken","token_type":"Bearer","refresh_token":"yourrefreshtoken","expiry":"2018-08-26T22:39:52.486512262+08:00"}
+     drive_id = b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
+     drive_type = business
+     --------------------
+     y) Yes this is OK
+     e) Edit this remote
+     d) Delete this remote
+     y/e/d> y
+
+ See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
+ machine with no Internet browser available.
+
+ Note that rclone runs a webserver on your local machine to collect the
+ token as returned from Microsoft. This only runs from the moment it
+ opens your browser to the moment you get back the verification
+ code. This is on `http://127.0.0.1:53682/` and it may require
+ you to unblock it temporarily if you are running a host firewall.
+ + Once configured you can then use `rclone` like this, + + List directories in top level of your OneDrive + + rclone lsd remote: + + List all the files in your OneDrive + + rclone ls remote: + + To copy a local directory to an OneDrive directory called backup + + rclone copy /home/source remote:backup + + ### Getting your own Client ID and Key + + rclone uses a default Client ID when talking to OneDrive, unless a custom `client_id` is specified in the config. + The default Client ID and Key are shared by all rclone users when performing requests. + + You may choose to create and use your own Client ID, in case the default one does not work well for you. + For example, you might see throttling. + + #### Creating Client ID for OneDrive Personal + + To create your own Client ID, please follow these steps: + + 1. Open https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade and then click `New registration`. + 2. Enter a name for your app, choose account type `Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)`, select `Web` in `Redirect URI`, then type (do not copy and paste) `http://localhost:53682/` and click Register. Copy and keep the `Application (client) ID` under the app name for later use. + 3. Under `manage` select `Certificates & secrets`, click `New client secret`. Enter a description (can be anything) and set `Expires` to 24 months. Copy and keep that secret _Value_ for later use (you _won't_ be able to see this value afterwards). + 4. Under `manage` select `API permissions`, click `Add a permission` and select `Microsoft Graph` then select `delegated permissions`. + 5. Search and select the following permissions: `Files.Read`, `Files.ReadWrite`, `Files.Read.All`, `Files.ReadWrite.All`, `offline_access`, `User.Read` and `Sites.Read.All` (if custom access scopes are configured, select the permissions accordingly). Once selected click `Add permissions` at the bottom. + + Now the application is complete. Run `rclone config` to create or edit a OneDrive remote. + Supply the app ID and password as Client ID and Secret, respectively. rclone will walk you through the remaining steps. + + The access_scopes option allows you to configure the permissions requested by rclone. + See [Microsoft Docs](https://docs.microsoft.com/en-us/graph/permissions-reference#files-permissions) for more information about the different scopes. + + The `Sites.Read.All` permission is required if you need to [search SharePoint sites when configuring the remote](https://github.com/rclone/rclone/pull/5883). However, if that permission is not assigned, you need to exclude `Sites.Read.All` from your access scopes or set `disable_site_permission` option to true in the advanced options. + + #### Creating Client ID for OneDrive Business + + The steps for OneDrive Personal may or may not work for OneDrive Business, depending on the security settings of the organization. + A common error is that the publisher of the App is not verified. + + You may try to [verify you account](https://docs.microsoft.com/en-us/azure/active-directory/develop/publisher-verification-overview), or try to limit the App to your organization only, as shown below. + + 1. Make sure to create the App with your business account. + 2. Follow the steps above to create an App. However, we need a different account type here: `Accounts in this organizational directory only (*** - Single tenant)`. 
Note that you can also change the account type after creating the App.
+ 3. Find the [tenant ID](https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-how-to-find-tenant) of your organization.
+ 4. In the rclone config, set `auth_url` to `https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/authorize`.
+ 5. In the rclone config, set `token_url` to `https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token`.
+
+ Note: If you have a special region, you may need a different host in steps 4 and 5. Here are [some hints](https://github.com/rclone/rclone/blob/bc23bf11db1c78c6ebbf8ea538fbebf7058b4176/backend/onedrive/onedrive.go#L86).
+
+
+ ### Modification time and hashes
+
+ OneDrive allows modification times to be set on objects accurate to 1
+ second. These will be used to detect whether objects need syncing or
+ not.
+
+ OneDrive Personal, OneDrive for Business and Sharepoint Server support
+ [QuickXorHash](https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash).
+
+ Before rclone 1.62 the default hash for Onedrive Personal was `SHA1`.
+ For rclone 1.62 and above the default for all Onedrive backends is
+ `QuickXorHash`.
+
+ Starting from July 2023 `SHA1` support is being phased out in Onedrive
+ Personal in favour of `QuickXorHash`. If necessary the
+ `--onedrive-hash-type` flag (or `hash_type` config option) can be used
+ to select `SHA1` during the transition period if this is important to
+ your workflow.
+
+ For all types of OneDrive you can use the `--checksum` flag.
+
+ ### Restricted filename characters
+
+ In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
+ the following characters are also replaced:
+
+ | Character | Value | Replacement |
+ | --------- |:-----:|:-----------:|
+ | " | 0x22 | ＂ |
+ | * | 0x2A | ＊ |
+ | : | 0x3A | ： |
+ | < | 0x3C | ＜ |
+ | > | 0x3E | ＞ |
+ | ? | 0x3F | ？ |
+ | \ | 0x5C | ＼ |
+ | \| | 0x7C | ｜ |
+
+ File names can also not end with the following characters.
+ These only get replaced if they are the last character in the name:
+
+ | Character | Value | Replacement |
+ | --------- |:-----:|:-----------:|
+ | SP | 0x20 | ␠ |
+ | . | 0x2E | ． |
+
+ File names can also not begin with the following characters.
+ These only get replaced if they are the first character in the name:
+
+ | Character | Value | Replacement |
+ | --------- |:-----:|:-----------:|
+ | SP | 0x20 | ␠ |
+ | ~ | 0x7E | ～ |
+
+ Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+ as they can't be used in JSON strings.
+
+ ### Deleting files
+
+ Any files you delete with rclone will end up in the trash. Microsoft
+ doesn't provide an API to permanently delete files, nor to empty the
+ trash, so you will have to do that with one of Microsoft's apps or via
+ the OneDrive website.
+
+
+ ### Standard options
+
+ Here are the Standard options specific to onedrive (Microsoft OneDrive).
+
+ #### --onedrive-client-id
+
+ OAuth Client Id.
+
+ Leave blank normally.
+
+ Properties:
+
+ - Config: client_id
+ - Env Var: RCLONE_ONEDRIVE_CLIENT_ID
+ - Type: string
+ - Required: false
+
+ #### --onedrive-client-secret
+
+ OAuth Client Secret.
+
+ Leave blank normally.
+
+ Properties:
+
+ - Config: client_secret
+ - Env Var: RCLONE_ONEDRIVE_CLIENT_SECRET
+ - Type: string
+ - Required: false
+
+ #### --onedrive-region
+
+ Choose national cloud region for OneDrive.
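+
+ For example, a remote in the US Government cloud might be configured with (a sketch):
+
+     [gov]
+     type = onedrive
+     region = us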
+ + Properties: + + - Config: region + - Env Var: RCLONE_ONEDRIVE_REGION + - Type: string + - Default: "global" + - Examples: + - "global" + - Microsoft Cloud Global + - "us" + - Microsoft Cloud for US Government + - "de" + - Microsoft Cloud Germany + - "cn" + - Azure and Office 365 operated by Vnet Group in China + + ### Advanced options + + Here are the Advanced options specific to onedrive (Microsoft OneDrive). + + #### --onedrive-token + + OAuth Access Token as a JSON blob. + + Properties: + + - Config: token + - Env Var: RCLONE_ONEDRIVE_TOKEN + - Type: string + - Required: false + + #### --onedrive-auth-url + + Auth server URL. + + Leave blank to use the provider defaults. + + Properties: + + - Config: auth_url + - Env Var: RCLONE_ONEDRIVE_AUTH_URL + - Type: string + - Required: false + + #### --onedrive-token-url + + Token server url. + + Leave blank to use the provider defaults. + + Properties: + + - Config: token_url + - Env Var: RCLONE_ONEDRIVE_TOKEN_URL + - Type: string + - Required: false + + #### --onedrive-chunk-size + + Chunk size to upload files with - must be multiple of 320k (327,680 bytes). + + Above this size files will be chunked - must be multiple of 320k (327,680 bytes) and + should not exceed 250M (262,144,000 bytes) else you may encounter \"Microsoft.SharePoint.Client.InvalidClientQueryException: The request message is too big.\" + Note that the chunks will be buffered into memory. + + Properties: + + - Config: chunk_size + - Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE + - Type: SizeSuffix + - Default: 10Mi + + #### --onedrive-drive-id + + The ID of the drive to use. + + Properties: + + - Config: drive_id + - Env Var: RCLONE_ONEDRIVE_DRIVE_ID + - Type: string + - Required: false + + #### --onedrive-drive-type + + The type of the drive (personal | business | documentLibrary). + + Properties: + + - Config: drive_type + - Env Var: RCLONE_ONEDRIVE_DRIVE_TYPE + - Type: string + - Required: false + + #### --onedrive-root-folder-id + + ID of the root folder. + + This isn't normally needed, but in special circumstances you might + know the folder ID that you wish to access but not be able to get + there through a path traversal. + + + Properties: + + - Config: root_folder_id + - Env Var: RCLONE_ONEDRIVE_ROOT_FOLDER_ID + - Type: string + - Required: false + + #### --onedrive-access-scopes + + Set scopes to be requested by rclone. + + Choose or manually enter a custom space separated list with all scopes, that rclone should request. + + + Properties: + + - Config: access_scopes + - Env Var: RCLONE_ONEDRIVE_ACCESS_SCOPES + - Type: SpaceSepList + - Default: Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access + - Examples: + - "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access" + - Read and write access to all resources + - "Files.Read Files.Read.All Sites.Read.All offline_access" + - Read only access to all resources + - "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All offline_access" + - Read and write access to all resources, without the ability to browse SharePoint sites. + - Same as if disable_site_permission was set to true + + #### --onedrive-disable-site-permission + + Disable the request for Sites.Read.All permission. + + If set to true, you will no longer be able to search for a SharePoint site when + configuring drive ID, because rclone will not request Sites.Read.All permission. 
+ Set it to true if your organization didn't assign Sites.Read.All permission to the + application, and your organization disallows users to consent app permission + request on their own. + + Properties: + + - Config: disable_site_permission + - Env Var: RCLONE_ONEDRIVE_DISABLE_SITE_PERMISSION + - Type: bool + - Default: false + + #### --onedrive-expose-onenote-files + + Set to make OneNote files show up in directory listings. + + By default, rclone will hide OneNote files in directory listings because + operations like "Open" and "Update" won't work on them. But this + behaviour may also prevent you from deleting them. If you want to + delete OneNote files or otherwise want them to show up in directory + listing, set this option. + + Properties: + + - Config: expose_onenote_files + - Env Var: RCLONE_ONEDRIVE_EXPOSE_ONENOTE_FILES + - Type: bool + - Default: false + + #### --onedrive-server-side-across-configs + + Deprecated: use --server-side-across-configs instead. + + Allow server-side operations (e.g. copy) to work across different onedrive configs. + + This will only work if you are copying between two OneDrive *Personal* drives AND + the files to copy are already shared between them. In other cases, rclone will + fall back to normal copy (which will be slightly slower). + + Properties: + + - Config: server_side_across_configs + - Env Var: RCLONE_ONEDRIVE_SERVER_SIDE_ACROSS_CONFIGS + - Type: bool + - Default: false + + #### --onedrive-list-chunk + + Size of listing chunk. + + Properties: + + - Config: list_chunk + - Env Var: RCLONE_ONEDRIVE_LIST_CHUNK + - Type: int + - Default: 1000 + + #### --onedrive-no-versions + + Remove all versions on modifying operations. + + Onedrive for business creates versions when rclone uploads new files + overwriting an existing one and when it sets the modification time. + + These versions take up space out of the quota. + + This flag checks for versions after file upload and setting + modification time and removes all but the last version. + + **NB** Onedrive personal can't currently delete versions so don't use + this flag there. + + + Properties: + + - Config: no_versions + - Env Var: RCLONE_ONEDRIVE_NO_VERSIONS + - Type: bool + - Default: false + + #### --onedrive-link-scope + + Set the scope of the links created by the link command. + + Properties: + + - Config: link_scope + - Env Var: RCLONE_ONEDRIVE_LINK_SCOPE + - Type: string + - Default: "anonymous" + - Examples: + - "anonymous" + - Anyone with the link has access, without needing to sign in. + - This may include people outside of your organization. + - Anonymous link support may be disabled by an administrator. + - "organization" + - Anyone signed into your organization (tenant) can use the link to get access. + - Only available in OneDrive for Business and SharePoint. + + #### --onedrive-link-type + + Set the type of the links created by the link command. + + Properties: + + - Config: link_type + - Env Var: RCLONE_ONEDRIVE_LINK_TYPE + - Type: string + - Default: "view" + - Examples: + - "view" + - Creates a read-only link to the item. + - "edit" + - Creates a read-write link to the item. + - "embed" + - Creates an embeddable link to the item. + + #### --onedrive-link-password + + Set the password for links created by the link command. + + At the time of writing this only works with OneDrive personal paid accounts. 
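+
+ For example, a password protected read-write link could be created with (a sketch - the path and password are placeholders):
+
+     rclone link --onedrive-link-type edit --onedrive-link-password PASSWORD remote:path/to/file.docx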
+
+ #### --onedrive-hash-type
+
+ Specify the hash in use for the backend.
+
+ This specifies the hash type in use. If set to "auto" it will use the
+ default hash which is QuickXorHash.
+
+ Before rclone 1.62 an SHA1 hash was used by default for Onedrive
+ Personal. For 1.62 and later the default is to use a QuickXorHash for
+ all onedrive types. If an SHA1 hash is desired then set this option
+ accordingly.
+
+ From July 2023 QuickXorHash will be the only available hash for
+ both OneDrive for Business and OneDrive Personal.
+
+ This can be set to "none" to not use any hashes.
+
+ If the hash requested does not exist on the object, it will be
+ returned as an empty string which is treated as a missing hash by
+ rclone.
+
+ Properties:
+
+ - Config: hash_type
+ - Env Var: RCLONE_ONEDRIVE_HASH_TYPE
+ - Type: string
+ - Default: "auto"
+ - Examples:
+     - "auto"
+         - Rclone chooses the best hash
+     - "quickxor"
+         - QuickXor
+     - "sha1"
+         - SHA1
+     - "sha256"
+         - SHA256
+     - "crc32"
+         - CRC32
+     - "none"
+         - None - don't use any hashes
+
+ #### --onedrive-av-override
+
+ Allows download of files the server thinks have a virus.
+
+ The onedrive/sharepoint server may check files uploaded with an anti-virus
+ checker. If it detects any potential viruses or malware it will
+ block download of the file.
+
+ In this case you will see a message like this
+
+     server reports this file is infected with a virus - use --onedrive-av-override to download anyway: Infected (name of virus): 403 Forbidden:
+
+ If you are 100% sure you want to download this file anyway then use
+ the --onedrive-av-override flag, or av_override = true in the config file.
-- client_send_certificate_chain: (optional) Specifies whether an
-  authentication request will include an x5c header to support subject
-  name / issuer based authentication. When set to "true" or "1",
-  authentication requests include the x5c header.
-NB client_certificate_password must be obscured - see rclone obscure.
-User with username and password
+
+ Properties:
-If these variables are set, rclone will authenticate with username and
-password.
+
+ - Config: av_override
+ - Env Var: RCLONE_ONEDRIVE_AV_OVERRIDE
+ - Type: bool
+ - Default: false
-- tenant: (optional) tenant to authenticate in. Defaults to
-  "organizations".
-- client_id: client ID of the application the user will authenticate
-  to
-- username: a username (usually an email address)
-- password: the user's password
+
+ #### --onedrive-encoding
-Microsoft doesn't recommend this kind of authentication, because it's
-less secure than other authentication flows. This method is not
-interactive, so it isn't compatible with any form of multi-factor
-authentication, and the application must already have user or admin
-consent. This credential can only authenticate work and school accounts;
-it can't authenticate Microsoft accounts.
+
+ The encoding for the backend.
-NB password must be obscured - see rclone obscure.
+
+ See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
-Managed Service Identity Credentials
+
+ Properties:
-If use_msi is set then managed service identity credentials are used.
-This authentication only works when running in an Azure service.
-env_auth needs to be unset to use this.
+
+ - Config: encoding
+ - Env Var: RCLONE_ONEDRIVE_ENCODING
+ - Type: MultiEncoder
+ - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot
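+
+ A quick way to exercise the `hash_type` option above is to ask the backend for
+ checksums directly (a sketch; `remote:` is a placeholder for your OneDrive remote):
+
+     # List QuickXorHash values for everything under the path
+     rclone hashsum quickxor remote:path
+
+     # Verify local files against the remote using the configured hash
+     rclone check /home/source remote:backup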
-However if you have multiple user identities to choose from these must
-be explicitly specified using exactly one of the msi_object_id,
-msi_client_id, or msi_mi_res_id parameters.
-If none of msi_object_id, msi_client_id, or msi_mi_res_id is set, this
-is is equivalent to using env_auth.
-Standard options
+
+ ## Limitations
-Here are the Standard options specific to azureblob (Microsoft Azure
-Blob Storage).
+
+ If you don't use rclone for 90 days the refresh token will
+ expire. This will result in authorization problems. This is easy to
+ fix by running the `rclone config reconnect remote:` command to get a
+ new token and refresh token.
---azureblob-account
+
+ ### Naming
-Azure Storage Account Name.
+
+ Note that OneDrive is case insensitive so you can't have a
+ file called "Hello.doc" and one called "hello.doc".
-Set this to the Azure Storage Account Name in use.
+
+ There are quite a few characters that can't be in OneDrive file
+ names. These can't occur on Windows platforms, but on non-Windows
+ platforms they are common. Rclone will map these names to and from an
+ identical looking unicode equivalent. For example, if a file has a `?`
+ in it, it will be mapped to `?` instead.
-Leave blank to use SAS URL or Emulator, otherwise it needs to be set.
+
+ ### File sizes
-If this is blank and if env_auth is set it will be read from the
-environment variable AZURE_STORAGE_ACCOUNT_NAME if possible.
+
+ The largest allowed file size is 250 GiB for both OneDrive Personal and OneDrive for Business [(Updated 13 Jan 2021)](https://support.microsoft.com/en-us/office/invalid-file-names-and-file-types-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa?ui=en-us&rs=en-us&ad=us#individualfilesize).
-Properties:
+
+ ### Path length
-- Config: account
-- Env Var: RCLONE_AZUREBLOB_ACCOUNT
-- Type: string
-- Required: false
+
+ The entire path, including the file name, must contain fewer than 400 characters for OneDrive, OneDrive for Business and SharePoint Online. If you are encrypting file and folder names with rclone, you may want to pay attention to this limitation because the encrypted names are typically longer than the original ones.
---azureblob-env-auth
+
+ ### Number of files
-Read credentials from runtime (environment variables, CLI or MSI).
+
+ OneDrive seems to be OK with at least 50,000 files in a folder, but at
+ 100,000 rclone will get errors listing the directory like `couldn’t
+ list files: UnknownError:`. See
+ [#2707](https://github.com/rclone/rclone/issues/2707) for more info.
-See the authentication docs for full info.
+
+ An official document about the limitations for different types of OneDrive can be found [here](https://support.office.com/en-us/article/invalid-file-names-and-file-types-in-onedrive-onedrive-for-business-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa).
-Properties:
+
+ ## Versions
-- Config: env_auth
-- Env Var: RCLONE_AZUREBLOB_ENV_AUTH
-- Type: bool
-- Default: false
+
+ Every change to a file in OneDrive causes the service to create a new
+ version of the file. This counts against a user's quota. For
+ example, changing the modification time of a file creates a second
+ version, so the file apparently uses twice the space.
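+
+ Since versions count against the same quota as the files themselves, a quick way
+ to see their effect is to check the drive's usage (a sketch; `remote:` is a
+ placeholder for your OneDrive remote):
+
+     rclone about remote: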
---azureblob-key
+ The `copy` command, for example, is affected by this: rclone copies
+ the file and then sets the modification time to match the
+ source file, which uses up another version.
-Storage Account Shared Key.
+
+ You can use the `rclone cleanup` command (see below) to remove all old
+ versions.
-Leave blank to use SAS URL or Emulator.
+
+ Or you can set the `no_versions` parameter to `true` and rclone will
+ remove versions after operations which create new versions (see the
+ sketch at the end of this section). This takes
+ extra transactions so only enable it if you need it.
-Properties:
+
+ **Note** At the time of writing Onedrive Personal creates versions
+ (but not for setting the modification time) but the API for removing
+ them returns "API not found" so cleanup and `no_versions` should not
+ be used on Onedrive Personal.
-- Config: key
-- Env Var: RCLONE_AZUREBLOB_KEY
-- Type: string
-- Required: false
+
+ ### Disabling versioning
---azureblob-sas-url
+
+ Starting October 2018, users will no longer be able to
+ disable versioning by default. This is because Microsoft has brought
+ an
+ [update](https://techcommunity.microsoft.com/t5/Microsoft-OneDrive-Blog/New-Updates-to-OneDrive-and-SharePoint-Team-Site-Versioning/ba-p/204390)
+ to the mechanism. To change this new default setting, a PowerShell
+ command is required to be run by a SharePoint admin. If you are an
+ admin, you can run these commands in PowerShell to change that
+ setting:
-SAS URL for container level access only.
+
+ 1. `Install-Module -Name Microsoft.Online.SharePoint.PowerShell` (in case you haven't installed this already)
+ 2. `Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking`
+ 3. `Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM` (replacing `YOURSITE`, `YOU`, `YOURSITE.COM` with the actual values; this will prompt for your credentials)
+ 4. `Set-SPOTenant -EnableMinimumVersionRequirement $False`
+ 5. `Disconnect-SPOService` (to disconnect from the server)
-Leave blank if using account/key or Emulator.
+
+ *Below are the steps for normal users to disable versioning. If you don't see the "No Versioning" option, make sure the above requirements are met.*
-Properties:
+
+ User [Weropol](https://github.com/Weropol) has found a method to disable
+ versioning on OneDrive
-- Config: sas_url
-- Env Var: RCLONE_AZUREBLOB_SAS_URL
-- Type: string
-- Required: false
+
+ 1. Open the settings menu by clicking on the gear symbol at the top of the OneDrive Business page.
+ 2. Click Site settings.
+ 3. Once on the Site settings page, navigate to Site Administration > Site libraries and lists.
+ 4. Click Customize "Documents".
+ 5. Click General Settings > Versioning Settings.
+ 6. Under Document Version History select the option No versioning.
+    Note: This will disable the creation of new file versions, but will not remove any previous versions. Your documents are safe.
+ 7. Apply the changes by clicking OK.
+ 8. Use rclone to upload or modify files. (I also use the --no-update-modtime flag)
+ 9. Restore the versioning settings after using rclone. (Optional)
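+
+ A minimal sketch of enabling `no_versions`, as referenced above (the remote name
+ `remote` is a placeholder):
+
+     rclone copy /home/source remote:backup --onedrive-no-versions
+
+ or persistently in the config file:
+
+     [remote]
+     type = onedrive
+     no_versions = true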
---azureblob-tenant
+
+ ## Cleanup
-ID of the service principal's tenant. Also called its directory ID.
+
+ OneDrive supports `rclone cleanup` which causes rclone to look through
+ every file under the path supplied and delete all versions but the
+ current version. Because this involves traversing all the files, then
+ querying each file for versions it can be quite slow. Rclone does
+ `--checkers` tests in parallel. The command also supports `--interactive`/`i`
+ or `--dry-run` which is a great way to see what it would do.
-Set this if using - Service principal with client secret - Service
-principal with certificate - User with username and password
+
+     rclone cleanup --interactive remote:path/subdir # interactively remove all old versions for path/subdir
+     rclone cleanup remote:path/subdir # unconditionally remove all old versions for path/subdir
-Properties:
+
+ **NB** Onedrive personal can't currently delete versions
-- Config: tenant
-- Env Var: RCLONE_AZUREBLOB_TENANT
-- Type: string
-- Required: false
+
+ ## Troubleshooting ##
---azureblob-client-id
+
+ ### Excessive throttling or blocked on SharePoint
-The ID of the client in use.
+
+ If you experience excessive throttling or are being blocked on SharePoint then it may help to set the user agent explicitly with a flag like this: `--user-agent "ISV|rclone.org|rclone/v1.55.1"`
-Set this if using - Service principal with client secret - Service
-principal with certificate - User with username and password
+
+ The specific details can be found in the Microsoft document: [Avoid getting throttled or blocked in SharePoint Online](https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online#how-to-decorate-your-http-traffic-to-avoid-throttling)
-Properties:
+
+ ### Unexpected file size/hash differences on Sharepoint ####
-- Config: client_id
-- Env Var: RCLONE_AZUREBLOB_CLIENT_ID
-- Type: string
-- Required: false
+
+ It is a
+ [known](https://github.com/OneDrive/onedrive-api-docs/issues/935#issuecomment-441741631)
+ issue that Sharepoint (not OneDrive or OneDrive for Business) silently modifies
+ uploaded files, mainly Office files (.docx, .xlsx, etc.), causing file size and
+ hash checks to fail. There are also other situations that will cause OneDrive to
+ report inconsistent file sizes. To use rclone with such
+ affected files on Sharepoint, you
+ may disable these checks with the following command line arguments:
---azureblob-client-secret
+
+     --ignore-checksum --ignore-size
-One of the service principal's client secrets
-Set this if using - Service principal with client secret
+
+ Alternatively, if you have write access to the OneDrive files, it may be possible
+ to fix this problem for certain files, by attempting the steps below.
+ Open the web interface for [OneDrive](https://onedrive.live.com) and find the
+ affected files (which will be in the error messages/log for rclone). Simply click on
+ each of these files, causing OneDrive to open them on the web. This will cause each
+ file to be converted in place to a format that is functionally equivalent
+ but which will no longer trigger the size discrepancy. Once all problematic files
+ are converted you will no longer need the ignore options above.
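+
+ For reference, a full sync using both flags might look like this (a sketch; the
+ remote name `remote:` is a placeholder for your Sharepoint remote):
+
+     rclone sync --ignore-checksum --ignore-size /home/source remote:backup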
-Properties:
+
+ ### Replacing/deleting existing files on Sharepoint gets "item not found" ####
-- Config: client_secret
-- Env Var: RCLONE_AZUREBLOB_CLIENT_SECRET
-- Type: string
-- Required: false
+
+ It is a [known](https://github.com/OneDrive/onedrive-api-docs/issues/1068) issue
+ that Sharepoint (not OneDrive or OneDrive for Business) may return "item not
+ found" errors when users try to replace or delete uploaded files; this seems to
+ mainly affect Office files (.docx, .xlsx, etc.) and web files (.html, .aspx, etc.).
+ As a workaround, you may use
+ the `--backup-dir <BACKUP_DIR>` command line argument so rclone moves the
+ files to be replaced/deleted into a given backup directory (instead of directly
+ replacing/deleting them). For example, to instruct rclone to move the files into
+ the directory `rclone-backup-dir` on backend `mysharepoint`, you may use:
---azureblob-client-certificate-path
+
+     --backup-dir mysharepoint:rclone-backup-dir
-Path to a PEM or PKCS12 certificate file including the private key.
-Set this if using - Service principal with certificate
+
+ ### access\_denied (AADSTS65005) ####
-Properties:
+
+     Error: access_denied
+     Code: AADSTS65005
+     Description: Using application 'rclone' is currently not supported for your
+     organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An
+     administrator needs to claim ownership of the company by DNS validation of
+     [YOUR_ORGANIZATION] before the application rclone can be provisioned.
-- Config: client_certificate_path
-- Env Var: RCLONE_AZUREBLOB_CLIENT_CERTIFICATE_PATH
-- Type: string
-- Required: false
---azureblob-client-certificate-password
+
+ This means that rclone can't use the OneDrive for Business API with your account. You can't do much about it, maybe write an email to your admins.
-Password for the certificate file (optional).
+
+ However, there are other ways to interact with your OneDrive account. Have a look at the WebDAV backend: https://rclone.org/webdav/#sharepoint
-Optionally set this if using - Service principal with certificate
+
+ ### invalid\_grant (AADSTS50076) ####
-And the certificate has a password.
+
+     Error: invalid_grant
+     Code: AADSTS50076
+     Description: Due to a configuration change made by your administrator, or
+     because you moved to a new location, you must use multi-factor
+     authentication to access '...'.
-NB Input to this must be obscured - see rclone obscure.
-Properties:
+
+ If you see the error above after enabling multi-factor authentication for your account, you can fix it by refreshing your OAuth refresh token. To do that, run `rclone config`, and choose to edit your OneDrive backend. Then, you don't need to actually make any changes until you reach this question: `Already have a token - refresh?`. For this question, answer `y` and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend.
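+
+ Alternatively, the token can be refreshed non-interactively with the reconnect
+ command mentioned in the Limitations section above (a sketch; `remote:` is your
+ OneDrive remote):
+
+     rclone config reconnect remote: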
-- Config: client_certificate_password
-- Env Var: RCLONE_AZUREBLOB_CLIENT_CERTIFICATE_PASSWORD
-- Type: string
-- Required: false
+
+ ### Invalid request when making public links ####
-Advanced options
+
+ On Sharepoint and OneDrive for Business, `rclone link` may return an "Invalid
+ request" error. A possible cause is that the organisation admin didn't allow
+ public links to be made for the organisation/sharepoint library. To fix the
+ permissions as an admin, take a look at the docs:
+ [1](https://docs.microsoft.com/en-us/sharepoint/turn-external-sharing-on-or-off),
+ [2](https://support.microsoft.com/en-us/office/set-up-and-manage-access-requests-94b26e0b-2822-49d4-929a-8455698654b3).
-Here are the Advanced options specific to azureblob (Microsoft Azure
-Blob Storage).
+
+ ### Can not access `Shared with me` files
---azureblob-client-send-certificate-chain
+
+ Shared with me files are not supported by rclone [currently](https://github.com/rclone/rclone/issues/4062), but there is a workaround:
-Send the certificate chain when using certificate auth.
+
+ 1. Visit [https://onedrive.live.com](https://onedrive.live.com/)
+ 2. Right click an item in `Shared`, then click `Add shortcut to My files` in the context menu
+    ![make_shortcut](https://user-images.githubusercontent.com/60313789/206118040-7e762b3b-aa61-41a1-8649-cc18889f3572.png "Screenshot (Shared with me)")
+ 3. The shortcut will appear in `My files`. You can access it with rclone; it behaves like a normal folder/file.
+    ![in_my_files](https://i.imgur.com/0S8H3li.png "Screenshot (My Files)")
+    ![rclone_mount](https://i.imgur.com/2Iq66sW.png "Screenshot (rclone mount)")
-Specifies whether an authentication request will include an x5c header
-to support subject name / issuer based authentication. When set to true,
-authentication requests include the x5c header.
+
+ ### Live Photos uploaded from iOS (small video clips in .heic files)
-Optionally set this if using - Service principal with certificate
+
+ The iOS OneDrive app introduced [upload and storage](https://techcommunity.microsoft.com/t5/microsoft-onedrive-blog/live-photos-come-to-onedrive/ba-p/1953452)
+ of [Live Photos](https://support.apple.com/en-gb/HT207310) in 2020.
+ The usage and download of these uploaded Live Photos are unfortunately still work-in-progress
+ and this introduces several issues when copying, synchronising and mounting – both in rclone and in the native OneDrive client on Windows.
-Properties:
+
+ The root cause can easily be seen if you locate one of your Live Photos in the OneDrive web interface.
+ Then download the photo from the web interface. You will then see that the size of the downloaded .heic file is smaller than the size displayed in the web interface.
+ The downloaded file is smaller because it only contains a single frame (still photo) extracted from the Live Photo (movie) stored in OneDrive.
-- Config: client_send_certificate_chain
-- Env Var: RCLONE_AZUREBLOB_CLIENT_SEND_CERTIFICATE_CHAIN
-- Type: bool
-- Default: false
+
+ The different sizes will cause `rclone copy/sync` to repeatedly recopy unmodified photos, with output something like this:
---azureblob-username
+
+     DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667)
+     DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK
+     INFO  : 20230203_123826234_iOS.heic: Copied (replaced existing)
-User name (usually an email address)
+
+ These recopies can be worked around by adding `--ignore-size`. Please note that this workaround only syncs the still-picture, not the movie clip,
+ and relies on modification dates being correctly updated on all files in all situations.
-Set this if using - User with username and password
+
+ The different sizes will also cause `rclone check` to report size errors, something like this:
-Properties:
+
+     ERROR : 20230203_123826234_iOS.heic: sizes differ
-- Config: username
-- Env Var: RCLONE_AZUREBLOB_USERNAME
-- Type: string
-- Required: false
+
+ These check errors can be suppressed by adding `--ignore-size`.
---azureblob-password
+
+ The different sizes will also cause `rclone mount` to fail downloading with an error something like this:
-The user's password
+
+     ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF
-Set this if using - User with username and password
+
+ or like this when using `--cache-mode=full`:
-NB Input to this must be obscured - see rclone obscure.
+ INFO : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: + ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: -Properties: + # OpenDrive -- Config: password -- Env Var: RCLONE_AZUREBLOB_PASSWORD -- Type: string -- Required: false + Paths are specified as `remote:path` ---azureblob-service-principal-file + Paths may be as deep as required, e.g. `remote:directory/subdirectory`. -Path to file containing credentials for use with a service principal. + ## Configuration -Leave blank normally. Needed only if you want to use a service principal -instead of interactive login. + Here is an example of how to make a remote called `remote`. First run: - $ az ad sp create-for-rbac --name "" \ - --role "Storage Blob Data Owner" \ - --scopes "/subscriptions//resourceGroups//providers/Microsoft.Storage/storageAccounts//blobServices/default/containers/" \ - > azure-principal.json + rclone config -See "Create an Azure service principal" and "Assign an Azure role for -access to blob data" pages for more details. + This will guide you through an interactive setup process: -It may be more convenient to put the credentials directly into the -rclone config file under the client_id, tenant and client_secret keys -instead of setting service_principal_file. - -Properties: - -- Config: service_principal_file -- Env Var: RCLONE_AZUREBLOB_SERVICE_PRINCIPAL_FILE -- Type: string -- Required: false - ---azureblob-use-msi - -Use a managed service identity to authenticate (only works in Azure). - -When true, use a managed service identity to authenticate to Azure -Storage instead of a SAS token or account key. - -If the VM(SS) on which this program is running has a system-assigned -identity, it will be used by default. If the resource has no -system-assigned but exactly one user-assigned identity, the -user-assigned identity will be used by default. If the resource has -multiple user-assigned identities, the identity to use must be -explicitly specified using exactly one of the msi_object_id, -msi_client_id, or msi_mi_res_id parameters. - -Properties: - -- Config: use_msi -- Env Var: RCLONE_AZUREBLOB_USE_MSI -- Type: bool -- Default: false - ---azureblob-msi-object-id - -Object ID of the user-assigned MSI to use, if any. - -Leave blank if msi_client_id or msi_mi_res_id specified. - -Properties: - -- Config: msi_object_id -- Env Var: RCLONE_AZUREBLOB_MSI_OBJECT_ID -- Type: string -- Required: false - ---azureblob-msi-client-id - -Object ID of the user-assigned MSI to use, if any. - -Leave blank if msi_object_id or msi_mi_res_id specified. - -Properties: - -- Config: msi_client_id -- Env Var: RCLONE_AZUREBLOB_MSI_CLIENT_ID -- Type: string -- Required: false - ---azureblob-msi-mi-res-id - -Azure resource ID of the user-assigned MSI to use, if any. - -Leave blank if msi_client_id or msi_object_id specified. - -Properties: - -- Config: msi_mi_res_id -- Env Var: RCLONE_AZUREBLOB_MSI_MI_RES_ID -- Type: string -- Required: false - ---azureblob-use-emulator - -Uses local storage emulator if provided as 'true'. - -Leave blank if using real azure storage endpoint. - -Properties: - -- Config: use_emulator -- Env Var: RCLONE_AZUREBLOB_USE_EMULATOR -- Type: bool -- Default: false - ---azureblob-endpoint - -Endpoint for the service. - -Leave blank normally. 
- -Properties: - -- Config: endpoint -- Env Var: RCLONE_AZUREBLOB_ENDPOINT -- Type: string -- Required: false - ---azureblob-upload-cutoff - -Cutoff for switching to chunked upload (<= 256 MiB) (deprecated). - -Properties: - -- Config: upload_cutoff -- Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF -- Type: string -- Required: false - ---azureblob-chunk-size - -Upload chunk size. - -Note that this is stored in memory and there may be up to "--transfers" -* "--azureblob-upload-concurrency" chunks stored at once in memory. - -Properties: - -- Config: chunk_size -- Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE -- Type: SizeSuffix -- Default: 4Mi - ---azureblob-upload-concurrency - -Concurrency for multipart uploads. - -This is the number of chunks of the same file that are uploaded -concurrently. - -If you are uploading small numbers of large files over high-speed links -and these uploads do not fully utilize your bandwidth, then increasing -this may help to speed up the transfers. - -In tests, upload speed increases almost linearly with upload -concurrency. For example to fill a gigabit pipe it may be necessary to -raise this to 64. Note that this will use more memory. - -Note that chunks are stored in memory and there may be up to -"--transfers" * "--azureblob-upload-concurrency" chunks stored at once -in memory. - -Properties: - -- Config: upload_concurrency -- Env Var: RCLONE_AZUREBLOB_UPLOAD_CONCURRENCY -- Type: int -- Default: 16 - ---azureblob-list-chunk - -Size of blob list. - -This sets the number of blobs requested in each listing chunk. Default -is the maximum, 5000. "List blobs" requests are permitted 2 minutes per -megabyte to complete. If an operation is taking longer than 2 minutes -per megabyte on average, it will time out ( source ). This can be used -to limit the number of blobs items to return, to avoid the time out. - -Properties: - -- Config: list_chunk -- Env Var: RCLONE_AZUREBLOB_LIST_CHUNK -- Type: int -- Default: 5000 - ---azureblob-access-tier - -Access tier of blob: hot, cool or archive. - -Archived blobs can be restored by setting access tier to hot or cool. -Leave blank if you intend to use default access tier, which is set at -account level - -If there is no "access tier" specified, rclone doesn't apply any tier. -rclone performs "Set Tier" operation on blobs while uploading, if -objects are not modified, specifying "access tier" to new one will have -no effect. If blobs are in "archive tier" at remote, trying to perform -data transfer operations from remote will not be allowed. User should -first restore by tiering blob to "Hot" or "Cool". - -Properties: - -- Config: access_tier -- Env Var: RCLONE_AZUREBLOB_ACCESS_TIER -- Type: string -- Required: false - ---azureblob-archive-tier-delete - -Delete archive tier blobs before overwriting. - -Archive tier blobs cannot be updated. So without this flag, if you -attempt to update an archive tier blob, then rclone will produce the -error: - - can't update archive tier blob without --azureblob-archive-tier-delete - -With this flag set then before rclone attempts to overwrite an archive -tier blob, it will delete the existing blob before uploading its -replacement. This has the potential for data loss if the upload fails -(unlike updating a normal blob) and also may cost more since deleting -archive tier blobs early may be chargable. 
- -Properties: - -- Config: archive_tier_delete -- Env Var: RCLONE_AZUREBLOB_ARCHIVE_TIER_DELETE -- Type: bool -- Default: false - ---azureblob-disable-checksum - -Don't store MD5 checksum with object metadata. - -Normally rclone will calculate the MD5 checksum of the input before -uploading it so it can add it to metadata on the object. This is great -for data integrity checking but can cause long delays for large files to -start uploading. - -Properties: - -- Config: disable_checksum -- Env Var: RCLONE_AZUREBLOB_DISABLE_CHECKSUM -- Type: bool -- Default: false - ---azureblob-memory-pool-flush-time - -How often internal memory buffer pools will be flushed. - -Uploads which requires additional buffers (f.e multipart) will use -memory pool for allocations. This option controls how often unused -buffers will be removed from the pool. - -Properties: - -- Config: memory_pool_flush_time -- Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_FLUSH_TIME -- Type: Duration -- Default: 1m0s - ---azureblob-memory-pool-use-mmap - -Whether to use mmap buffers in internal memory pool. - -Properties: - -- Config: memory_pool_use_mmap -- Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_USE_MMAP -- Type: bool -- Default: false - ---azureblob-encoding - -The encoding for the backend. - -See the encoding section in the overview for more info. - -Properties: - -- Config: encoding -- Env Var: RCLONE_AZUREBLOB_ENCODING -- Type: MultiEncoder -- Default: Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8 - ---azureblob-public-access - -Public access level of a container: blob or container. - -Properties: - -- Config: public_access -- Env Var: RCLONE_AZUREBLOB_PUBLIC_ACCESS -- Type: string -- Required: false -- Examples: - - "" - - The container and its blobs can be accessed only with an - authorized request. - - It's a default value. - - "blob" - - Blob data within this container can be read via anonymous - request. - - "container" - - Allow full public read access for container and blob data. - ---azureblob-directory-markers - -Upload an empty object with a trailing slash when a new directory is -created - -Empty folders are unsupported for bucket based remotes, this option -creates an empty object ending with "/", to persist the folder. - -This object also has the metadata "hdi_isfolder = true" to conform to -the Microsoft standard. - -Properties: - -- Config: directory_markers -- Env Var: RCLONE_AZUREBLOB_DIRECTORY_MARKERS -- Type: bool -- Default: false - ---azureblob-no-check-container - -If set, don't attempt to check the container exists or create it. - -This can be useful when trying to minimise the number of transactions -rclone does if you know the container exists already. - -Properties: - -- Config: no_check_container -- Env Var: RCLONE_AZUREBLOB_NO_CHECK_CONTAINER -- Type: bool -- Default: false - ---azureblob-no-head-object - -If set, do not do HEAD before GET when getting objects. - -Properties: - -- Config: no_head_object -- Env Var: RCLONE_AZUREBLOB_NO_HEAD_OBJECT -- Type: bool -- Default: false - -Custom upload headers - -You can set custom upload headers with the --header-upload flag. - -- Cache-Control -- Content-Disposition -- Content-Encoding -- Content-Language -- Content-Type - -Eg --header-upload "Content-Type: text/potato" - -Limitations - -MD5 sums are only uploaded with chunked files if the source has an MD5 -sum. This will always be the case for a local to azure copy. - -rclone about is not supported by the Microsoft Azure Blob storage -backend. 
Backends without this capability cannot determine free space -for an rclone mount or use policy mfs (most free space) as a member of -an rclone union remote. - -See List of backends that do not support rclone about and rclone about - -Azure Storage Emulator Support - -You can run rclone with the storage emulator (usually azurite). - -To do this, just set up a new remote with rclone config following the -instructions in the introduction and set use_emulator in the advanced -settings as true. You do not need to provide a default account name nor -an account key. But you can override them in the account and key -options. (Prior to v1.61 they were hard coded to azurite's -devstoreaccount1.) - -Also, if you want to access a storage emulator instance running on a -different machine, you can override the endpoint parameter in the -advanced settings, setting it to -http(s)://:/devstoreaccount1 (e.g. -http://10.254.2.5:10000/devstoreaccount1). - -Microsoft OneDrive - -Paths are specified as remote:path - -Paths may be as deep as required, e.g. remote:directory/subdirectory. - -Configuration - -The initial setup for OneDrive involves getting a token from Microsoft -which you need to do in your browser. rclone config walks you through -it. - -Here is an example of how to make a remote called remote. First run: - - rclone config - -This will guide you through an interactive setup process: - - e) Edit existing remote - n) New remote - d) Delete remote - r) Rename remote - c) Copy remote - s) Set configuration password - q) Quit config - e/n/d/r/c/s/q> n - name> remote - Type of storage to configure. - Enter a string value. Press Enter for the default (""). - Choose a number from below, or type in your own value - [snip] - XX / Microsoft OneDrive - \ "onedrive" - [snip] - Storage> onedrive - Microsoft App Client Id - Leave blank normally. - Enter a string value. Press Enter for the default (""). - client_id> - Microsoft App Client Secret - Leave blank normally. - Enter a string value. Press Enter for the default (""). - client_secret> - Edit advanced config? (y/n) - y) Yes - n) No - y/n> n - Remote config - Use web browser to automatically authenticate rclone with remote? - * Say Y if the machine running rclone has a web browser you can use - * Say N if running rclone on a (remote) machine without web browser access - If not sure try Y. If Y failed, try N. - y) Yes - n) No - y/n> y - If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth - Log in and authorize rclone for access - Waiting for code... - Got code - Choose a number from below, or type in an existing value - 1 / OneDrive Personal or Business - \ "onedrive" - 2 / Sharepoint site - \ "sharepoint" - 3 / Type in driveID - \ "driveid" - 4 / Type in SiteID - \ "siteid" - 5 / Search a Sharepoint site - \ "search" - Your choice> 1 - Found 1 drives, please select the one you want to use: - 0: OneDrive (business) id=b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk - Chose drive to use:> 0 - Found drive 'root' of type 'business', URL: https://org-my.sharepoint.com/personal/you/Documents - Is that okay? 
- y) Yes - n) No - y/n> y - -------------------- - [remote] - type = onedrive - token = {"access_token":"youraccesstoken","token_type":"Bearer","refresh_token":"yourrefreshtoken","expiry":"2018-08-26T22:39:52.486512262+08:00"} - drive_id = b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk - drive_type = business - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y - -See the remote setup docs for how to set it up on a machine with no -Internet browser available. - -Note that rclone runs a webserver on your local machine to collect the -token as returned from Microsoft. This only runs from the moment it -opens your browser to the moment you get back the verification code. -This is on http://127.0.0.1:53682/ and this it may require you to -unblock it temporarily if you are running a host firewall. - -Once configured you can then use rclone like this, - -List directories in top level of your OneDrive - - rclone lsd remote: - -List all the files in your OneDrive - - rclone ls remote: - -To copy a local directory to an OneDrive directory called backup - - rclone copy /home/source remote:backup - -Getting your own Client ID and Key - -rclone uses a default Client ID when talking to OneDrive, unless a -custom client_id is specified in the config. The default Client ID and -Key are shared by all rclone users when performing requests. - -You may choose to create and use your own Client ID, in case the default -one does not work well for you. For example, you might see throttling. - -Creating Client ID for OneDrive Personal - -To create your own Client ID, please follow these steps: - -1. Open - https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade - and then click New registration. -2. Enter a name for your app, choose account type - Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox), - select Web in Redirect URI, then type (do not copy and paste) - http://localhost:53682/ and click Register. Copy and keep the - Application (client) ID under the app name for later use. -3. Under manage select Certificates & secrets, click New client secret. - Enter a description (can be anything) and set Expires to 24 months. - Copy and keep that secret Value for later use (you won't be able to - see this value afterwards). -4. Under manage select API permissions, click Add a permission and - select Microsoft Graph then select delegated permissions. -5. Search and select the following permissions: Files.Read, - Files.ReadWrite, Files.Read.All, Files.ReadWrite.All, - offline_access, User.Read and Sites.Read.All (if custom access - scopes are configured, select the permissions accordingly). Once - selected click Add permissions at the bottom. - -Now the application is complete. Run rclone config to create or edit a -OneDrive remote. Supply the app ID and password as Client ID and Secret, -respectively. rclone will walk you through the remaining steps. - -The access_scopes option allows you to configure the permissions -requested by rclone. See Microsoft Docs for more information about the -different scopes. - -The Sites.Read.All permission is required if you need to search -SharePoint sites when configuring the remote. However, if that -permission is not assigned, you need to exclude Sites.Read.All from your -access scopes or set disable_site_permission option to true in the -advanced options. 
- -Creating Client ID for OneDrive Business - -The steps for OneDrive Personal may or may not work for OneDrive -Business, depending on the security settings of the organization. A -common error is that the publisher of the App is not verified. - -You may try to verify you account, or try to limit the App to your -organization only, as shown below. - -1. Make sure to create the App with your business account. -2. Follow the steps above to create an App. However, we need a - different account type here: - Accounts in this organizational directory only (*** - Single tenant). - Note that you can also change the account type after creating the - App. -3. Find the tenant ID of your organization. -4. In the rclone config, set auth_url to - https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/authorize. -5. In the rclone config, set token_url to - https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token. - -Note: If you have a special region, you may need a different host in -step 4 and 5. Here are some hints. - -Modification time and hashes - -OneDrive allows modification times to be set on objects accurate to 1 -second. These will be used to detect whether objects need syncing or -not. - -OneDrive Personal, OneDrive for Business and Sharepoint Server support -QuickXorHash. - -Before rclone 1.62 the default hash for Onedrive Personal was SHA1. For -rclone 1.62 and above the default for all Onedrive backends is -QuickXorHash. - -Starting from July 2023 SHA1 support is being phased out in Onedrive -Personal in favour of QuickXorHash. If necessary the ---onedrive-hash-type flag (or hash_type config option) can be used to -select SHA1 during the transition period if this is important your -workflow. - -For all types of OneDrive you can use the --checksum flag. - -Restricted filename characters - -In addition to the default restricted characters set the following -characters are also replaced: - - Character Value Replacement - ----------- ------- ------------- - " 0x22 " - * 0x2A * - : 0x3A : - < 0x3C < - > 0x3E > - ? 0x3F ? - \ 0x5C \ - | 0x7C | - -File names can also not end with the following characters. These only -get replaced if they are the last character in the name: - - Character Value Replacement - ----------- ------- ------------- - SP 0x20 ␠ - . 0x2E . - -File names can also not begin with the following characters. These only -get replaced if they are the first character in the name: - - Character Value Replacement - ----------- ------- ------------- - SP 0x20 ␠ - ~ 0x7E ~ - -Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON -strings. - -Deleting files - -Any files you delete with rclone will end up in the trash. Microsoft -doesn't provide an API to permanently delete files, nor to empty the -trash, so you will have to do that with one of Microsoft's apps or via -the OneDrive website. - -Standard options - -Here are the Standard options specific to onedrive (Microsoft OneDrive). - ---onedrive-client-id - -OAuth Client Id. - -Leave blank normally. - -Properties: - -- Config: client_id -- Env Var: RCLONE_ONEDRIVE_CLIENT_ID -- Type: string -- Required: false - ---onedrive-client-secret - -OAuth Client Secret. - -Leave blank normally. - -Properties: - -- Config: client_secret -- Env Var: RCLONE_ONEDRIVE_CLIENT_SECRET -- Type: string -- Required: false - ---onedrive-region - -Choose national cloud region for OneDrive. 
- -Properties: - -- Config: region -- Env Var: RCLONE_ONEDRIVE_REGION -- Type: string -- Default: "global" -- Examples: - - "global" - - Microsoft Cloud Global - - "us" - - Microsoft Cloud for US Government - - "de" - - Microsoft Cloud Germany - - "cn" - - Azure and Office 365 operated by Vnet Group in China - -Advanced options - -Here are the Advanced options specific to onedrive (Microsoft OneDrive). - ---onedrive-token - -OAuth Access Token as a JSON blob. - -Properties: - -- Config: token -- Env Var: RCLONE_ONEDRIVE_TOKEN -- Type: string -- Required: false - ---onedrive-auth-url - -Auth server URL. - -Leave blank to use the provider defaults. - -Properties: - -- Config: auth_url -- Env Var: RCLONE_ONEDRIVE_AUTH_URL -- Type: string -- Required: false - ---onedrive-token-url - -Token server url. - -Leave blank to use the provider defaults. - -Properties: - -- Config: token_url -- Env Var: RCLONE_ONEDRIVE_TOKEN_URL -- Type: string -- Required: false - ---onedrive-chunk-size - -Chunk size to upload files with - must be multiple of 320k (327,680 -bytes). - -Above this size files will be chunked - must be multiple of 320k -(327,680 bytes) and should not exceed 250M (262,144,000 bytes) else you -may encounter "Microsoft.SharePoint.Client.InvalidClientQueryException: -The request message is too big." Note that the chunks will be buffered -into memory. - -Properties: - -- Config: chunk_size -- Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE -- Type: SizeSuffix -- Default: 10Mi - ---onedrive-drive-id - -The ID of the drive to use. - -Properties: - -- Config: drive_id -- Env Var: RCLONE_ONEDRIVE_DRIVE_ID -- Type: string -- Required: false - ---onedrive-drive-type - -The type of the drive (personal | business | documentLibrary). - -Properties: - -- Config: drive_type -- Env Var: RCLONE_ONEDRIVE_DRIVE_TYPE -- Type: string -- Required: false - ---onedrive-root-folder-id - -ID of the root folder. - -This isn't normally needed, but in special circumstances you might know -the folder ID that you wish to access but not be able to get there -through a path traversal. - -Properties: - -- Config: root_folder_id -- Env Var: RCLONE_ONEDRIVE_ROOT_FOLDER_ID -- Type: string -- Required: false - ---onedrive-access-scopes - -Set scopes to be requested by rclone. - -Choose or manually enter a custom space separated list with all scopes, -that rclone should request. - -Properties: - -- Config: access_scopes -- Env Var: RCLONE_ONEDRIVE_ACCESS_SCOPES -- Type: SpaceSepList -- Default: Files.Read Files.ReadWrite Files.Read.All - Files.ReadWrite.All Sites.Read.All offline_access -- Examples: - - "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All - Sites.Read.All offline_access" - - Read and write access to all resources - - "Files.Read Files.Read.All Sites.Read.All offline_access" - - Read only access to all resources - - "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All - offline_access" - - Read and write access to all resources, without the ability - to browse SharePoint sites. - - Same as if disable_site_permission was set to true - ---onedrive-disable-site-permission - -Disable the request for Sites.Read.All permission. - -If set to true, you will no longer be able to search for a SharePoint -site when configuring drive ID, because rclone will not request -Sites.Read.All permission. Set it to true if your organization didn't -assign Sites.Read.All permission to the application, and your -organization disallows users to consent app permission request on their -own. 
- -Properties: - -- Config: disable_site_permission -- Env Var: RCLONE_ONEDRIVE_DISABLE_SITE_PERMISSION -- Type: bool -- Default: false - ---onedrive-expose-onenote-files - -Set to make OneNote files show up in directory listings. - -By default, rclone will hide OneNote files in directory listings because -operations like "Open" and "Update" won't work on them. But this -behaviour may also prevent you from deleting them. If you want to delete -OneNote files or otherwise want them to show up in directory listing, -set this option. - -Properties: - -- Config: expose_onenote_files -- Env Var: RCLONE_ONEDRIVE_EXPOSE_ONENOTE_FILES -- Type: bool -- Default: false - ---onedrive-server-side-across-configs - -Deprecated: use --server-side-across-configs instead. - -Allow server-side operations (e.g. copy) to work across different -onedrive configs. - -This will only work if you are copying between two OneDrive Personal -drives AND the files to copy are already shared between them. In other -cases, rclone will fall back to normal copy (which will be slightly -slower). - -Properties: - -- Config: server_side_across_configs -- Env Var: RCLONE_ONEDRIVE_SERVER_SIDE_ACROSS_CONFIGS -- Type: bool -- Default: false - ---onedrive-list-chunk - -Size of listing chunk. - -Properties: - -- Config: list_chunk -- Env Var: RCLONE_ONEDRIVE_LIST_CHUNK -- Type: int -- Default: 1000 - ---onedrive-no-versions - -Remove all versions on modifying operations. - -Onedrive for business creates versions when rclone uploads new files -overwriting an existing one and when it sets the modification time. - -These versions take up space out of the quota. - -This flag checks for versions after file upload and setting modification -time and removes all but the last version. - -NB Onedrive personal can't currently delete versions so don't use this -flag there. - -Properties: - -- Config: no_versions -- Env Var: RCLONE_ONEDRIVE_NO_VERSIONS -- Type: bool -- Default: false - ---onedrive-link-scope - -Set the scope of the links created by the link command. - -Properties: - -- Config: link_scope -- Env Var: RCLONE_ONEDRIVE_LINK_SCOPE -- Type: string -- Default: "anonymous" -- Examples: - - "anonymous" - - Anyone with the link has access, without needing to sign in. - - This may include people outside of your organization. - - Anonymous link support may be disabled by an administrator. - - "organization" - - Anyone signed into your organization (tenant) can use the - link to get access. - - Only available in OneDrive for Business and SharePoint. - ---onedrive-link-type - -Set the type of the links created by the link command. - -Properties: - -- Config: link_type -- Env Var: RCLONE_ONEDRIVE_LINK_TYPE -- Type: string -- Default: "view" -- Examples: - - "view" - - Creates a read-only link to the item. - - "edit" - - Creates a read-write link to the item. - - "embed" - - Creates an embeddable link to the item. - ---onedrive-link-password - -Set the password for links created by the link command. - -At the time of writing this only works with OneDrive personal paid -accounts. - -Properties: - -- Config: link_password -- Env Var: RCLONE_ONEDRIVE_LINK_PASSWORD -- Type: string -- Required: false - ---onedrive-hash-type - -Specify the hash in use for the backend. - -This specifies the hash type in use. If set to "auto" it will use the -default hash which is QuickXorHash. - -Before rclone 1.62 an SHA1 hash was used by default for Onedrive -Personal. For 1.62 and later the default is to use a QuickXorHash for -all onedrive types. 
If an SHA1 hash is desired then set this option -accordingly. - -From July 2023 QuickXorHash will be the only available hash for both -OneDrive for Business and OneDriver Personal. - -This can be set to "none" to not use any hashes. - -If the hash requested does not exist on the object, it will be returned -as an empty string which is treated as a missing hash by rclone. - -Properties: - -- Config: hash_type -- Env Var: RCLONE_ONEDRIVE_HASH_TYPE -- Type: string -- Default: "auto" -- Examples: - - "auto" - - Rclone chooses the best hash - - "quickxor" - - QuickXor - - "sha1" - - SHA1 - - "sha256" - - SHA256 - - "crc32" - - CRC32 - - "none" - - None - don't use any hashes - ---onedrive-av-override - -Allows download of files the server thinks has a virus. - -The onedrive/sharepoint server may check files uploaded with an Anti -Virus checker. If it detects any potential viruses or malware it will -block download of the file. - -In this case you will see a message like this - - server reports this file is infected with a virus - use --onedrive-av-override to download anyway: Infected (name of virus): 403 Forbidden: - -If you are 100% sure you want to download this file anyway then use the ---onedrive-av-override flag, or av_override = true in the config file. - -Properties: - -- Config: av_override -- Env Var: RCLONE_ONEDRIVE_AV_OVERRIDE -- Type: bool -- Default: false - ---onedrive-encoding - -The encoding for the backend. - -See the encoding section in the overview for more info. - -Properties: - -- Config: encoding -- Env Var: RCLONE_ONEDRIVE_ENCODING -- Type: MultiEncoder -- Default: - Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot - -Limitations - -If you don't use rclone for 90 days the refresh token will expire. This -will result in authorization problems. This is easy to fix by running -the rclone config reconnect remote: command to get a new token and -refresh token. - -Naming - -Note that OneDrive is case insensitive so you can't have a file called -"Hello.doc" and one called "hello.doc". - -There are quite a few characters that can't be in OneDrive file names. -These can't occur on Windows platforms, but on non-Windows platforms -they are common. Rclone will map these names to and from an identical -looking unicode equivalent. For example if a file has a ? in it will be -mapped to ? instead. - -File sizes - -The largest allowed file size is 250 GiB for both OneDrive Personal and -OneDrive for Business (Updated 13 Jan 2021). - -Path length - -The entire path, including the file name, must contain fewer than 400 -characters for OneDrive, OneDrive for Business and SharePoint Online. If -you are encrypting file and folder names with rclone, you may want to -pay attention to this limitation because the encrypted names are -typically longer than the original ones. - -Number of files - -OneDrive seems to be OK with at least 50,000 files in a folder, but at -100,000 rclone will get errors listing the directory like -couldn’t list files: UnknownError:. See #2707 for more info. - -An official document about the limitations for different types of -OneDrive can be found here. - -Versions - -Every change in a file OneDrive causes the service to create a new -version of the file. This counts against a users quota. For example -changing the modification time of a file creates a second version, so -the file apparently uses twice the space. 
- -For example the copy command is affected by this as rclone copies the -file and then afterwards sets the modification time to match the source -file which uses another version. - -You can use the rclone cleanup command (see below) to remove all old -versions. - -Or you can set the no_versions parameter to true and rclone will remove -versions after operations which create new versions. This takes extra -transactions so only enable it if you need it. - -Note At the time of writing Onedrive Personal creates versions (but not -for setting the modification time) but the API for removing them returns -"API not found" so cleanup and no_versions should not be used on -Onedrive Personal. - -Disabling versioning - -Starting October 2018, users will no longer be able to disable -versioning by default. This is because Microsoft has brought an update -to the mechanism. To change this new default setting, a PowerShell -command is required to be run by a SharePoint admin. If you are an -admin, you can run these commands in PowerShell to change that setting: - -1. Install-Module -Name Microsoft.Online.SharePoint.PowerShell (in case - you haven't installed this already) -2. Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking -3. Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM - (replacing YOURSITE, YOU, YOURSITE.COM with the actual values; this - will prompt for your credentials) -4. Set-SPOTenant -EnableMinimumVersionRequirement $False -5. Disconnect-SPOService (to disconnect from the server) - -Below are the steps for normal users to disable versioning. If you don't -see the "No Versioning" option, make sure the above requirements are -met. - -User Weropol has found a method to disable versioning on OneDrive - -1. Open the settings menu by clicking on the gear symbol at the top of - the OneDrive Business page. -2. Click Site settings. -3. Once on the Site settings page, navigate to Site Administration > - Site libraries and lists. -4. Click Customize "Documents". -5. Click General Settings > Versioning Settings. -6. Under Document Version History select the option No versioning. - Note: This will disable the creation of new file versions, but will - not remove any previous versions. Your documents are safe. -7. Apply the changes by clicking OK. -8. Use rclone to upload or modify files. (I also use the - --no-update-modtime flag) -9. Restore the versioning settings after using rclone. (Optional) - -Cleanup - -OneDrive supports rclone cleanup which causes rclone to look through -every file under the path supplied and delete all version but the -current version. Because this involves traversing all the files, then -querying each file for versions it can be quite slow. Rclone does ---checkers tests in parallel. The command also supports --interactive/i -or --dry-run which is a great way to see what it would do. 
- - rclone cleanup --interactive remote:path/subdir # interactively remove all old version for path/subdir - rclone cleanup remote:path/subdir # unconditionally remove all old version for path/subdir - -NB Onedrive personal can't currently delete versions - -Troubleshooting - -Excessive throttling or blocked on SharePoint - -If you experience excessive throttling or is being blocked on SharePoint -then it may help to set the user agent explicitly with a flag like this: ---user-agent "ISV|rclone.org|rclone/v1.55.1" - -The specific details can be found in the Microsoft document: Avoid -getting throttled or blocked in SharePoint Online - -Unexpected file size/hash differences on Sharepoint - -It is a known issue that Sharepoint (not OneDrive or OneDrive for -Business) silently modifies uploaded files, mainly Office files (.docx, -.xlsx, etc.), causing file size and hash checks to fail. There are also -other situations that will cause OneDrive to report inconsistent file -sizes. To use rclone with such affected files on Sharepoint, you may -disable these checks with the following command line arguments: - - --ignore-checksum --ignore-size - -Alternatively, if you have write access to the OneDrive files, it may be -possible to fix this problem for certain files, by attempting the steps -below. Open the web interface for OneDrive and find the affected files -(which will be in the error messages/log for rclone). Simply click on -each of these files, causing OneDrive to open them on the web. This will -cause each file to be converted in place to a format that is -functionally equivalent but which will no longer trigger the size -discrepancy. Once all problematic files are converted you will no longer -need the ignore options above. - -Replacing/deleting existing files on Sharepoint gets "item not found" - -It is a known issue that Sharepoint (not OneDrive or OneDrive for -Business) may return "item not found" errors when users try to replace -or delete uploaded files; this seems to mainly affect Office files -(.docx, .xlsx, etc.) and web files (.html, .aspx, etc.). As a -workaround, you may use the --backup-dir command line -argument so rclone moves the files to be replaced/deleted into a given -backup directory (instead of directly replacing/deleting them). For -example, to instruct rclone to move the files into the directory -rclone-backup-dir on backend mysharepoint, you may use: - - --backup-dir mysharepoint:rclone-backup-dir - -access_denied (AADSTS65005) - - Error: access_denied - Code: AADSTS65005 - Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned. - -This means that rclone can't use the OneDrive for Business API with your -account. You can't do much about it, maybe write an email to your -admins. - -However, there are other ways to interact with your OneDrive account. -Have a look at the WebDAV backend: https://rclone.org/webdav/#sharepoint - -invalid_grant (AADSTS50076) - - Error: invalid_grant - Code: AADSTS50076 - Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '...'. - -If you see the error above after enabling multi-factor authentication -for your account, you can fix it by refreshing your OAuth refresh token. 
-To do that, run rclone config, and choose to edit your OneDrive backend. -Then, you don't need to actually make any changes until you reach this -question: Already have a token - refresh?. For this question, answer y -and go through the process to refresh your token, just like the first -time the backend is configured. After this, rclone should work again for -this backend. - -Invalid request when making public links - -On Sharepoint and OneDrive for Business, rclone link may return an -"Invalid request" error. A possible cause is that the organisation admin -didn't allow public links to be made for the organisation/sharepoint -library. To fix the permissions as an admin, take a look at the docs: 1, -2. - -Can not access Shared with me files - -Shared with me files is not supported by rclone currently, but there is -a workaround: - -1. Visit https://onedrive.live.com -2. Right click a item in Shared, then click Add shortcut to My files in - the context [make_shortcut] -3. The shortcut will appear in My files, you can access it with rclone, - it behaves like a normal folder/file. [in_my_files] [rclone_mount] - -Live Photos uploaded from iOS (small video clips in .heic files) - -The iOS OneDrive app introduced upload and storage of Live Photos in -2020. The usage and download of these uploaded Live Photos is -unfortunately still work-in-progress and this introduces several issues -when copying, synchronising and mounting – both in rclone and in the -native OneDrive client on Windows. - -The root cause can easily be seen if you locate one of your Live Photos -in the OneDrive web interface. Then download the photo from the web -interface. You will then see that the size of downloaded .heic file is -smaller than the size displayed in the web interface. The downloaded -file is smaller because it only contains a single frame (still photo) -extracted from the Live Photo (movie) stored in OneDrive. - -The different sizes will cause rclone copy/sync to repeatedly recopy -unmodified photos something like this: - - DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667) - DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK - INFO : 20230203_123826234_iOS.heic: Copied (replaced existing) - -These recopies can be worked around by adding --ignore-size. Please note -that this workaround only syncs the still-picture not the movie clip, -and relies on modification dates being correctly updated on all files in -all situations. - -The different sizes will also cause rclone check to report size errors -something like this: - - ERROR : 20230203_123826234_iOS.heic: sizes differ - -These check errors can be suppressed by adding --ignore-size. - -The different sizes will also cause rclone mount to fail downloading -with an error something like this: - - ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF - -or like this when using --cache-mode=full: - - INFO : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: - ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: - -OpenDrive - -Paths are specified as remote:path - -Paths may be as deep as required, e.g. remote:directory/subdirectory. - -Configuration - -Here is an example of how to make a remote called remote. 
First run:

    rclone config

This will guide you through an interactive setup process:

-    n) New remote
-    d) Delete remote
-    q) Quit config
-    e/n/d/q> n
-    name> remote
-    Type of storage to configure.
-    Choose a number from below, or type in your own value
-    [snip]
-    XX / OpenDrive
-       \ "opendrive"
-    [snip]
-    Storage> opendrive
-    Username
-    username>
-    Password
-    y) Yes type in my own password
-    g) Generate random password
-    y/g> y
-    Enter the password:
-    password:
-    Confirm the password:
-    password:
-    --------------------
-    [remote]
-    username =
-    password = *** ENCRYPTED ***
-    --------------------
-    y) Yes this is OK
-    e) Edit this remote
-    d) Delete this remote
-    y/e/d> y
-
-List directories in top level of your OpenDrive
-
-    rclone lsd remote:
-
-List all the files in your OpenDrive
-
-    rclone ls remote:
-
-To copy a local directory to an OpenDrive directory called backup
-
-    rclone copy /home/source remote:backup
-
-Modified time and MD5SUMs
-
-OpenDrive allows modification times to be set on objects accurate to 1
-second. These will be used to detect whether objects need syncing or
-not.
-
-Restricted filename characters
-
-    Character   Value   Replacement
-    ----------- ------- -------------
-    NUL         0x00    ␀
-    /           0x2F    /
-    "           0x22    "
-    *           0x2A    *
-    :           0x3A    :
-    <           0x3C    <
-    >           0x3E    >
-    ?           0x3F    ?
-    \           0x5C    \
-    |           0x7C    |
-
-File names can also not begin or end with the following characters.
-These only get replaced if they are the first or last character in the
-name:
-
-    Character   Value   Replacement
-    ----------- ------- -------------
-    SP          0x20    ␠
-    HT          0x09    ␉
-    LF          0x0A    ␊
-    VT          0x0B    ␋
-    CR          0x0D    ␍
-
-Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON
-strings.
-
-Standard options
-
-Here are the Standard options specific to opendrive (OpenDrive).
-
---opendrive-username
-
-Username.
+        n) New remote
+        d) Delete remote
+        q) Quit config
+        e/n/d/q> n
+        name> remote
+        Type of storage to configure.
+        Choose a number from below, or type in your own value
+        [snip]
+        XX / OpenDrive
+           \ "opendrive"
+        [snip]
+        Storage> opendrive
+        Username
+        username>
+        Password
+        y) Yes type in my own password
+        g) Generate random password
+        y/g> y
+        Enter the password:
+        password:
+        Confirm the password:
+        password:
+        --------------------
+        [remote]
+        username =
+        password = *** ENCRYPTED ***
+        --------------------
+        y) Yes this is OK
+        e) Edit this remote
+        d) Delete this remote
+        y/e/d> y
+
+    List directories in top level of your OpenDrive
+
+        rclone lsd remote:
+
+    List all the files in your OpenDrive
+
+        rclone ls remote:
+
+    To copy a local directory to an OpenDrive directory called backup
+
+        rclone copy /home/source remote:backup
+
+    ### Modified time and MD5SUMs
+
+    OpenDrive allows modification times to be set on objects accurate to 1
+    second. These will be used to detect whether objects need syncing or
+    not.
+
+    ### Restricted filename characters
+
+    | Character | Value | Replacement |
+    | --------- |:-----:|:-----------:|
+    | NUL       | 0x00  | ␀           |
+    | /         | 0x2F  | /          |
+    | "         | 0x22  | "          |
+    | *         | 0x2A  | *          |
+    | :         | 0x3A  | :          |
+    | <         | 0x3C  | <          |
+    | >         | 0x3E  | >          |
+    | ?         | 0x3F  | ?          |
+    | \         | 0x5C  | \          |
+    | \|        | 0x7C  | |          |
+
+    File names can also not begin or end with the following characters.
+    These only get replaced if they are the first or last character in the name:
+
+    | Character | Value | Replacement |
+    | --------- |:-----:|:-----------:|
+    | SP        | 0x20  | ␠           |
+    | HT        | 0x09  | ␉           |
+    | LF        | 0x0A  | ␊           |
+    | VT        | 0x0B  | ␋           |
+    | CR        | 0x0D  | ␍           |
+
+    Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+    as they can't be used in JSON strings.
+ ### Standard options -Properties: + Here are the Standard options specific to opendrive (OpenDrive). -- Config: username -- Env Var: RCLONE_OPENDRIVE_USERNAME -- Type: string -- Required: true + #### --opendrive-username ---opendrive-password + Username. -Password. + Properties: -NB Input to this must be obscured - see rclone obscure. + - Config: username + - Env Var: RCLONE_OPENDRIVE_USERNAME + - Type: string + - Required: true -Properties: + #### --opendrive-password -- Config: password -- Env Var: RCLONE_OPENDRIVE_PASSWORD -- Type: string -- Required: true + Password. -Advanced options + **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). -Here are the Advanced options specific to opendrive (OpenDrive). + Properties: ---opendrive-encoding + - Config: password + - Env Var: RCLONE_OPENDRIVE_PASSWORD + - Type: string + - Required: true -The encoding for the backend. + ### Advanced options -See the encoding section in the overview for more info. + Here are the Advanced options specific to opendrive (OpenDrive). -Properties: + #### --opendrive-encoding -- Config: encoding -- Env Var: RCLONE_OPENDRIVE_ENCODING -- Type: MultiEncoder -- Default: - Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot + The encoding for the backend. ---opendrive-chunk-size + See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. -Files will be uploaded in chunks this size. + Properties: -Note that these chunks are buffered in memory so increasing them will -increase memory use. + - Config: encoding + - Env Var: RCLONE_OPENDRIVE_ENCODING + - Type: MultiEncoder + - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot -Properties: + #### --opendrive-chunk-size -- Config: chunk_size -- Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE -- Type: SizeSuffix -- Default: 10Mi + Files will be uploaded in chunks this size. -Limitations + Note that these chunks are buffered in memory so increasing them will + increase memory use. -Note that OpenDrive is case insensitive so you can't have a file called -"Hello.doc" and one called "hello.doc". + Properties: -There are quite a few characters that can't be in OpenDrive file names. -These can't occur on Windows platforms, but on non-Windows platforms -they are common. Rclone will map these names to and from an identical -looking unicode equivalent. For example if a file has a ? in it will be -mapped to ? instead. + - Config: chunk_size + - Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE + - Type: SizeSuffix + - Default: 10Mi -rclone about is not supported by the OpenDrive backend. Backends without -this capability cannot determine free space for an rclone mount or use -policy mfs (most free space) as a member of an rclone union remote. -See List of backends that do not support rclone about and rclone about -Oracle Object Storage + ## Limitations -Oracle Object Storage Overview + Note that OpenDrive is case insensitive so you can't have a + file called "Hello.doc" and one called "hello.doc". -Oracle Object Storage FAQ + There are quite a few characters that can't be in OpenDrive file + names. These can't occur on Windows platforms, but on non-Windows + platforms they are common. Rclone will map these names to and from an + identical looking unicode equivalent. For example if a file has a `?` + in it will be mapped to `?` instead. 
-Paths are specified as remote:bucket (or remote: for the lsd command.)
-You may put subdirectories in too, e.g. remote:bucket/path/to/dir.
-
-Configuration
-
-Here is an example of making an oracle object storage configuration.
-rclone config walks you through it.
-
-Here is an example of how to make a remote called remote. First run:
-
-    rclone config
-
-This will guide you through an interactive setup process:
-
-    n) New remote
-    d) Delete remote
-    r) Rename remote
-    c) Copy remote
-    s) Set configuration password
-    q) Quit config
-    e/n/d/r/c/s/q> n
-
-    Enter name for new remote.
-    name> remote
-
-    Option Storage.
-    Type of storage to configure.
-    Choose a number from below, or type in your own value.
-    [snip]
-    XX / Oracle Cloud Infrastructure Object Storage
-       \ (oracleobjectstorage)
-    Storage> oracleobjectstorage
-
+    `rclone about` is not supported by the OpenDrive backend. Backends without
+    this capability cannot determine free space for an rclone mount or
+    use policy `mfs` (most free space) as a member of an rclone union
+    remote.
+
+    See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
+
+    # Oracle Object Storage
+    - [Oracle Object Storage Overview](https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/objectstorageoverview.htm)
+    - [Oracle Object Storage FAQ](https://www.oracle.com/cloud/storage/object-storage/faq/)
+    - [Oracle Object Storage Limits](https://docs.oracle.com/en-us/iaas/Content/Resources/Assets/whitepapers/oci-object-storage-best-practices.pdf)
+
+    Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command.) You may put subdirectories in
+    too, e.g. `remote:bucket/path/to/dir`.
+
+    Sample command to transfer local artifacts to remote:bucket in oracle object storage:
+
+    `rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket`
+
+    ## Configuration
+
+    Here is an example of making an oracle object storage configuration. `rclone config` walks you
+    through it.
+
+    Here is an example of how to make a remote called `remote`. First run:
+
+        rclone config
+
+    This will guide you through an interactive setup process:
+
+        n) New remote
+        d) Delete remote
+        r) Rename remote
+        c) Copy remote
+        s) Set configuration password
+        q) Quit config
+        e/n/d/r/c/s/q> n
+
+        Enter name for new remote.
+        name> remote
+
+        Option Storage.
+        Type of storage to configure.
+        Choose a number from below, or type in your own value.
+        [snip]
+        XX / Oracle Cloud Infrastructure Object Storage
+           \ (oracleobjectstorage)
+        Storage> oracleobjectstorage
+
+        Option provider.
+        Choose your Auth Provider
+        Choose a number from below, or type in your own string value.
+        Press Enter for the default (env_auth).
+         1 / automatically pickup the credentials from runtime(env), first one to provide auth wins
+           \ (env_auth)
+           / use an OCI user and an API key for authentication.
+         2 | you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key.
+           | https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
+           \ (user_principal_auth)
+           / use instance principals to authorize an instance to make API calls.
+         3 | each instance has its own identity, and authenticates using the certificates that are read from instance metadata.
+           | https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
+           \ (instance_principal_auth)
+         4 / use resource principals to make API calls
+           \ (resource_principal_auth)
+         5 / no credentials needed, this is typically for reading public buckets
+           \ (no_auth)
+        provider> 2
+
+        Option namespace.
+        Object storage namespace
+        Enter a value.
+        namespace> idbamagbg734
+
+        Option compartment.
+        Object storage compartment OCID
+        Enter a value.
+        compartment> ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
+
+        Option region.
+        Object storage Region
+        Enter a value.
+        region> us-ashburn-1
+
+        Option endpoint.
+        Endpoint for Object storage API.
+        Leave blank to use the default endpoint for the region.
+        Enter a value. Press Enter to leave empty.
+        endpoint>
+
+        Option config_file.
+        Full Path to OCI config file
+        Choose a number from below, or type in your own string value.
+        Press Enter for the default (~/.oci/config).
+         1 / oci configuration file location
+           \ (~/.oci/config)
+        config_file> /etc/oci/dev.conf
+
+        Option config_profile.
+        Profile name inside OCI config file
+        Choose a number from below, or type in your own string value.
+        Press Enter for the default (Default).
+         1 / Use the default profile
+           \ (Default)
+        config_profile> Test
+
+        Edit advanced config?
+        y) Yes
+        n) No (default)
+        y/n> n
+
+        Configuration complete.
+        Options:
+        - type: oracleobjectstorage
+        - namespace: idbamagbg734
+        - compartment: ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
+        - region: us-ashburn-1
+        - provider: user_principal_auth
+        - config_file: /etc/oci/dev.conf
+        - config_profile: Test
+        Keep this "remote" remote?
+        y) Yes this is OK (default)
+        e) Edit this remote
+        d) Delete this remote
+        y/e/d> y
+
+    See all buckets
+
+        rclone lsd remote:
+
+    Create a new bucket
+
+        rclone mkdir remote:bucket
+
+    List the contents of a bucket
+
+        rclone ls remote:bucket
+        rclone ls remote:bucket --max-depth 1
+
+    ## Authentication Providers
+
+    OCI has various authentication methods. To learn more about authentication methods please refer to the [oci authentication
+    methods](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm) documentation.
+    These choices can be specified in the rclone config file.
+
+    Rclone supports the following OCI authentication providers:
+
+        User Principal
+        Instance Principal
+        Resource Principal
+        No authentication
+
+    ### User Principal
+
+    Sample rclone config file for Authentication Provider User Principal:
+
+        [oos]
+        type = oracleobjectstorage
+        namespace = id34
+        compartment = ocid1.compartment.oc1..aaba
+        region = us-ashburn-1
+        provider = user_principal_auth
+        config_file = /home/opc/.oci/config
+        config_profile = Default
+
+    Advantages:
+
+    - One can use this method from any server within OCI, on-premises or from another cloud provider.
+
+    Considerations:
+
+    - You need to configure the user's privileges / policy to allow access to object storage.
+    - Overhead of managing users and keys.
+    - If the user is deleted, the config file will no longer work and may cause automation regressions that use the user's credentials.
+
+    ### Instance Principal
+
+    An OCI compute instance can be authorized to use rclone by using its identity and certificates as an instance principal.
+    With this approach no credentials have to be stored and managed.
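+
+    As a sketch of the server-side setup this relies on (the dynamic group
+    name and compartment identifiers below are hypothetical), the dynamic
+    group matching rule and the IAM policy might look like:
+
+        # OCI dynamic group matching rule (hypothetical compartment OCID)
+        ANY {instance.compartment.id = 'ocid1.compartment.oc1..aaba'}
+
+        # OCI IAM policy granting that dynamic group access to object storage
+        Allow dynamic-group rclone-dynamic-group to manage objects in compartment mycompartment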
+
+    Sample rclone configuration file for Authentication Provider Instance Principal:
+
+        [opc@rclone ~]$ cat ~/.config/rclone/rclone.conf
+        [oos]
+        type = oracleobjectstorage
+        namespace = idfn
+        compartment = ocid1.compartment.oc1..aak7a
+        region = us-ashburn-1
+        provider = instance_principal_auth
+
+    Advantages:
+
+    - With instance principals, you don't need to configure user credentials and transfer/save them to disk in your compute
+      instances or rotate the credentials.
+    - You don't need to deal with users and keys.
+    - Greatly helps in automation as you don't have to manage access keys, user private keys, storing them in vault,
+      using kms etc.
+
+    Considerations:
+
+    - You need to configure a dynamic group having this instance as a member and add a policy allowing that
+      dynamic group to read object storage.
+    - Everyone who has access to this machine can execute the CLI commands.
+    - It is applicable for oci compute instances only. It cannot be used on external instances or resources.
+
+    ### Resource Principal
+
+    Resource principal auth is very similar to instance principal auth but used for resources that are not
+    compute instances such as [serverless functions](https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm).
+    To use resource principal, ensure the rclone process is started with these environment variables set:
+
+        export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
+        export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
+        export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem
+        export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token
+
+    Sample rclone configuration file for Authentication Provider Resource Principal:
+
+        [oos]
+        type = oracleobjectstorage
+        namespace = id34
+        compartment = ocid1.compartment.oc1..aaba
+        region = us-ashburn-1
+        provider = resource_principal_auth
+
+    ### No authentication
+
+    Public buckets do not require any authentication mechanism to read objects.
+    Sample rclone configuration file for No authentication:
+
+        [oos]
+        type = oracleobjectstorage
+        namespace = id34
+        compartment = ocid1.compartment.oc1..aaba
+        region = us-ashburn-1
+        provider = no_auth
+
+    ## Options
+
+    ### Modified time
+
+    The modified time is stored as metadata on the object as
+    `opc-meta-mtime` as floating point since the epoch, accurate to 1 ns.
+
+    If the modification time needs to be updated rclone will attempt to perform a server
+    side copy to update the modification time if the object can be copied in a single part.
+    In the case the object is larger than 5 GiB, the object will be uploaded rather than copied.
+
+    Note that reading this from the object takes an additional `HEAD` request as the metadata
+    isn't returned in object listings.
+
+    ### Multipart uploads
+
+    rclone supports multipart uploads with OOS which means that it can
+    upload files bigger than 5 GiB.
+
+    Note that files uploaded *both* with multipart upload *and* through
+    crypt remotes do not have MD5 sums.
+
+    rclone switches from single part uploads to multipart uploads at the
+    point specified by `--oos-upload-cutoff`. This can be a maximum of 5 GiB
+    and a minimum of 0 (ie always upload multipart files).
+
+    The chunk sizes used in the multipart upload are specified by
+    `--oos-chunk-size` and the number of chunks uploaded concurrently is
+    specified by `--oos-upload-concurrency`.
+
+    Multipart uploads will use `--transfers` * `--oos-upload-concurrency` *
+    `--oos-chunk-size` extra memory. Single part uploads do not use extra
+    memory.
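+
+    As a rough worked example (assuming rclone's global default of
+    `--transfers 4` together with the defaults documented below,
+    `--oos-upload-concurrency` 10 and `--oos-chunk-size` 5Mi), multipart
+    uploads can buffer around 4 * 10 * 5 MiB = 200 MiB:
+
+        # ~4 x 10 x 5 MiB = ~200 MiB of chunk buffers while uploading
+        rclone copy ./artifacts remote:bucket \
+            --transfers 4 --oos-upload-concurrency 10 --oos-chunk-size 5Mi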
+ + Single part transfers can be faster than multipart transfers or slower + depending on your latency from oos - the more latency, the more likely + single part transfers will be faster. + + Increasing `--oos-upload-concurrency` will increase throughput (8 would + be a sensible value) and increasing `--oos-chunk-size` also increases + throughput (16M would be sensible). Increasing either of these will + use more memory. The default values are high enough to gain most of + the possible performance without using too much memory. + + + ### Standard options + + Here are the Standard options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage). + + #### --oos-provider - Option provider. Choose your Auth Provider - Choose a number from below, or type in your own string value. - Press Enter for the default (env_auth). - 1 / automatically pickup the credentials from runtime(env), first one to provide auth wins - \ (env_auth) - / use an OCI user and an API key for authentication. - 2 | you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key. - | https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm - \ (user_principal_auth) - / use instance principals to authorize an instance to make API calls. - 3 | each instance has its own identity, and authenticates using the certificates that are read from instance metadata. - | https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm - \ (instance_principal_auth) - 4 / use resource principals to make API calls - \ (resource_principal_auth) - 5 / no credentials needed, this is typically for reading public buckets - \ (no_auth) - provider> 2 - Option namespace. + Properties: + + - Config: provider + - Env Var: RCLONE_OOS_PROVIDER + - Type: string + - Default: "env_auth" + - Examples: + - "env_auth" + - automatically pickup the credentials from runtime(env), first one to provide auth wins + - "user_principal_auth" + - use an OCI user and an API key for authentication. + - you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key. + - https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm + - "instance_principal_auth" + - use instance principals to authorize an instance to make API calls. + - each instance has its own identity, and authenticates using the certificates that are read from instance metadata. + - https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm + - "resource_principal_auth" + - use resource principals to make API calls + - "no_auth" + - no credentials needed, this is typically for reading public buckets + + #### --oos-namespace + Object storage namespace - Enter a value. - namespace> idbamagbg734 - Option compartment. + Properties: + + - Config: namespace + - Env Var: RCLONE_OOS_NAMESPACE + - Type: string + - Required: true + + #### --oos-compartment + Object storage compartment OCID - Enter a value. - compartment> ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba - Option region. + Properties: + + - Config: compartment + - Env Var: RCLONE_OOS_COMPARTMENT + - Provider: !no_auth + - Type: string + - Required: true + + #### --oos-region + Object storage Region - Enter a value. - region> us-ashburn-1 - Option endpoint. + Properties: + + - Config: region + - Env Var: RCLONE_OOS_REGION + - Type: string + - Required: true + + #### --oos-endpoint + Endpoint for Object storage API. 
+ Leave blank to use the default endpoint for the region. - Enter a value. Press Enter to leave empty. - endpoint> - Option config_file. - Full Path to OCI config file - Choose a number from below, or type in your own string value. - Press Enter for the default (~/.oci/config). - 1 / oci configuration file location - \ (~/.oci/config) - config_file> /etc/oci/dev.conf + Properties: - Option config_profile. - Profile name inside OCI config file - Choose a number from below, or type in your own string value. - Press Enter for the default (Default). - 1 / Use the default profile - \ (Default) - config_profile> Test + - Config: endpoint + - Env Var: RCLONE_OOS_ENDPOINT + - Type: string + - Required: false + + #### --oos-config-file + + Path to OCI config file + + Properties: + + - Config: config_file + - Env Var: RCLONE_OOS_CONFIG_FILE + - Provider: user_principal_auth + - Type: string + - Default: "~/.oci/config" + - Examples: + - "~/.oci/config" + - oci configuration file location + + #### --oos-config-profile + + Profile name inside the oci config file + + Properties: + + - Config: config_profile + - Env Var: RCLONE_OOS_CONFIG_PROFILE + - Provider: user_principal_auth + - Type: string + - Default: "Default" + - Examples: + - "Default" + - Use the default profile + + ### Advanced options + + Here are the Advanced options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage). + + #### --oos-storage-tier + + The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm + + Properties: + + - Config: storage_tier + - Env Var: RCLONE_OOS_STORAGE_TIER + - Type: string + - Default: "Standard" + - Examples: + - "Standard" + - Standard storage tier, this is the default tier + - "InfrequentAccess" + - InfrequentAccess storage tier + - "Archive" + - Archive storage tier + + #### --oos-upload-cutoff + + Cutoff for switching to chunked upload. + + Any files larger than this will be uploaded in chunks of chunk_size. + The minimum is 0 and the maximum is 5 GiB. + + Properties: + + - Config: upload_cutoff + - Env Var: RCLONE_OOS_UPLOAD_CUTOFF + - Type: SizeSuffix + - Default: 200Mi + + #### --oos-chunk-size + + Chunk size to use for uploading. + + When uploading files larger than upload_cutoff or files with unknown + size (e.g. from "rclone rcat" or uploaded with "rclone mount" they will be uploaded + as multipart uploads using this chunk size. + + Note that "upload_concurrency" chunks of this size are buffered + in memory per transfer. + + If you are transferring large files over high-speed links and you have + enough memory, then increasing this will speed up the transfers. + + Rclone will automatically increase the chunk size when uploading a + large file of known size to stay below the 10,000 chunks limit. + + Files of unknown size are uploaded with the configured + chunk_size. Since the default chunk size is 5 MiB and there can be at + most 10,000 chunks, this means that by default the maximum size of + a file you can stream upload is 48 GiB. If you wish to stream upload + larger files then you will need to increase chunk_size. + + Increasing the chunk size decreases the accuracy of the progress + statistics displayed with "-P" flag. + + + Properties: + + - Config: chunk_size + - Env Var: RCLONE_OOS_CHUNK_SIZE + - Type: SizeSuffix + - Default: 5Mi + + #### --oos-max-upload-parts + + Maximum number of parts in a multipart upload. 
+ + This option defines the maximum number of multipart chunks to use + when doing a multipart upload. + + OCI has max parts limit of 10,000 chunks. + + Rclone will automatically increase the chunk size when uploading a + large file of a known size to stay below this number of chunks limit. + + + Properties: + + - Config: max_upload_parts + - Env Var: RCLONE_OOS_MAX_UPLOAD_PARTS + - Type: int + - Default: 10000 + + #### --oos-upload-concurrency + + Concurrency for multipart uploads. + + This is the number of chunks of the same file that are uploaded + concurrently. + + If you are uploading small numbers of large files over high-speed links + and these uploads do not fully utilize your bandwidth, then increasing + this may help to speed up the transfers. + + Properties: + + - Config: upload_concurrency + - Env Var: RCLONE_OOS_UPLOAD_CONCURRENCY + - Type: int + - Default: 10 + + #### --oos-copy-cutoff + + Cutoff for switching to multipart copy. + + Any files larger than this that need to be server-side copied will be + copied in chunks of this size. + + The minimum is 0 and the maximum is 5 GiB. + + Properties: + + - Config: copy_cutoff + - Env Var: RCLONE_OOS_COPY_CUTOFF + - Type: SizeSuffix + - Default: 4.656Gi + + #### --oos-copy-timeout + + Timeout for copy. + + Copy is an asynchronous operation, specify timeout to wait for copy to succeed + + + Properties: + + - Config: copy_timeout + - Env Var: RCLONE_OOS_COPY_TIMEOUT + - Type: Duration + - Default: 1m0s + + #### --oos-disable-checksum + + Don't store MD5 checksum with object metadata. + + Normally rclone will calculate the MD5 checksum of the input before + uploading it so it can add it to metadata on the object. This is great + for data integrity checking but can cause long delays for large files + to start uploading. + + Properties: + + - Config: disable_checksum + - Env Var: RCLONE_OOS_DISABLE_CHECKSUM + - Type: bool + - Default: false + + #### --oos-encoding + + The encoding for the backend. + + See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + + Properties: + + - Config: encoding + - Env Var: RCLONE_OOS_ENCODING + - Type: MultiEncoder + - Default: Slash,InvalidUtf8,Dot + + #### --oos-leave-parts-on-error + + If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery. + + It should be set to true for resuming uploads across different sessions. + + WARNING: Storing parts of an incomplete multipart upload counts towards space usage on object storage and will add + additional costs if not cleaned up. + + + Properties: + + - Config: leave_parts_on_error + - Env Var: RCLONE_OOS_LEAVE_PARTS_ON_ERROR + - Type: bool + - Default: false + + #### --oos-attempt-resume-upload + + If true attempt to resume previously started multipart upload for the object. + This will be helpful to speed up multipart transfers by resuming uploads from past session. + + WARNING: If chunk size differs in resumed session from past incomplete session, then the resumed multipart upload is + aborted and a new multipart upload is started with the new chunk size. + + The flag leave_parts_on_error must be true to resume and optimize to skip parts that were already uploaded successfully. + + + Properties: + + - Config: attempt_resume_upload + - Env Var: RCLONE_OOS_ATTEMPT_RESUME_UPLOAD + - Type: bool + - Default: false + + #### --oos-no-check-bucket + + If set, don't attempt to check the bucket exists or create it. 
+ + This can be useful when trying to minimise the number of transactions + rclone does if you know the bucket exists already. + + It can also be needed if the user you are using does not have bucket + creation permissions. + + + Properties: + + - Config: no_check_bucket + - Env Var: RCLONE_OOS_NO_CHECK_BUCKET + - Type: bool + - Default: false + + #### --oos-sse-customer-key-file + + To use SSE-C, a file containing the base64-encoded string of the AES-256 encryption key associated + with the object. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.' + + Properties: + + - Config: sse_customer_key_file + - Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_FILE + - Type: string + - Required: false + - Examples: + - "" + - None + + #### --oos-sse-customer-key + + To use SSE-C, the optional header that specifies the base64-encoded 256-bit encryption key to use to + encrypt or decrypt the data. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is + needed. For more information, see Using Your Own Keys for Server-Side Encryption + (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm) + + Properties: + + - Config: sse_customer_key + - Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY + - Type: string + - Required: false + - Examples: + - "" + - None + + #### --oos-sse-customer-key-sha256 + + If using SSE-C, The optional header that specifies the base64-encoded SHA256 hash of the encryption + key. This value is used to check the integrity of the encryption key. see Using Your Own Keys for + Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm). + + Properties: + + - Config: sse_customer_key_sha256 + - Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_SHA256 + - Type: string + - Required: false + - Examples: + - "" + - None + + #### --oos-sse-kms-key-id + + if using your own master key in vault, this header specifies the + OCID (https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm) of a master encryption key used to call + the Key Management service to generate a data encryption key or to encrypt or decrypt a data encryption key. + Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed. + + Properties: + + - Config: sse_kms_key_id + - Env Var: RCLONE_OOS_SSE_KMS_KEY_ID + - Type: string + - Required: false + - Examples: + - "" + - None + + #### --oos-sse-customer-algorithm + + If using SSE-C, the optional header that specifies "AES256" as the encryption algorithm. + Object Storage supports "AES256" as the encryption algorithm. For more information, see + Using Your Own Keys for Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm). + + Properties: + + - Config: sse_customer_algorithm + - Env Var: RCLONE_OOS_SSE_CUSTOMER_ALGORITHM + - Type: string + - Required: false + - Examples: + - "" + - None + - "AES256" + - AES256 + + ## Backend commands + + Here are the commands specific to the oracleobjectstorage backend. + + Run them with + + rclone backend COMMAND remote: + + The help below will explain what arguments each command takes. + + See the [backend](https://rclone.org/commands/rclone_backend/) command for more + info on how to pass options and arguments. + + These can be run on a running backend using the rc command + [backend/command](https://rclone.org/rc/#backend-command). 
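+
+    For example (a sketch, assuming an rc server has been started with
+    `rclone rcd` and that `oos:` is a configured remote), the `cleanup`
+    command described below could be invoked over the rc interface:
+
+        rclone rc backend/command command=cleanup fs=oos:bucket -o max-age=24h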
+
+    ### rename
+
+    change the name of an object
+
+        rclone backend rename remote: [options] [<arguments>+]
+
+    This command can be used to rename an object.
+
+    Usage Examples:
+
+        rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name
+
+    ### list-multipart-uploads
+
+    List the unfinished multipart uploads
+
+        rclone backend list-multipart-uploads remote: [options] [<arguments>+]
+
+    This command lists the unfinished multipart uploads in JSON format.
+
+        rclone backend list-multipart-uploads oos:bucket/path/to/object
+
+    It returns a dictionary of buckets with values as lists of unfinished
+    multipart uploads.
+
+    You can call it with no bucket in which case it lists all buckets, with
+    a bucket or with a bucket and path.
+
+        {
+          "test-bucket": [
+            {
+              "namespace": "test-namespace",
+              "bucket": "test-bucket",
+              "object": "600m.bin",
+              "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
+              "timeCreated": "2022-07-29T06:21:16.595Z",
+              "storageTier": "Standard"
+            }
+          ]
+        }
+
+    ### cleanup
+
+    Remove unfinished multipart uploads.
+
+        rclone backend cleanup remote: [options] [<arguments>+]
+
+    This command removes unfinished multipart uploads of age greater than
+    max-age which defaults to 24 hours.
+
+    Note that you can use --interactive/-i or --dry-run with this command to see what
+    it would do.
+
+        rclone backend cleanup oos:bucket/path/to/object
+        rclone backend cleanup -o max-age=7w oos:bucket/path/to/object
+
+    Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
-
-    Edit advanced config?
-    y) Yes
-    n) No (default)
-    y/n> n
-
-    Configuration complete.
-    Options:
-    - type: oracleobjectstorage
-    - namespace: idbamagbg734
-    - compartment: ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
-    - region: us-ashburn-1
-    - provider: user_principal_auth
-    - config_file: /etc/oci/dev.conf
-    - config_profile: Test
-    Keep this "remote" remote?
-    y) Yes this is OK (default)
-    e) Edit this remote
-    d) Delete this remote
-    y/e/d> y
-
-See all buckets
-
-    rclone lsd remote:
-
-Create a new bucket
-
-    rclone mkdir remote:bucket
-
-List the contents of a bucket
-
-    rclone ls remote:bucket
-    rclone ls remote:bucket --max-depth 1
-
-OCI Authentication Provider
-
-OCI has various authentication methods. To learn more about
-authentication methods please refer oci authentication methods These
-choices can be specified in the rclone config file.
-
-Rclone supports the following OCI authentication provider.
-
-    User Principal
-    Instance Principal
-    Resource Principal
-    No authentication
-
-Authentication provider choice: User Principal
-
-Sample rclone config file for Authentication Provider User Principal:
-
-    [oos]
-    type = oracleobjectstorage
-    namespace = id34
-    compartment = ocid1.compartment.oc1..aaba
-    region = us-ashburn-1
-    provider = user_principal_auth
-    config_file = /home/opc/.oci/config
-    config_profile = Default
-
-Advantages: - One can use this method from any server within OCI or
-on-premises or from other cloud provider.
-
-Considerations: - you need to configure user’s privileges / policy to
-allow access to object storage - Overhead of managing users and keys. -
-If the user is deleted, the config file will no longer work and may
-cause automation regressions that use the user's credentials.
-
-Authentication provider choice: Instance Principal
-
-An OCI compute instance can be authorized to use rclone by using it's
-identity and certificates as an instance principal. With this approach
-no credentials have to be stored and managed.
-
-Sample rclone configuration file for Authentication Provider Instance
-Principal:
-
-    [opc@rclone ~]$ cat ~/.config/rclone/rclone.conf
-    [oos]
-    type = oracleobjectstorage
-    namespace = idfn
-    compartment = ocid1.compartment.oc1..aak7a
-    region = us-ashburn-1
-    provider = instance_principal_auth
-
-Advantages:
-
-- With instance principals, you don't need to configure user
-  credentials and transfer/ save it to disk in your compute instances
-  or rotate the credentials.
-- You don’t need to deal with users and keys.
-- Greatly helps in automation as you don't have to manage access keys,
-  user private keys, storing them in vault, using kms etc.
-
-Considerations:
-
-- You need to configure a dynamic group having this instance as member
-  and add policy to read object storage to that dynamic group.
-- Everyone who has access to this machine can execute the CLI
-  commands.
-- It is applicable for oci compute instances only. It cannot be used
-  on external instance or resources.
+
+    Options:
+
+    - "max-age": Max age of upload to delete
+
+    ## Tutorials
+    ### [Mounting Buckets](https://rclone.org/oracleobjectstorage/tutorial_mount/)
+
+    # QingStor
+
+    Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
+    command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
+
+    ## Configuration
+
+    Here is an example of making a QingStor configuration. First run
+
+        rclone config
+
+    This will guide you through an interactive setup process.
+
+        No remotes found, make a new one?
+        n) New remote
+        r) Rename remote
+        c) Copy remote
+        s) Set configuration password
+        q) Quit config
+        n/r/c/s/q> n
+        name> remote
+        Type of storage to configure.
+        Choose a number from below, or type in your own value
+        [snip]
+        XX / QingStor Object Storage
+           \ "qingstor"
+        [snip]
+        Storage> qingstor
+        Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+        Choose a number from below, or type in your own value
+         1 / Enter QingStor credentials in the next step
+           \ "false"
+         2 / Get QingStor credentials from the environment (env vars or IAM)
+           \ "true"
+        env_auth> 1
+        QingStor Access Key ID - leave blank for anonymous access or runtime credentials.
+        access_key_id> access_key
+        QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
+        secret_access_key> secret_key
+        Enter an endpoint URL to connect to QingStor API.
+        Leave blank will use the default value "https://qingstor.com:443"
+        endpoint>
+        Zone to connect to. Default is "pek3a".
+        Choose a number from below, or type in your own value
+           / The Beijing (China) Three Zone
+         1 | Needs location constraint pek3a.
+           \ "pek3a"
+           / The Shanghai (China) First Zone
+         2 | Needs location constraint sh1a.
+           \ "sh1a"
+        zone> 1
+        Number of connection retries.
+        Leave blank will use the default value "3".
+        connection_retries>
+        Remote config
+        --------------------
+        [remote]
+        env_auth = false
+        access_key_id = access_key
+        secret_access_key = secret_key
+        endpoint =
+        zone = pek3a
+        connection_retries =
+        --------------------
+        y) Yes this is OK
+        e) Edit this remote
+        d) Delete this remote
+        y/e/d> y
+
+    This remote is called `remote` and can now be used like this
+
+    See all buckets
+
+        rclone lsd remote:
+
+    Make a new bucket
+
+        rclone mkdir remote:bucket
+
+    List the contents of a bucket
+
+        rclone ls remote:bucket
+
+    Sync `/home/local/directory` to the remote bucket, deleting any excess
+    files in the bucket.
+
+        rclone sync --interactive /home/local/directory remote:bucket
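+
+    If you are unsure what a sync will change, the same command can first
+    be previewed with rclone's standard `--dry-run` protection, which
+    transfers and deletes nothing:
+
+        rclone sync --dry-run /home/local/directory remote:bucket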
-Authentication provider choice: Resource Principal
-
-Resource principal auth is very similar to instance principal auth but
-used for resources that are not compute instances such as serverless
-functions. To use resource principal ensure Rclone process is started
-with these environment variables set in its process.
-
-    export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
-    export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
-    export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem
-    export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token
-
-Sample rclone configuration file for Authentication Provider Resource
-Principal:
-
-    [oos]
-    type = oracleobjectstorage
-    namespace = id34
-    compartment = ocid1.compartment.oc1..aaba
-    region = us-ashburn-1
-    provider = resource_principal_auth
-
-Authentication provider choice: No authentication
-
-Public buckets do not require any authentication mechanism to read
-objects. Sample rclone configuration file for No authentication:
-
-    [oos]
-    type = oracleobjectstorage
-    namespace = id34
-    compartment = ocid1.compartment.oc1..aaba
-    region = us-ashburn-1
-    provider = no_auth
-
-Options
-
-Modified time
-
-The modified time is stored as metadata on the object as opc-meta-mtime
-as floating point since the epoch, accurate to 1 ns.
-
-If the modification time needs to be updated rclone will attempt to
-perform a server side copy to update the modification if the object can
-be copied in a single part. In the case the object is larger than 5Gb,
-the object will be uploaded rather than copied.
-
+    ### --fast-list
+
+    This remote supports `--fast-list` which allows you to use fewer
+    transactions in exchange for more memory. See the [rclone
+    docs](https://rclone.org/docs/#fast-list) for more details.
+
+    ### Multipart uploads
+
+    rclone supports multipart uploads with QingStor which means that it can
+    upload files bigger than 5 GiB. Note that files uploaded with multipart
+    upload don't have an MD5SUM.
+
+    Note that incomplete multipart uploads older than 24 hours can be
+    removed with `rclone cleanup remote:bucket` for just one bucket, or
+    `rclone cleanup remote:` for all buckets. QingStor does not ever
+    remove incomplete multipart uploads so it may be necessary to run this
+    from time to time.
+
+    ### Buckets and Zone
+
+    With QingStor you can list buckets (`rclone lsd`) using any zone,
+    but you can only access the content of a bucket from the zone it was
+    created in. If you attempt to access a bucket from the wrong zone,
+    you will get an error, `incorrect zone, the bucket is not in 'XXX'
+    zone`.
+
+    ### Authentication
+
+    There are two ways to supply `rclone` with a set of QingStor
+    credentials. In order of precedence:
+
+    - Directly in the rclone configuration file (as configured by `rclone config`)
+      - set `access_key_id` and `secret_access_key`
+    - Runtime configuration:
+      - set `env_auth` to `true` in the config file
+      - Exporting the following environment variables before running `rclone`
+        - Access Key ID: `QS_ACCESS_KEY_ID` or `QS_ACCESS_KEY`
+        - Secret Access Key: `QS_SECRET_ACCESS_KEY` or `QS_SECRET_KEY`
+
+    A minimal sketch of the runtime route is shown below, after the next
+    section.
+
+    ### Restricted filename characters
+
+    The control characters 0x00-0x1F and / are replaced as in the [default
+    restricted characters set](https://rclone.org/overview/#restricted-characters). Note
+    that 0x7F is not replaced.
+
+    Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+    as they can't be used in JSON strings.
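+
+    As a minimal sketch of the runtime configuration route described under
+    Authentication above (the key values are placeholders):
+
+        # rclone.conf: leave the keys blank and set env_auth = true
+        # [remote]
+        # type = qingstor
+        # env_auth = true
+
+        export QS_ACCESS_KEY_ID='AKIDEXAMPLE'        # placeholder value
+        export QS_SECRET_ACCESS_KEY='SECRETEXAMPLE'  # placeholder value
+        rclone lsd remote: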
-Note that reading this from the object takes an additional HEAD request -as the metadata isn't returned in object listings. -Multipart uploads + ### Standard options -rclone supports multipart uploads with OOS which means that it can -upload files bigger than 5 GiB. + Here are the Standard options specific to qingstor (QingCloud Object Storage). -Note that files uploaded both with multipart upload and through crypt -remotes do not have MD5 sums. + #### --qingstor-env-auth -rclone switches from single part uploads to multipart uploads at the -point specified by --oos-upload-cutoff. This can be a maximum of 5 GiB -and a minimum of 0 (ie always upload multipart files). + Get QingStor credentials from runtime. -The chunk sizes used in the multipart upload are specified by ---oos-chunk-size and the number of chunks uploaded concurrently is -specified by --oos-upload-concurrency. + Only applies if access_key_id and secret_access_key is blank. -Multipart uploads will use --transfers * --oos-upload-concurrency * ---oos-chunk-size extra memory. Single part uploads to not use extra -memory. + Properties: -Single part transfers can be faster than multipart transfers or slower -depending on your latency from oos - the more latency, the more likely -single part transfers will be faster. + - Config: env_auth + - Env Var: RCLONE_QINGSTOR_ENV_AUTH + - Type: bool + - Default: false + - Examples: + - "false" + - Enter QingStor credentials in the next step. + - "true" + - Get QingStor credentials from the environment (env vars or IAM). -Increasing --oos-upload-concurrency will increase throughput (8 would be -a sensible value) and increasing --oos-chunk-size also increases -throughput (16M would be sensible). Increasing either of these will use -more memory. The default values are high enough to gain most of the -possible performance without using too much memory. + #### --qingstor-access-key-id -Standard options + QingStor Access Key ID. -Here are the Standard options specific to oracleobjectstorage (Oracle -Cloud Infrastructure Object Storage). + Leave blank for anonymous access or runtime credentials. ---oos-provider + Properties: -Choose your Auth Provider + - Config: access_key_id + - Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID + - Type: string + - Required: false -Properties: + #### --qingstor-secret-access-key -- Config: provider -- Env Var: RCLONE_OOS_PROVIDER -- Type: string -- Default: "env_auth" -- Examples: - - "env_auth" - - automatically pickup the credentials from runtime(env), - first one to provide auth wins - - "user_principal_auth" - - use an OCI user and an API key for authentication. - - you’ll need to put in a config file your tenancy OCID, user - OCID, region, the path, fingerprint to an API key. - - https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm - - "instance_principal_auth" - - use instance principals to authorize an instance to make API - calls. - - each instance has its own identity, and authenticates using - the certificates that are read from instance metadata. - - https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm - - "resource_principal_auth" - - use resource principals to make API calls - - "no_auth" - - no credentials needed, this is typically for reading public - buckets + QingStor Secret Access Key (password). ---oos-namespace + Leave blank for anonymous access or runtime credentials. 
-Object storage namespace + Properties: -Properties: + - Config: secret_access_key + - Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY + - Type: string + - Required: false -- Config: namespace -- Env Var: RCLONE_OOS_NAMESPACE -- Type: string -- Required: true + #### --qingstor-endpoint ---oos-compartment - -Object storage compartment OCID - -Properties: - -- Config: compartment -- Env Var: RCLONE_OOS_COMPARTMENT -- Provider: !no_auth -- Type: string -- Required: true - ---oos-region - -Object storage Region - -Properties: - -- Config: region -- Env Var: RCLONE_OOS_REGION -- Type: string -- Required: true - ---oos-endpoint - -Endpoint for Object storage API. - -Leave blank to use the default endpoint for the region. - -Properties: - -- Config: endpoint -- Env Var: RCLONE_OOS_ENDPOINT -- Type: string -- Required: false - ---oos-config-file - -Path to OCI config file - -Properties: - -- Config: config_file -- Env Var: RCLONE_OOS_CONFIG_FILE -- Provider: user_principal_auth -- Type: string -- Default: "~/.oci/config" -- Examples: - - "~/.oci/config" - - oci configuration file location - ---oos-config-profile - -Profile name inside the oci config file - -Properties: - -- Config: config_profile -- Env Var: RCLONE_OOS_CONFIG_PROFILE -- Provider: user_principal_auth -- Type: string -- Default: "Default" -- Examples: - - "Default" - - Use the default profile - -Advanced options - -Here are the Advanced options specific to oracleobjectstorage (Oracle -Cloud Infrastructure Object Storage). - ---oos-storage-tier - -The storage class to use when storing new objects in storage. -https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm - -Properties: - -- Config: storage_tier -- Env Var: RCLONE_OOS_STORAGE_TIER -- Type: string -- Default: "Standard" -- Examples: - - "Standard" - - Standard storage tier, this is the default tier - - "InfrequentAccess" - - InfrequentAccess storage tier - - "Archive" - - Archive storage tier - ---oos-upload-cutoff - -Cutoff for switching to chunked upload. - -Any files larger than this will be uploaded in chunks of chunk_size. The -minimum is 0 and the maximum is 5 GiB. - -Properties: - -- Config: upload_cutoff -- Env Var: RCLONE_OOS_UPLOAD_CUTOFF -- Type: SizeSuffix -- Default: 200Mi - ---oos-chunk-size - -Chunk size to use for uploading. - -When uploading files larger than upload_cutoff or files with unknown -size (e.g. from "rclone rcat" or uploaded with "rclone mount" or google -photos or google docs) they will be uploaded as multipart uploads using -this chunk size. - -Note that "upload_concurrency" chunks of this size are buffered in -memory per transfer. - -If you are transferring large files over high-speed links and you have -enough memory, then increasing this will speed up the transfers. - -Rclone will automatically increase the chunk size when uploading a large -file of known size to stay below the 10,000 chunks limit. - -Files of unknown size are uploaded with the configured chunk_size. Since -the default chunk size is 5 MiB and there can be at most 10,000 chunks, -this means that by default the maximum size of a file you can stream -upload is 48 GiB. If you wish to stream upload larger files then you -will need to increase chunk_size. - -Increasing the chunk size decreases the accuracy of the progress -statistics displayed with "-P" flag. - -Properties: - -- Config: chunk_size -- Env Var: RCLONE_OOS_CHUNK_SIZE -- Type: SizeSuffix -- Default: 5Mi - ---oos-upload-concurrency - -Concurrency for multipart uploads. 
- -This is the number of chunks of the same file that are uploaded -concurrently. - -If you are uploading small numbers of large files over high-speed links -and these uploads do not fully utilize your bandwidth, then increasing -this may help to speed up the transfers. - -Properties: - -- Config: upload_concurrency -- Env Var: RCLONE_OOS_UPLOAD_CONCURRENCY -- Type: int -- Default: 10 - ---oos-copy-cutoff - -Cutoff for switching to multipart copy. - -Any files larger than this that need to be server-side copied will be -copied in chunks of this size. - -The minimum is 0 and the maximum is 5 GiB. - -Properties: - -- Config: copy_cutoff -- Env Var: RCLONE_OOS_COPY_CUTOFF -- Type: SizeSuffix -- Default: 4.656Gi - ---oos-copy-timeout - -Timeout for copy. - -Copy is an asynchronous operation, specify timeout to wait for copy to -succeed - -Properties: - -- Config: copy_timeout -- Env Var: RCLONE_OOS_COPY_TIMEOUT -- Type: Duration -- Default: 1m0s - ---oos-disable-checksum - -Don't store MD5 checksum with object metadata. - -Normally rclone will calculate the MD5 checksum of the input before -uploading it so it can add it to metadata on the object. This is great -for data integrity checking but can cause long delays for large files to -start uploading. - -Properties: - -- Config: disable_checksum -- Env Var: RCLONE_OOS_DISABLE_CHECKSUM -- Type: bool -- Default: false - ---oos-encoding - -The encoding for the backend. - -See the encoding section in the overview for more info. - -Properties: - -- Config: encoding -- Env Var: RCLONE_OOS_ENCODING -- Type: MultiEncoder -- Default: Slash,InvalidUtf8,Dot - ---oos-leave-parts-on-error - -If true avoid calling abort upload on a failure, leaving all -successfully uploaded parts on S3 for manual recovery. - -It should be set to true for resuming uploads across different sessions. - -WARNING: Storing parts of an incomplete multipart upload counts towards -space usage on object storage and will add additional costs if not -cleaned up. - -Properties: - -- Config: leave_parts_on_error -- Env Var: RCLONE_OOS_LEAVE_PARTS_ON_ERROR -- Type: bool -- Default: false - ---oos-no-check-bucket - -If set, don't attempt to check the bucket exists or create it. - -This can be useful when trying to minimise the number of transactions -rclone does if you know the bucket exists already. - -It can also be needed if the user you are using does not have bucket -creation permissions. - -Properties: - -- Config: no_check_bucket -- Env Var: RCLONE_OOS_NO_CHECK_BUCKET -- Type: bool -- Default: false - ---oos-sse-customer-key-file - -To use SSE-C, a file containing the base64-encoded string of the AES-256 -encryption key associated with the object. Please note only one of -sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.' - -Properties: - -- Config: sse_customer_key_file -- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_FILE -- Type: string -- Required: false -- Examples: - - "" - - None - ---oos-sse-customer-key - -To use SSE-C, the optional header that specifies the base64-encoded -256-bit encryption key to use to encrypt or decrypt the data. Please -note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id -is needed. 
For more information, see Using Your Own Keys for Server-Side -Encryption -(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm) - -Properties: - -- Config: sse_customer_key -- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY -- Type: string -- Required: false -- Examples: - - "" - - None - ---oos-sse-customer-key-sha256 - -If using SSE-C, The optional header that specifies the base64-encoded -SHA256 hash of the encryption key. This value is used to check the -integrity of the encryption key. see Using Your Own Keys for Server-Side -Encryption -(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm). - -Properties: - -- Config: sse_customer_key_sha256 -- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_SHA256 -- Type: string -- Required: false -- Examples: - - "" - - None - ---oos-sse-kms-key-id - -if using your own master key in vault, this header specifies the OCID -(https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm) -of a master encryption key used to call the Key Management service to -generate a data encryption key or to encrypt or decrypt a data -encryption key. Please note only one of -sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed. - -Properties: - -- Config: sse_kms_key_id -- Env Var: RCLONE_OOS_SSE_KMS_KEY_ID -- Type: string -- Required: false -- Examples: - - "" - - None - ---oos-sse-customer-algorithm - -If using SSE-C, the optional header that specifies "AES256" as the -encryption algorithm. Object Storage supports "AES256" as the encryption -algorithm. For more information, see Using Your Own Keys for Server-Side -Encryption -(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm). - -Properties: - -- Config: sse_customer_algorithm -- Env Var: RCLONE_OOS_SSE_CUSTOMER_ALGORITHM -- Type: string -- Required: false -- Examples: - - "" - - None - - "AES256" - - AES256 - -Backend commands - -Here are the commands specific to the oracleobjectstorage backend. - -Run them with - - rclone backend COMMAND remote: - -The help below will explain what arguments each command takes. - -See the backend command for more info on how to pass options and -arguments. - -These can be run on a running backend using the rc command -backend/command. - -rename - -change the name of an object - - rclone backend rename remote: [options] [+] - -This command can be used to rename a object. - -Usage Examples: - - rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name - -list-multipart-uploads - -List the unfinished multipart uploads - - rclone backend list-multipart-uploads remote: [options] [+] - -This command lists the unfinished multipart uploads in JSON format. - - rclone backend list-multipart-uploads oos:bucket/path/to/object - -It returns a dictionary of buckets with values as lists of unfinished -multipart uploads. - -You can call it with no bucket in which case it lists all bucket, with a -bucket or with a bucket and path. - - { - "test-bucket": [ - { - "namespace": "test-namespace", - "bucket": "test-bucket", - "object": "600m.bin", - "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8", - "timeCreated": "2022-07-29T06:21:16.595Z", - "storageTier": "Standard" - } - ] - -cleanup - -Remove unfinished multipart uploads. - - rclone backend cleanup remote: [options] [+] - -This command removes unfinished multipart uploads of age greater than -max-age which defaults to 24 hours. - -Note that you can use --interactive/-i or --dry-run with this command to -see what it would do. 
- - rclone backend cleanup oos:bucket/path/to/object - rclone backend cleanup -o max-age=7w oos:bucket/path/to/object - -Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc. - -Options: - -- "max-age": Max age of upload to delete - -QingStor - -Paths are specified as remote:bucket (or remote: for the lsd command.) -You may put subdirectories in too, e.g. remote:bucket/path/to/dir. - -Configuration - -Here is an example of making an QingStor configuration. First run - - rclone config - -This will guide you through an interactive setup process. - - No remotes found, make a new one? - n) New remote - r) Rename remote - c) Copy remote - s) Set configuration password - q) Quit config - n/r/c/s/q> n - name> remote - Type of storage to configure. - Choose a number from below, or type in your own value - [snip] - XX / QingStor Object Storage - \ "qingstor" - [snip] - Storage> qingstor - Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - Choose a number from below, or type in your own value - 1 / Enter QingStor credentials in the next step - \ "false" - 2 / Get QingStor credentials from the environment (env vars or IAM) - \ "true" - env_auth> 1 - QingStor Access Key ID - leave blank for anonymous access or runtime credentials. - access_key_id> access_key - QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials. - secret_access_key> secret_key Enter an endpoint URL to connection QingStor API. - Leave blank will use the default value "https://qingstor.com:443" - endpoint> - Zone connect to. Default is "pek3a". - Choose a number from below, or type in your own value - / The Beijing (China) Three Zone - 1 | Needs location constraint pek3a. - \ "pek3a" - / The Shanghai (China) First Zone - 2 | Needs location constraint sh1a. - \ "sh1a" - zone> 1 - Number of connection retry. - Leave blank will use the default value "3". - connection_retries> - Remote config - -------------------- - [remote] - env_auth = false - access_key_id = access_key - secret_access_key = secret_key - endpoint = - zone = pek3a - connection_retries = - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y -This remote is called remote and can now be used like this + Leave blank will use the default value "https://qingstor.com:443". -See all buckets + Properties: - rclone lsd remote: + - Config: endpoint + - Env Var: RCLONE_QINGSTOR_ENDPOINT + - Type: string + - Required: false -Make a new bucket + #### --qingstor-zone - rclone mkdir remote:bucket + Zone to connect to. -List the contents of a bucket + Default is "pek3a". - rclone ls remote:bucket + Properties: -Sync /home/local/directory to the remote bucket, deleting any excess -files in the bucket. + - Config: zone + - Env Var: RCLONE_QINGSTOR_ZONE + - Type: string + - Required: false + - Examples: + - "pek3a" + - The Beijing (China) Three Zone. + - Needs location constraint pek3a. + - "sh1a" + - The Shanghai (China) First Zone. + - Needs location constraint sh1a. + - "gd2a" + - The Guangdong (China) Second Zone. + - Needs location constraint gd2a. - rclone sync --interactive /home/local/directory remote:bucket + ### Advanced options ---fast-list + Here are the Advanced options specific to qingstor (QingCloud Object Storage). -This remote supports --fast-list which allows you to use fewer -transactions in exchange for more memory. See the rclone docs for more -details. 
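For example, a recursive listing of a large bucket can be made with
fewer round trips like this (a sketch; remote:bucket is a placeholder):

    rclone lsf -R --fast-list remote:bucket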
+ #### --qingstor-connection-retries -Multipart uploads + Number of connection retries. -rclone supports multipart uploads with QingStor which means that it can -upload files bigger than 5 GiB. Note that files uploaded with multipart -upload don't have an MD5SUM. + Properties: -Note that incomplete multipart uploads older than 24 hours can be -removed with rclone cleanup remote:bucket just for one bucket -rclone cleanup remote: for all buckets. QingStor does not ever remove -incomplete multipart uploads so it may be necessary to run this from -time to time. + - Config: connection_retries + - Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES + - Type: int + - Default: 3 -Buckets and Zone + #### --qingstor-upload-cutoff -With QingStor you can list buckets (rclone lsd) using any zone, but you -can only access the content of a bucket from the zone it was created in. -If you attempt to access a bucket from the wrong zone, you will get an -error, incorrect zone, the bucket is not in 'XXX' zone. + Cutoff for switching to chunked upload. -Authentication + Any files larger than this will be uploaded in chunks of chunk_size. + The minimum is 0 and the maximum is 5 GiB. -There are two ways to supply rclone with a set of QingStor credentials. -In order of precedence: + Properties: -- Directly in the rclone configuration file (as configured by - rclone config) - - set access_key_id and secret_access_key -- Runtime configuration: - - set env_auth to true in the config file - - Exporting the following environment variables before running - rclone - - Access Key ID: QS_ACCESS_KEY_ID or QS_ACCESS_KEY - - Secret Access Key: QS_SECRET_ACCESS_KEY or QS_SECRET_KEY + - Config: upload_cutoff + - Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF + - Type: SizeSuffix + - Default: 200Mi -Restricted filename characters + #### --qingstor-chunk-size -The control characters 0x00-0x1F and / are replaced as in the default -restricted characters set. Note that 0x7F is not replaced. + Chunk size to use for uploading. -Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON -strings. + When uploading files larger than upload_cutoff they will be uploaded + as multipart uploads using this chunk size. -Standard options + Note that "--qingstor-upload-concurrency" chunks of this size are buffered + in memory per transfer. -Here are the Standard options specific to qingstor (QingCloud Object -Storage). + If you are transferring large files over high-speed links and you have + enough memory, then increasing this will speed up the transfers. ---qingstor-env-auth + Properties: -Get QingStor credentials from runtime. + - Config: chunk_size + - Env Var: RCLONE_QINGSTOR_CHUNK_SIZE + - Type: SizeSuffix + - Default: 4Mi -Only applies if access_key_id and secret_access_key is blank. + #### --qingstor-upload-concurrency -Properties: + Concurrency for multipart uploads. -- Config: env_auth -- Env Var: RCLONE_QINGSTOR_ENV_AUTH -- Type: bool -- Default: false -- Examples: - - "false" - - Enter QingStor credentials in the next step. - - "true" - - Get QingStor credentials from the environment (env vars or - IAM). + This is the number of chunks of the same file that are uploaded + concurrently. ---qingstor-access-key-id + NB if you set this to > 1 then the checksums of multipart uploads + become corrupted (the uploads themselves are not corrupted though). -QingStor Access Key ID. 
+ If you are uploading small numbers of large files over high-speed links + and these uploads do not fully utilize your bandwidth, then increasing + this may help to speed up the transfers. -Leave blank for anonymous access or runtime credentials. + Properties: -Properties: + - Config: upload_concurrency + - Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY + - Type: int + - Default: 1 -- Config: access_key_id -- Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID -- Type: string -- Required: false + #### --qingstor-encoding ---qingstor-secret-access-key + The encoding for the backend. -QingStor Secret Access Key (password). + See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. -Leave blank for anonymous access or runtime credentials. + Properties: -Properties: + - Config: encoding + - Env Var: RCLONE_QINGSTOR_ENCODING + - Type: MultiEncoder + - Default: Slash,Ctl,InvalidUtf8 -- Config: secret_access_key -- Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY -- Type: string -- Required: false ---qingstor-endpoint -Enter an endpoint URL to connection QingStor API. + ## Limitations -Leave blank will use the default value "https://qingstor.com:443". + `rclone about` is not supported by the qingstor backend. Backends without + this capability cannot determine free space for an rclone mount or + use policy `mfs` (most free space) as a member of an rclone union + remote. -Properties: + See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) -- Config: endpoint -- Env Var: RCLONE_QINGSTOR_ENDPOINT -- Type: string -- Required: false + # Quatrix ---qingstor-zone + Quatrix by Maytech is [Quatrix Secure Compliant File Sharing | Maytech](https://www.maytech.net/products/quatrix-business). -Zone to connect to. + Paths are specified as `remote:path` -Default is "pek3a". + Paths may be as deep as required, e.g., `remote:directory/subdirectory`. -Properties: + The initial setup for Quatrix involves getting an API Key from Quatrix. You can get the API key in the user's profile at `https:///profile/api-keys` + or with the help of the API - https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create. -- Config: zone -- Env Var: RCLONE_QINGSTOR_ZONE -- Type: string -- Required: false -- Examples: - - "pek3a" - - The Beijing (China) Three Zone. - - Needs location constraint pek3a. - - "sh1a" - - The Shanghai (China) First Zone. - - Needs location constraint sh1a. - - "gd2a" - - The Guangdong (China) Second Zone. - - Needs location constraint gd2a. + See complete Swagger documentation for Quatrix - https://docs.maytech.net/quatrix/quatrix-api/api-explorer -Advanced options + ## Configuration -Here are the Advanced options specific to qingstor (QingCloud Object -Storage). + Here is an example of how to make a remote called `remote`. First run: ---qingstor-connection-retries + rclone config -Number of connection retries. + This will guide you through an interactive setup process: -Properties: +No remotes found, make a new one? n) New remote s) Set configuration +password q) Quit config n/s/q> n name> remote Type of storage to +configure. Choose a number from below, or type in your own value [snip] +XX / Quatrix by Maytech  "quatrix" [snip] Storage> quatrix API key for +accessing Quatrix account. api_key> your_api_key Host name of Quatrix +account. 
-
-- Config: connection_retries
-- Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES
-- Type: int
-- Default: 3
-
---qingstor-upload-cutoff
-
-Cutoff for switching to chunked upload.
-
-Any files larger than this will be uploaded in chunks of chunk_size. The
-minimum is 0 and the maximum is 5 GiB.
-
-Properties:
-
-- Config: upload_cutoff
-- Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF
-- Type: SizeSuffix
-- Default: 200Mi
-
---qingstor-chunk-size
-
-Chunk size to use for uploading.
-
-When uploading files larger than upload_cutoff they will be uploaded as
-multipart uploads using this chunk size.

+    ### Modified time and hashes

+    Quatrix allows modification times to be set on objects accurate to
+    1 microsecond. These will be used to detect whether objects need
+    syncing or not.

+    Quatrix does not support hashes, so you cannot use the --checksum
+    flag.

+    ### Restricted filename characters

+    File names in Quatrix are case sensitive and have limitations: the
+    maximum length of a filename is 255 characters and the minimum
+    length is 1. A file name cannot be equal to . or .. nor contain
+    / , \ or non-printable ascii.

+    ### Transfers

+    For files above 50 MiB rclone will use a chunked transfer. Rclone
+    will upload up to --transfers chunks at the same time (shared among
+    all multipart uploads). Chunks are buffered in memory, and the
+    minimal chunk size is 10_000_000 bytes by default, and it can be
+    changed in the advanced configuration, so increasing --transfers
+    will increase the memory use. The chunk size has a maximum size
+    limit, which is set to 100_000_000 bytes by default and can be
+    changed in the advanced configuration. The size of the uploaded
+    chunk will dynamically change depending on the upload speed. The
+    total memory use equals the number of transfers multiplied by the
+    minimal chunk size. If there is free memory allocated for the
+    upload (which equals the difference of maximal_summary_chunk_size
+    and minimal_chunk_size * transfers), the chunk size may increase in
+    case of high upload speed, and it can likewise decrease if the
+    upload speed drops. If no free memory is available, all chunks will
+    equal minimal_chunk_size.
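To make that arithmetic concrete: with the default minimal chunk size
of 10_000_000 bytes and --transfers 8, the chunks alone pin
8 * 10_000_000 = 80_000_000 bytes, leaving only 20_000_000 bytes of the
default 100_000_000 byte summary limit as room for chunks to grow.
Raising the summary limit gives fast links more headroom. A sketch with
illustrative values (the --quatrix-* flags are the command line
equivalents of the config options documented below):

    rclone copy /home/source remote:backup \
        --transfers 8 \
        --quatrix-minimal-chunk-size 10M \
        --quatrix-maximal-summary-chunk-size 160M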
-
-Note that "--qingstor-upload-concurrency" chunks of this size are
-buffered in memory per transfer.
-
-If you are transferring large files over high-speed links and you have
-enough memory, then increasing this will speed up the transfers.
-
-Properties:
-
-- Config: chunk_size
-- Env Var: RCLONE_QINGSTOR_CHUNK_SIZE
-- Type: SizeSuffix
-- Default: 4Mi
-
---qingstor-upload-concurrency
-
-Concurrency for multipart uploads.
-
-This is the number of chunks of the same file that are uploaded
-concurrently.
-
-NB if you set this to > 1 then the checksums of multipart uploads become
-corrupted (the uploads themselves are not corrupted though).
-
-If you are uploading small numbers of large files over high-speed links
-and these uploads do not fully utilize your bandwidth, then increasing
-this may help to speed up the transfers.
-
-Properties:
-
-- Config: upload_concurrency
-- Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY
-- Type: int
-- Default: 1
-
---qingstor-encoding
-
-The encoding for the backend.
-
-See the encoding section in the overview for more info.
-
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_QINGSTOR_ENCODING
-- Type: MultiEncoder
-- Default: Slash,Ctl,InvalidUtf8
-
-Limitations
-
-rclone about is not supported by the qingstor backend. Backends without
-this capability cannot determine free space for an rclone mount or use
-policy mfs (most free space) as a member of an rclone union remote.
-
-See List of backends that do not support rclone about and rclone about
-
-Sia
-
-Sia (sia.tech) is a decentralized cloud storage platform based on the
-blockchain technology. With rclone you can use it like any other remote
-filesystem or mount Sia folders locally. The technology behind it
-involves a number of new concepts such as Siacoins and Wallet,
-Blockchain and Consensus, Renting and Hosting, and so on. If you are new
-to it, you'd better first familiarize yourself using their excellent
-support documentation.

+    ### Deleting files

+    Files you delete with rclone will end up in Trash and be stored
+    there for 30 days. Quatrix also provides an API to permanently
+    delete files and an API to empty the Trash so that you can remove
+    files permanently from your account.

+    ### Standard options

+    Here are the Standard options specific to quatrix (Quatrix by Maytech).

+    #### --quatrix-api-key

+    API key for accessing Quatrix account

+    Properties:

+    - Config: api_key
+    - Env Var: RCLONE_QUATRIX_API_KEY
+    - Type: string
+    - Required: true

+    #### --quatrix-host

+    Host name of Quatrix account

+    Properties:

+    - Config: host
+    - Env Var: RCLONE_QUATRIX_HOST
+    - Type: string
+    - Required: true

+    ### Advanced options

+    Here are the Advanced options specific to quatrix (Quatrix by Maytech).

+    #### --quatrix-encoding

+    The encoding for the backend.

+    See the encoding section in the overview for more info.

+    Properties:

+    - Config: encoding
+    - Env Var: RCLONE_QUATRIX_ENCODING
+    - Type: MultiEncoder
+    - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot

+    #### --quatrix-effective-upload-time

+    Wanted upload time for one chunk

+    Properties:

+    - Config: effective_upload_time
+    - Env Var: RCLONE_QUATRIX_EFFECTIVE_UPLOAD_TIME
+    - Type: string
+    - Default: "4s"

+    #### --quatrix-minimal-chunk-size

+    The minimal size for one chunk

+    Properties:

+    - Config: minimal_chunk_size
+    - Env Var: RCLONE_QUATRIX_MINIMAL_CHUNK_SIZE
+    - Type: SizeSuffix
+    - Default: 9.537Mi

+    #### --quatrix-maximal-summary-chunk-size

+    The maximal summary for all chunks. It should not be less than
+    'transfers'*'minimal_chunk_size'

+    Properties:

+    - Config: maximal_summary_chunk_size
+    - Env Var: RCLONE_QUATRIX_MAXIMAL_SUMMARY_CHUNK_SIZE
+    - Type: SizeSuffix
+    - Default: 95.367Mi

+    #### --quatrix-hard-delete

+    Delete files permanently rather than putting them into the trash.

+    Properties:

+    - Config: hard_delete
+    - Env Var: RCLONE_QUATRIX_HARD_DELETE
+    - Type: bool
+    - Default: false
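For example, to bypass the Trash entirely when deleting, the flag form
of the option above can be passed on the command line (a sketch;
remote:old-backups is a placeholder path):

    rclone delete --quatrix-hard-delete remote:old-backups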
+ Wanted upload time for one chunk -Introduction + Properties: -Before you can use rclone with Sia, you will need to have a running copy -of Sia-UI or siad (the Sia daemon) locally on your computer or on local -network (e.g. a NAS). Please follow the Get started guide and install -one. + - Config: effective_upload_time - Env Var: + RCLONE_QUATRIX_EFFECTIVE_UPLOAD_TIME - Type: + string - Default: "4s" -rclone interacts with Sia network by talking to the Sia daemon via HTTP -API which is usually available on port 9980. By default you will run the -daemon locally on the same computer so it's safe to leave the API -password blank (the API URL will be http://127.0.0.1:9980 making -external access impossible). + #### --quatrix-minimal-chunk-size -However, if you want to access Sia daemon running on another node, for -example due to memory constraints or because you want to share single -daemon between several rclone and Sia-UI instances, you'll need to make -a few more provisions: - Ensure you have Sia daemon installed directly -or in a docker container because Sia-UI does not support this mode -natively. - Run it on externally accessible port, for example provide ---api-addr :9980 and --disable-api-security arguments on the daemon -command line. - Enforce API password for the siad daemon via environment -variable SIA_API_PASSWORD or text file named apipassword in the daemon -directory. - Set rclone backend option api_password taking it from above -locations. + The minimal size for one chunk + + Properties: + + - Config: minimal_chunk_size - Env Var: + RCLONE_QUATRIX_MINIMAL_CHUNK_SIZE - Type: + SizeSuffix - Default: 9.537Mi + + #### --quatrix-maximal-summary-chunk-size + + The maximal summary for all chunks. It should not + be less than 'transfers'*'minimal_chunk_size' + + Properties: + + - Config: maximal_summary_chunk_size - Env Var: + RCLONE_QUATRIX_MAXIMAL_SUMMARY_CHUNK_SIZE - Type: + SizeSuffix - Default: 95.367Mi + + #### --quatrix-hard-delete + + Delete files permanently rather than putting them + into the trash. + + Properties: + + - Config: hard_delete - Env Var: + RCLONE_QUATRIX_HARD_DELETE - Type: bool - Default: + false + + ## Storage usage + + The storage usage in Quatrix is restricted to the + account during the purchase. You can restrict any + user with a smaller storage limit. The account + limit is applied if the user has no custom storage + limit. Once you've reached the limit, the upload + of files will fail. This can be fixed by freeing + up the space or increasing the quota. + + ## Server-side operations + + Quatrix supports server-side operations (copy and + move). In case of conflict, files are overwritten + during server-side operation. + + # Sia + + Sia (sia.tech) is a decentralized cloud storage + platform based on the blockchain technology. With + rclone you can use it like any other remote + filesystem or mount Sia folders locally. The + technology behind it involves a number of new + concepts such as Siacoins and Wallet, Blockchain + and Consensus, Renting and Hosting, and so on. If + you are new to it, you'd better first familiarize + yourself using their excellent support + documentation. + + ## Introduction + + Before you can use rclone with Sia, you will need + to have a running copy of Sia-UI or siad (the Sia + daemon) locally on your computer or on local + network (e.g. a NAS). Please follow the Get + started guide and install one. + + rclone interacts with Sia network by talking to + the Sia daemon via HTTP API which is usually + available on port 9980. 
+    ## Configuration

+    Here is an example of how to make a sia remote called mySia.
+    First, run:

+        rclone config

+    This will guide you through an interactive setup process:

+    ```
+    No remotes found, make a new one?
+    n) New remote
+    s) Set configuration password
+    q) Quit config
+    n/s/q> n
+    name> mySia
+    Type of storage to configure.
+    Enter a string value. Press Enter for the default ("").
+    Choose a number from below, or type in your own value
+    ...
+    29 / Sia Decentralized Cloud
+       \ "sia"
+    ...
+    Storage> sia
+    Sia daemon API URL, like http://sia.daemon.host:9980.
+    Note that siad must run with --disable-api-security to open API port for other hosts (not recommended).
+    Keep default if Sia daemon runs on localhost.
+    Enter a string value. Press Enter for the default ("http://127.0.0.1:9980").
+    api_url> http://127.0.0.1:9980
+    Sia Daemon API Password.
+    Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory.
+    y) Yes type in my own password
+    g) Generate random password
+    n) No leave this optional password blank (default)
+    y/g/n> y
+    Enter the password:
+    password:
+    Confirm the password:
+    password:
+    Edit advanced config?
+    y) Yes
+    n) No (default)
+    y/n> n
+    --------------------
+    [mySia]
+    type = sia
+    api_url = http://127.0.0.1:9980
+    api_password = *** ENCRYPTED ***
+    --------------------
+    y) Yes this is OK (default)
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+    ```
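If you would rather not keep a config file at all, the RCLONE_SIA_*
environment variables documented below can stand in for one, combined
with rclone's on-the-fly `:sia:` remote syntax. A sketch - remember
that the password must be the obscured form:

    export RCLONE_SIA_API_URL=http://127.0.0.1:9980
    export RCLONE_SIA_API_PASSWORD=$(rclone obscure your-api-password)
    rclone lsd :sia: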
+    Once configured, you can then use `rclone` like this:

+    - List directories in the top level of your Sia storage

+          rclone lsd mySia:

+    - List all the files in your Sia storage

+          rclone ls mySia:

+    - Upload a local directory to the Sia directory called _backup_

+          rclone copy /home/source mySia:backup

+    ### Standard options

+    Here are the Standard options specific to sia (Sia Decentralized Cloud).

-
-Notes: 1. If your wallet is locked, rclone cannot unlock it
-automatically. You should either unlock it in advance by using Sia-UI or
-via command line siac wallet unlock. Alternatively you can make siad
-unlock your wallet automatically upon startup by running it with
-environment variable SIA_WALLET_PASSWORD. 2. If siad cannot find the
-SIA_API_PASSWORD variable or the apipassword file in the SIA_DIR
-directory, it will generate a random password and store in the text file
-named apipassword under YOUR_HOME/.sia/ directory on Unix or
-C:\Users\YOUR_HOME\AppData\Local\Sia\apipassword on Windows. Remember
-this when you configure password in rclone. 3. The only way to use siad
-without API password is to run it on localhost with command line
-argument --authorize-api=false, but this is insecure and strongly
-discouraged.
-
-Configuration
-
-Here is an example of how to make a sia remote called mySia. First, run:
-
-    rclone config
-
-This will guide you through an interactive setup process:
-
-    No remotes found, make a new one?
-    n) New remote
-    s) Set configuration password
-    q) Quit config
-    n/s/q> n
-    name> mySia
-    Type of storage to configure.
-    Enter a string value. Press Enter for the default ("").
-    Choose a number from below, or type in your own value
-    ...
-    29 / Sia Decentralized Cloud
-       \ "sia"
-    ...
-    Storage> sia
-    Sia daemon API URL, like http://sia.daemon.host:9980.
-    Note that siad must run with --disable-api-security to open API port for other hosts (not recommended).
-    Keep default if Sia daemon runs on localhost.
-    Enter a string value. Press Enter for the default ("http://127.0.0.1:9980").
-    api_url> http://127.0.0.1:9980
-    Sia Daemon API Password.
-    Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory.
-    y) Yes type in my own password
-    g) Generate random password
-    n) No leave this optional password blank (default)
-    y/g/n> y
-    Enter the password:
-    password:
-    Confirm the password:
-    password:
-    Edit advanced config?
-    y) Yes
-    n) No (default)
-    y/n> n
-    --------------------
-    [mySia]
-    type = sia
-    api_url = http://127.0.0.1:9980
-    api_password = *** ENCRYPTED ***
-    --------------------
-    y) Yes this is OK (default)
-    e) Edit this remote
-    d) Delete this remote
-    y/e/d> y
-
-Once configured, you can then use rclone like this:

+    #### --sia-api-url

+    Sia daemon API URL, like http://sia.daemon.host:9980.

+    Note that siad must run with --disable-api-security to open API port for other hosts (not recommended).
+    Keep default if Sia daemon runs on localhost.

+    Properties:

+    - Config: api_url
+    - Env Var: RCLONE_SIA_API_URL
+    - Type: string
+    - Default: "http://127.0.0.1:9980"

+    #### --sia-api-password

+    Sia Daemon API Password.

+    Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory.

+    **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
-- List directories in top level of your Sia storage + Properties: - rclone lsd mySia: + - Config: api_password + - Env Var: RCLONE_SIA_API_PASSWORD + - Type: string + - Required: false -- List all the files in your Sia storage + ### Advanced options - rclone ls mySia: + Here are the Advanced options specific to sia (Sia Decentralized Cloud). -- Upload a local directory to the Sia directory called backup + #### --sia-user-agent - rclone copy /home/source mySia:backup + Siad User Agent -Standard options + Sia daemon requires the 'Sia-Agent' user agent by default for security -Here are the Standard options specific to sia (Sia Decentralized Cloud). + Properties: ---sia-api-url + - Config: user_agent + - Env Var: RCLONE_SIA_USER_AGENT + - Type: string + - Default: "Sia-Agent" -Sia daemon API URL, like http://sia.daemon.host:9980. + #### --sia-encoding -Note that siad must run with --disable-api-security to open API port for -other hosts (not recommended). Keep default if Sia daemon runs on -localhost. + The encoding for the backend. -Properties: + See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. -- Config: api_url -- Env Var: RCLONE_SIA_API_URL -- Type: string -- Default: "http://127.0.0.1:9980" + Properties: ---sia-api-password + - Config: encoding + - Env Var: RCLONE_SIA_ENCODING + - Type: MultiEncoder + - Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot -Sia Daemon API Password. -Can be found in the apipassword file located in HOME/.sia/ or in the -daemon directory. -NB Input to this must be obscured - see rclone obscure. + ## Limitations -Properties: + - Modification times not supported + - Checksums not supported + - `rclone about` not supported + - rclone can work only with _Siad_ or _Sia-UI_ at the moment, + the **SkyNet daemon is not supported yet.** + - Sia does not allow control characters or symbols like question and pound + signs in file names. rclone will transparently [encode](https://rclone.org/overview/#encoding) + them for you, but you'd better be aware -- Config: api_password -- Env Var: RCLONE_SIA_API_PASSWORD -- Type: string -- Required: false + # Swift -Advanced options + Swift refers to [OpenStack Object Storage](https://docs.openstack.org/swift/latest/). + Commercial implementations of that being: -Here are the Advanced options specific to sia (Sia Decentralized Cloud). + * [Rackspace Cloud Files](https://www.rackspace.com/cloud/files/) + * [Memset Memstore](https://www.memset.com/cloud/storage/) + * [OVH Object Storage](https://www.ovh.co.uk/public-cloud/storage/object-storage/) + * [Oracle Cloud Storage](https://docs.oracle.com/en-us/iaas/integration/doc/configure-object-storage.html) + * [Blomp Cloud Storage](https://www.blomp.com/cloud-storage/) + * [IBM Bluemix Cloud ObjectStorage Swift](https://console.bluemix.net/docs/infrastructure/objectstorage-swift/index.html) ---sia-user-agent + Paths are specified as `remote:container` (or `remote:` for the `lsd` + command.) You may put subdirectories in too, e.g. `remote:container/path/to/dir`. -Siad User Agent + ## Configuration -Sia daemon requires the 'Sia-Agent' user agent by default for security + Here is an example of making a swift configuration. First run -Properties: + rclone config -- Config: user_agent -- Env Var: RCLONE_SIA_USER_AGENT -- Type: string -- Default: "Sia-Agent" - ---sia-encoding - -The encoding for the backend. - -See the encoding section in the overview for more info. 
- -Properties: - -- Config: encoding -- Env Var: RCLONE_SIA_ENCODING -- Type: MultiEncoder -- Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot - -Limitations - -- Modification times not supported -- Checksums not supported -- rclone about not supported -- rclone can work only with Siad or Sia-UI at the moment, the SkyNet - daemon is not supported yet. -- Sia does not allow control characters or symbols like question and - pound signs in file names. rclone will transparently encode them for - you, but you'd better be aware - -Swift - -Swift refers to OpenStack Object Storage. Commercial implementations of -that being: - -- Rackspace Cloud Files -- Memset Memstore -- OVH Object Storage -- Oracle Cloud Storage -- Blomp Cloud Storage -- IBM Bluemix Cloud ObjectStorage Swift - -Paths are specified as remote:container (or remote: for the lsd -command.) You may put subdirectories in too, e.g. -remote:container/path/to/dir. - -Configuration - -Here is an example of making a swift configuration. First run - - rclone config - -This will guide you through an interactive setup process. - - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n - name> remote - Type of storage to configure. - Choose a number from below, or type in your own value - [snip] - XX / OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH) - \ "swift" - [snip] - Storage> swift - Get swift credentials from environment variables in standard OpenStack form. - Choose a number from below, or type in your own value - 1 / Enter swift credentials in the next step - \ "false" - 2 / Get swift credentials from environment vars. Leave other fields blank if using this. - \ "true" - env_auth> true - User name to log in (OS_USERNAME). - user> - API key or password (OS_PASSWORD). - key> - Authentication URL for server (OS_AUTH_URL). - Choose a number from below, or type in your own value - 1 / Rackspace US - \ "https://auth.api.rackspacecloud.com/v1.0" - 2 / Rackspace UK - \ "https://lon.auth.api.rackspacecloud.com/v1.0" - 3 / Rackspace v2 - \ "https://identity.api.rackspacecloud.com/v2.0" - 4 / Memset Memstore UK - \ "https://auth.storage.memset.com/v1.0" - 5 / Memset Memstore UK v2 - \ "https://auth.storage.memset.com/v2.0" - 6 / OVH - \ "https://auth.cloud.ovh.net/v3" - 7 / Blomp Cloud Storage - \ "https://authenticate.ain.net" - auth> - User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). 
- user_id> - User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - domain> - Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - tenant> - Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - tenant_id> - Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - tenant_domain> - Region name - optional (OS_REGION_NAME) - region> - Storage URL - optional (OS_STORAGE_URL) - storage_url> - Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - auth_token> - AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - auth_version> - Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) - Choose a number from below, or type in your own value - 1 / Public (default, choose this if not sure) - \ "public" - 2 / Internal (use internal service net) - \ "internal" - 3 / Admin - \ "admin" - endpoint_type> - Remote config - -------------------- - [test] - env_auth = true - user = - key = - auth = - user_id = - domain = - tenant = - tenant_id = - tenant_domain = - region = - storage_url = - auth_token = - auth_version = - endpoint_type = - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y - -This remote is called remote and can now be used like this - -See all containers - - rclone lsd remote: - -Make a new container - - rclone mkdir remote:container - -List the contents of a container - - rclone ls remote:container - -Sync /home/local/directory to the remote container, deleting any excess -files in the container. - - rclone sync --interactive /home/local/directory remote:container - -Configuration from an OpenStack credentials file - -An OpenStack credentials file typically looks something something like -this (without the comments) - - export OS_AUTH_URL=https://a.provider.net/v2.0 - export OS_TENANT_ID=ffffffffffffffffffffffffffffffff - export OS_TENANT_NAME="1234567890123456" - export OS_USERNAME="123abc567xy" - echo "Please enter your OpenStack Password: " - read -sr OS_PASSWORD_INPUT - export OS_PASSWORD=$OS_PASSWORD_INPUT - export OS_REGION_NAME="SBG1" - if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi - -The config file needs to look something like this where $OS_USERNAME -represents the value of the OS_USERNAME variable - 123abc567xy in the -example above. - - [remote] - type = swift - user = $OS_USERNAME - key = $OS_PASSWORD - auth = $OS_AUTH_URL - tenant = $OS_TENANT_NAME - -Note that you may (or may not) need to set region too - try without -first. - -Configuration from the environment - -If you prefer you can configure rclone to use swift using a standard set -of OpenStack environment variables. - -When you run through the config, make sure you choose true for env_auth -and leave everything else blank. - -rclone will then set any empty config parameters from the environment -using standard OpenStack environment variables. There is a list of the -variables in the docs for the swift library. - -Using an alternate authentication method - -If your OpenStack installation uses a non-standard authentication method -that might not be yet supported by rclone or the underlying swift -library, you can authenticate externally (e.g. calling manually the -openstack commands to get a token). Then, you just need to pass the two -configuration variables auth_token and storage_url. If they are both -provided, the other variables are ignored. 
rclone will not try to -authenticate but instead assume it is already authenticated and use -these two variables to access the OpenStack installation. - -Using rclone without a config file - -You can use rclone with swift without a config file, if desired, like -this: - - source openstack-credentials-file - export RCLONE_CONFIG_MYREMOTE_TYPE=swift - export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true - rclone lsd myremote: - ---fast-list - -This remote supports --fast-list which allows you to use fewer -transactions in exchange for more memory. See the rclone docs for more -details. - ---update and --use-server-modtime - -As noted below, the modified time is stored on metadata on the object. -It is used by default for all operations that require checking the time -a file was last updated. It allows rclone to treat the remote more like -a true filesystem, but it is inefficient because it requires an extra -API call to retrieve the metadata. - -For many operations, the time the object was last uploaded to the remote -is sufficient to determine if it is "dirty". By using --update along -with --use-server-modtime, you can avoid the extra API call and simply -upload files whose local modtime is newer than the time it was last -uploaded. - -Modified time - -The modified time is stored as metadata on the object as -X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns. - -This is a de facto standard (used in the official python-swiftclient -amongst others) for storing the modification time for an object. - -Restricted filename characters - - Character Value Replacement - ----------- ------- ------------- - NUL 0x00 ␀ - / 0x2F / - -Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON -strings. - -Standard options - -Here are the Standard options specific to swift (OpenStack Swift -(Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)). - ---swift-env-auth - -Get swift credentials from environment variables in standard OpenStack -form. - -Properties: - -- Config: env_auth -- Env Var: RCLONE_SWIFT_ENV_AUTH -- Type: bool -- Default: false -- Examples: - - "false" - - Enter swift credentials in the next step. - - "true" - - Get swift credentials from environment vars. - - Leave other fields blank if using this. - ---swift-user - -User name to log in (OS_USERNAME). - -Properties: - -- Config: user -- Env Var: RCLONE_SWIFT_USER -- Type: string -- Required: false - ---swift-key - -API key or password (OS_PASSWORD). - -Properties: - -- Config: key -- Env Var: RCLONE_SWIFT_KEY -- Type: string -- Required: false - ---swift-auth - -Authentication URL for server (OS_AUTH_URL). - -Properties: - -- Config: auth -- Env Var: RCLONE_SWIFT_AUTH -- Type: string -- Required: false -- Examples: - - "https://auth.api.rackspacecloud.com/v1.0" - - Rackspace US - - "https://lon.auth.api.rackspacecloud.com/v1.0" - - Rackspace UK - - "https://identity.api.rackspacecloud.com/v2.0" - - Rackspace v2 - - "https://auth.storage.memset.com/v1.0" - - Memset Memstore UK - - "https://auth.storage.memset.com/v2.0" - - Memset Memstore UK v2 - - "https://auth.cloud.ovh.net/v3" - - OVH - - "https://authenticate.ain.net" - - Blomp Cloud Storage - ---swift-user-id - -User ID to log in - optional - most swift systems use user and leave -this blank (v3 auth) (OS_USER_ID). 
-
-Properties:
-
-- Config: user_id
-- Env Var: RCLONE_SWIFT_USER_ID
-- Type: string
-- Required: false
-
---swift-domain
-
-User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
-
-Properties:
-
-- Config: domain
-- Env Var: RCLONE_SWIFT_DOMAIN
-- Type: string
-- Required: false
-
---swift-tenant
-
-Tenant name - optional for v1 auth, this or tenant_id required otherwise
-(OS_TENANT_NAME or OS_PROJECT_NAME).
-
-Properties:
-
-- Config: tenant
-- Env Var: RCLONE_SWIFT_TENANT
-- Type: string
-- Required: false
-
---swift-tenant-id
-
-Tenant ID - optional for v1 auth, this or tenant required otherwise
-(OS_TENANT_ID).
-
-Properties:
-
-- Config: tenant_id
-- Env Var: RCLONE_SWIFT_TENANT_ID
-- Type: string
-- Required: false
-
---swift-tenant-domain
-
-Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME).
-
-Properties:
-
-- Config: tenant_domain
-- Env Var: RCLONE_SWIFT_TENANT_DOMAIN
-- Type: string
-- Required: false
-
---swift-region
-
-Region name - optional (OS_REGION_NAME).
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_SWIFT_REGION
-- Type: string
-- Required: false
-
---swift-storage-url
-
-Storage URL - optional (OS_STORAGE_URL).
-
-Properties:
-
-- Config: storage_url
-- Env Var: RCLONE_SWIFT_STORAGE_URL
-- Type: string
-- Required: false
-
---swift-auth-token
-
-Auth Token from alternate authentication - optional (OS_AUTH_TOKEN).
-
-Properties:
-
-- Config: auth_token
-- Env Var: RCLONE_SWIFT_AUTH_TOKEN
-- Type: string
-- Required: false
-
---swift-application-credential-id
-
-Application Credential ID (OS_APPLICATION_CREDENTIAL_ID).
-
-Properties:
-
-- Config: application_credential_id
-- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID
-- Type: string
-- Required: false
-
---swift-application-credential-name
-
-Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME).

+    This will guide you through an interactive setup process.

+        No remotes found, make a new one?
+        n) New remote
+        s) Set configuration password
+        q) Quit config
+        n/s/q> n
+        name> remote
+        Type of storage to configure.
+        Choose a number from below, or type in your own value
+        [snip]
+        XX / OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)
+           \ "swift"
+        [snip]
+        Storage> swift
+        Get swift credentials from environment variables in standard OpenStack form.
+        Choose a number from below, or type in your own value
+         1 / Enter swift credentials in the next step
+           \ "false"
+         2 / Get swift credentials from environment vars. Leave other fields blank if using this.
+           \ "true"
+        env_auth> true
+        User name to log in (OS_USERNAME).
+        user>
+        API key or password (OS_PASSWORD).
+        key>
+        Authentication URL for server (OS_AUTH_URL).
+        Choose a number from below, or type in your own value
+         1 / Rackspace US
+           \ "https://auth.api.rackspacecloud.com/v1.0"
+         2 / Rackspace UK
+           \ "https://lon.auth.api.rackspacecloud.com/v1.0"
+         3 / Rackspace v2
+           \ "https://identity.api.rackspacecloud.com/v2.0"
+         4 / Memset Memstore UK
+           \ "https://auth.storage.memset.com/v1.0"
+         5 / Memset Memstore UK v2
+           \ "https://auth.storage.memset.com/v2.0"
+         6 / OVH
+           \ "https://auth.cloud.ovh.net/v3"
+         7 / Blomp Cloud Storage
+           \ "https://authenticate.ain.net"
+        auth>
+        User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+        user_id>
+        User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+        domain>
+        Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+        tenant>
+        Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+        tenant_id>
+        Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+        tenant_domain>
+        Region name - optional (OS_REGION_NAME)
+        region>
+        Storage URL - optional (OS_STORAGE_URL)
+        storage_url>
+        Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+        auth_token>
+        AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+        auth_version>
+        Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
+        Choose a number from below, or type in your own value
+         1 / Public (default, choose this if not sure)
+           \ "public"
+         2 / Internal (use internal service net)
+           \ "internal"
+         3 / Admin
+           \ "admin"
+        endpoint_type>
+        Remote config
+        --------------------
+        [test]
+        env_auth = true
+        user =
+        key =
+        auth =
+        user_id =
+        domain =
+        tenant =
+        tenant_id =
+        tenant_domain =
+        region =
+        storage_url =
+        auth_token =
+        auth_version =
+        endpoint_type =
+        --------------------
+        y) Yes this is OK
+        e) Edit this remote
+        d) Delete this remote
+        y/e/d> y

+    This remote is called `remote` and can now be used like this

+    See all containers

+        rclone lsd remote:

+    Make a new container

+        rclone mkdir remote:container

+    List the contents of a container

+        rclone ls remote:container

+    Sync `/home/local/directory` to the remote container, deleting any
+    excess files in the container.

+        rclone sync --interactive /home/local/directory remote:container

+    ### Configuration from an OpenStack credentials file

+    An OpenStack credentials file typically looks something like this
+    (without the comments)

+        export OS_AUTH_URL=https://a.provider.net/v2.0
+        export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
+        export OS_TENANT_NAME="1234567890123456"
+        export OS_USERNAME="123abc567xy"
+        echo "Please enter your OpenStack Password: "
+        read -sr OS_PASSWORD_INPUT
+        export OS_PASSWORD=$OS_PASSWORD_INPUT
+        export OS_REGION_NAME="SBG1"
+        if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi

+    The config file needs to look something like this where `$OS_USERNAME`
+    represents the value of the `OS_USERNAME` variable - `123abc567xy` in
+    the example above.

+        [remote]
+        type = swift
+        user = $OS_USERNAME
+        key = $OS_PASSWORD
+        auth = $OS_AUTH_URL
+        tenant = $OS_TENANT_NAME

+    Note that you may (or may not) need to set `region` too - try without first.

+    ### Configuration from the environment

+    If you prefer you can configure rclone to use swift using a standard
+    set of OpenStack environment variables.

+    When you run through the config, make sure you choose `true` for
+    `env_auth` and leave everything else blank.

+    rclone will then set any empty config parameters from the environment
+    using standard OpenStack environment variables. There is [a list of
+    the variables](https://godoc.org/github.com/ncw/swift#Connection.ApplyEnvironment)
+    in the docs for the swift library.

+    ### Using an alternate authentication method

+    If your OpenStack installation uses a non-standard authentication method
+    that might not be yet supported by rclone or the underlying swift library,
+    you can authenticate externally (e.g. calling manually the `openstack`
+    commands to get a token). Then, you just need to pass the two
+    configuration variables ``auth_token`` and ``storage_url``.
+    If they are both provided, the other variables are ignored. rclone will
+    not try to authenticate but instead assume it is already authenticated
+    and use these two variables to access the OpenStack installation.

+    #### Using rclone without a config file

+    You can use rclone with swift without a config file, if desired, like
+    this:

+        source openstack-credentials-file
+        export RCLONE_CONFIG_MYREMOTE_TYPE=swift
+        export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
+        rclone lsd myremote:

+    ### --fast-list

+    This remote supports `--fast-list` which allows you to use fewer
+    transactions in exchange for more memory. See the [rclone
+    docs](https://rclone.org/docs/#fast-list) for more details.

+    ### --update and --use-server-modtime

+    As noted below, the modified time is stored on metadata on the object. It is
+    used by default for all operations that require checking the time a file was
+    last updated. It allows rclone to treat the remote more like a true filesystem,
+    but it is inefficient because it requires an extra API call to retrieve the
+    metadata.

+    For many operations, the time the object was last uploaded to the remote is
+    sufficient to determine if it is "dirty". By using `--update` along with
+    `--use-server-modtime`, you can avoid the extra API call and simply upload
+    files whose local modtime is newer than the time it was last uploaded.

+    ### Modified time

+    The modified time is stored as metadata on the object as
+    `X-Object-Meta-Mtime` as floating point since the epoch accurate to 1
+    ns.

+    This is a de facto standard (used in the official python-swiftclient
+    amongst others) for storing the modification time for an object.

+    ### Restricted filename characters

+    | Character | Value | Replacement |
+    | --------- |:-----:|:-----------:|
+    | NUL       | 0x00  | ␀           |
+    | /         | 0x2F  | ／          |

+    Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+    as they can't be used in JSON strings.

+    ### Standard options

+    Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).

+    #### --swift-env-auth

+    Get swift credentials from environment variables in standard OpenStack form.

+    Properties:

+    - Config: env_auth
+    - Env Var: RCLONE_SWIFT_ENV_AUTH
+    - Type: bool
+    - Default: false
+    - Examples:
+        - "false"
+            - Enter swift credentials in the next step.
+        - "true"
+            - Get swift credentials from environment vars.
+            - Leave other fields blank if using this.

+    #### --swift-user

+    User name to log in (OS_USERNAME).
-Properties: + ### --fast-list -- Config: application_credential_name -- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME -- Type: string -- Required: false + This remote supports `--fast-list` which allows you to use fewer + transactions in exchange for more memory. See the [rclone + docs](https://rclone.org/docs/#fast-list) for more details. ---swift-application-credential-secret + ### --update and --use-server-modtime -Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET). + As noted below, the modified time is stored on metadata on the object. It is + used by default for all operations that require checking the time a file was + last updated. It allows rclone to treat the remote more like a true filesystem, + but it is inefficient because it requires an extra API call to retrieve the + metadata. -Properties: + For many operations, the time the object was last uploaded to the remote is + sufficient to determine if it is "dirty". By using `--update` along with + `--use-server-modtime`, you can avoid the extra API call and simply upload + files whose local modtime is newer than the time it was last uploaded. -- Config: application_credential_secret -- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET -- Type: string -- Required: false + ### Modified time ---swift-auth-version + The modified time is stored as metadata on the object as + `X-Object-Meta-Mtime` as floating point since the epoch accurate to 1 + ns. -AuthVersion - optional - set to (1,2,3) if your auth URL has no version -(ST_AUTH_VERSION). + This is a de facto standard (used in the official python-swiftclient + amongst others) for storing the modification time for an object. -Properties: + ### Restricted filename characters -- Config: auth_version -- Env Var: RCLONE_SWIFT_AUTH_VERSION -- Type: int -- Default: 0 + | Character | Value | Replacement | + | --------- |:-----:|:-----------:| + | NUL | 0x00 | ␀ | + | / | 0x2F | / | ---swift-endpoint-type + Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), + as they can't be used in JSON strings. -Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE). -Properties: + ### Standard options -- Config: endpoint_type -- Env Var: RCLONE_SWIFT_ENDPOINT_TYPE -- Type: string -- Default: "public" -- Examples: - - "public" - - Public (default, choose this if not sure) - - "internal" - - Internal (use internal service net) - - "admin" - - Admin + Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)). ---swift-storage-policy + #### --swift-env-auth -The storage policy to use when creating a new container. + Get swift credentials from environment variables in standard OpenStack form. -This applies the specified storage policy when creating a new container. -The policy cannot be changed afterwards. The allowed configuration -values and their meaning depend on your Swift storage provider. + Properties: -Properties: + - Config: env_auth + - Env Var: RCLONE_SWIFT_ENV_AUTH + - Type: bool + - Default: false + - Examples: + - "false" + - Enter swift credentials in the next step. + - "true" + - Get swift credentials from environment vars. + - Leave other fields blank if using this. -- Config: storage_policy -- Env Var: RCLONE_SWIFT_STORAGE_POLICY -- Type: string -- Required: false -- Examples: - - "" - - Default - - "pcs" - - OVH Public Cloud Storage - - "pca" - - OVH Public Cloud Archive + #### --swift-user -Advanced options + User name to log in (OS_USERNAME). 
-Here are the Advanced options specific to swift (OpenStack Swift -(Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)). + Properties: ---swift-leave-parts-on-error + - Config: user + - Env Var: RCLONE_SWIFT_USER + - Type: string + - Required: false -If true avoid calling abort upload on a failure. + #### --swift-key -It should be set to true for resuming uploads across different sessions. + API key or password (OS_PASSWORD). -Properties: + Properties: -- Config: leave_parts_on_error -- Env Var: RCLONE_SWIFT_LEAVE_PARTS_ON_ERROR -- Type: bool -- Default: false + - Config: key + - Env Var: RCLONE_SWIFT_KEY + - Type: string + - Required: false ---swift-chunk-size + #### --swift-auth -Above this size files will be chunked into a _segments container. + Authentication URL for server (OS_AUTH_URL). -Above this size files will be chunked into a _segments container. The -default for this is 5 GiB which is its maximum value. + Properties: -Properties: + - Config: auth + - Env Var: RCLONE_SWIFT_AUTH + - Type: string + - Required: false + - Examples: + - "https://auth.api.rackspacecloud.com/v1.0" + - Rackspace US + - "https://lon.auth.api.rackspacecloud.com/v1.0" + - Rackspace UK + - "https://identity.api.rackspacecloud.com/v2.0" + - Rackspace v2 + - "https://auth.storage.memset.com/v1.0" + - Memset Memstore UK + - "https://auth.storage.memset.com/v2.0" + - Memset Memstore UK v2 + - "https://auth.cloud.ovh.net/v3" + - OVH + - "https://authenticate.ain.net" + - Blomp Cloud Storage -- Config: chunk_size -- Env Var: RCLONE_SWIFT_CHUNK_SIZE -- Type: SizeSuffix -- Default: 5Gi + #### --swift-user-id ---swift-no-chunk + User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). -Don't chunk files during streaming upload. + Properties: -When doing streaming uploads (e.g. using rcat or mount) setting this -flag will cause the swift backend to not upload chunked files. + - Config: user_id + - Env Var: RCLONE_SWIFT_USER_ID + - Type: string + - Required: false -This will limit the maximum upload size to 5 GiB. However non chunked -files are easier to deal with and have an MD5SUM. + #### --swift-domain -Rclone will still chunk files bigger than chunk_size when doing normal -copy operations. + User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) -Properties: + Properties: -- Config: no_chunk -- Env Var: RCLONE_SWIFT_NO_CHUNK -- Type: bool -- Default: false + - Config: domain + - Env Var: RCLONE_SWIFT_DOMAIN + - Type: string + - Required: false ---swift-no-large-objects + #### --swift-tenant -Disable support for static and dynamic large objects + Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME). -Swift cannot transparently store files bigger than 5 GiB. There are two -schemes for doing that, static or dynamic large objects, and the API -does not allow rclone to determine whether a file is a static or dynamic -large object without doing a HEAD on the object. Since these need to be -treated differently, this means rclone has to issue HEAD requests for -objects for example when reading checksums. + Properties: -When no_large_objects is set, rclone will assume that there are no -static or dynamic large objects stored. This means it can stop doing the -extra HEAD calls which in turn increases performance greatly especially -when doing a swift to swift transfer with --checksum set. 
+ - Config: tenant + - Env Var: RCLONE_SWIFT_TENANT + - Type: string + - Required: false -Setting this option implies no_chunk and also that no files will be -uploaded in chunks, so files bigger than 5 GiB will just fail on upload. + #### --swift-tenant-id -If you set this option and there are static or dynamic large objects, -then this will give incorrect hashes for them. Downloads will succeed, -but other operations such as Remove and Copy will fail. + Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID). -Properties: + Properties: -- Config: no_large_objects -- Env Var: RCLONE_SWIFT_NO_LARGE_OBJECTS -- Type: bool -- Default: false + - Config: tenant_id + - Env Var: RCLONE_SWIFT_TENANT_ID + - Type: string + - Required: false ---swift-encoding + #### --swift-tenant-domain -The encoding for the backend. + Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME). -See the encoding section in the overview for more info. + Properties: -Properties: + - Config: tenant_domain + - Env Var: RCLONE_SWIFT_TENANT_DOMAIN + - Type: string + - Required: false -- Config: encoding -- Env Var: RCLONE_SWIFT_ENCODING -- Type: MultiEncoder -- Default: Slash,InvalidUtf8 + #### --swift-region -Limitations + Region name - optional (OS_REGION_NAME). -The Swift API doesn't return a correct MD5SUM for segmented files -(Dynamic or Static Large Objects) so rclone won't check or use the -MD5SUM for these. + Properties: -Troubleshooting + - Config: region + - Env Var: RCLONE_SWIFT_REGION + - Type: string + - Required: false -Rclone gives Failed to create file system for "remote:": Bad Request + #### --swift-storage-url -Due to an oddity of the underlying swift library, it gives a "Bad -Request" error rather than a more sensible error when the authentication -fails for Swift. + Storage URL - optional (OS_STORAGE_URL). -So this most likely means your username / password is wrong. You can -investigate further with the --dump-bodies flag. + Properties: -This may also be caused by specifying the region when you shouldn't have -(e.g. OVH). + - Config: storage_url + - Env Var: RCLONE_SWIFT_STORAGE_URL + - Type: string + - Required: false -Rclone gives Failed to create file system: Response didn't have storage url and auth token + #### --swift-auth-token -This is most likely caused by forgetting to specify your tenant when -setting up a swift remote. + Auth Token from alternate authentication - optional (OS_AUTH_TOKEN). -OVH Cloud Archive + Properties: -To use rclone with OVH cloud archive, first use rclone config to set up -a swift backend with OVH, choosing pca as the storage_policy. + - Config: auth_token + - Env Var: RCLONE_SWIFT_AUTH_TOKEN + - Type: string + - Required: false -Uploading Objects + #### --swift-application-credential-id -Uploading objects to OVH cloud archive is no different to object -storage, you just simply run the command you like (move, copy or sync) -to upload the objects. Once uploaded the objects will show in a "Frozen" -state within the OVH control panel. + Application Credential ID (OS_APPLICATION_CREDENTIAL_ID). -Retrieving Objects + Properties: -To retrieve objects use rclone copy as normal. 
If the objects are in a -frozen state then rclone will ask for them all to be unfrozen and it -will wait at the end of the output with a message like the following: + - Config: application_credential_id + - Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID + - Type: string + - Required: false -2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s) + #### --swift-application-credential-name -Rclone will wait for the time specified then retry the copy. + Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME). -pCloud + Properties: -Paths are specified as remote:path + - Config: application_credential_name + - Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME + - Type: string + - Required: false -Paths may be as deep as required, e.g. remote:directory/subdirectory. + #### --swift-application-credential-secret -Configuration + Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET). -The initial setup for pCloud involves getting a token from pCloud which -you need to do in your browser. rclone config walks you through it. + Properties: -Here is an example of how to make a remote called remote. First run: + - Config: application_credential_secret + - Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET + - Type: string + - Required: false - rclone config + #### --swift-auth-version -This will guide you through an interactive setup process: + AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION). - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n - name> remote - Type of storage to configure. - Choose a number from below, or type in your own value - [snip] - XX / Pcloud - \ "pcloud" - [snip] - Storage> pcloud - Pcloud App Client Id - leave blank normally. - client_id> - Pcloud App Client Secret - leave blank normally. - client_secret> - Remote config - Use web browser to automatically authenticate rclone with remote? - * Say Y if the machine running rclone has a web browser you can use - * Say N if running rclone on a (remote) machine without web browser access - If not sure try Y. If Y failed, try N. - y) Yes - n) No - y/n> y - If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth - Log in and authorize rclone for access - Waiting for code... - Got code - -------------------- - [remote] - client_id = - client_secret = - token = {"access_token":"XXX","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"} - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y + Properties: -See the remote setup docs for how to set it up on a machine with no -Internet browser available. + - Config: auth_version + - Env Var: RCLONE_SWIFT_AUTH_VERSION + - Type: int + - Default: 0 -Note that rclone runs a webserver on your local machine to collect the -token as returned from pCloud. This only runs from the moment it opens -your browser to the moment you get back the verification code. This is -on http://127.0.0.1:53682/ and this it may require you to unblock it -temporarily if you are running a host firewall. + #### --swift-endpoint-type -Once configured you can then use rclone like this, + Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE). 
-List directories in top level of your pCloud + Properties: - rclone lsd remote: + - Config: endpoint_type + - Env Var: RCLONE_SWIFT_ENDPOINT_TYPE + - Type: string + - Default: "public" + - Examples: + - "public" + - Public (default, choose this if not sure) + - "internal" + - Internal (use internal service net) + - "admin" + - Admin -List all the files in your pCloud + #### --swift-storage-policy - rclone ls remote: + The storage policy to use when creating a new container. -To copy a local directory to a pCloud directory called backup + This applies the specified storage policy when creating a new + container. The policy cannot be changed afterwards. The allowed + configuration values and their meaning depend on your Swift storage + provider. - rclone copy /home/source remote:backup + Properties: -Modified time and hashes + - Config: storage_policy + - Env Var: RCLONE_SWIFT_STORAGE_POLICY + - Type: string + - Required: false + - Examples: + - "" + - Default + - "pcs" + - OVH Public Cloud Storage + - "pca" + - OVH Public Cloud Archive -pCloud allows modification times to be set on objects accurate to 1 -second. These will be used to detect whether objects need syncing or -not. In order to set a Modification time pCloud requires the object be -re-uploaded. + ### Advanced options -pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and -SHA256 hashes in the EU region, so you can use the --checksum flag. + Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)). -Restricted filename characters + #### --swift-leave-parts-on-error -In addition to the default restricted characters set the following -characters are also replaced: + If true avoid calling abort upload on a failure. - Character Value Replacement - ----------- ------- ------------- - \ 0x5C \ + It should be set to true for resuming uploads across different sessions. -Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON -strings. + Properties: -Deleting files + - Config: leave_parts_on_error + - Env Var: RCLONE_SWIFT_LEAVE_PARTS_ON_ERROR + - Type: bool + - Default: false -Deleted files will be moved to the trash. Your subscription level will -determine how long items stay in the trash. rclone cleanup can be used -to empty the trash. + #### --swift-chunk-size -Emptying the trash + Above this size files will be chunked into a _segments container. -Due to an API limitation, the rclone cleanup command will only work if -you set your username and password in the advanced options for this -backend. Since we generally want to avoid storing user passwords in the -rclone config file, we advise you to only set this up if you need the -rclone cleanup command to work. + Above this size files will be chunked into a _segments container. The + default for this is 5 GiB which is its maximum value. -Root folder ID + Properties: -You can set the root_folder_id for rclone. This is the directory -(identified by its Folder ID) that rclone considers to be the root of -your pCloud drive. + - Config: chunk_size + - Env Var: RCLONE_SWIFT_CHUNK_SIZE + - Type: SizeSuffix + - Default: 5Gi -Normally you will leave this blank and rclone will determine the correct -root to use itself. + #### --swift-no-chunk -However you can set this to restrict rclone to a specific folder -hierarchy. + Don't chunk files during streaming upload. -In order to do this you will have to find the Folder ID of the directory -you wish rclone to display. 
This will be the folder field of the URL -when you open the relevant folder in the pCloud web interface. + When doing streaming uploads (e.g. using rcat or mount) setting this + flag will cause the swift backend to not upload chunked files. -So if the folder you want rclone to use has a URL which looks like -https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid -in the browser, then you use 5xxxxxxxx8 as the root_folder_id in the -config. + This will limit the maximum upload size to 5 GiB. However non chunked + files are easier to deal with and have an MD5SUM. -Standard options + Rclone will still chunk files bigger than chunk_size when doing normal + copy operations. -Here are the Standard options specific to pcloud (Pcloud). + Properties: ---pcloud-client-id + - Config: no_chunk + - Env Var: RCLONE_SWIFT_NO_CHUNK + - Type: bool + - Default: false -OAuth Client Id. + #### --swift-no-large-objects -Leave blank normally. + Disable support for static and dynamic large objects -Properties: + Swift cannot transparently store files bigger than 5 GiB. There are + two schemes for doing that, static or dynamic large objects, and the + API does not allow rclone to determine whether a file is a static or + dynamic large object without doing a HEAD on the object. Since these + need to be treated differently, this means rclone has to issue HEAD + requests for objects for example when reading checksums. -- Config: client_id -- Env Var: RCLONE_PCLOUD_CLIENT_ID -- Type: string -- Required: false + When `no_large_objects` is set, rclone will assume that there are no + static or dynamic large objects stored. This means it can stop doing + the extra HEAD calls which in turn increases performance greatly + especially when doing a swift to swift transfer with `--checksum` set. ---pcloud-client-secret + Setting this option implies `no_chunk` and also that no files will be + uploaded in chunks, so files bigger than 5 GiB will just fail on + upload. -OAuth Client Secret. + If you set this option and there *are* static or dynamic large objects, + then this will give incorrect hashes for them. Downloads will succeed, + but other operations such as Remove and Copy will fail. -Leave blank normally. -Properties: + Properties: -- Config: client_secret -- Env Var: RCLONE_PCLOUD_CLIENT_SECRET -- Type: string -- Required: false + - Config: no_large_objects + - Env Var: RCLONE_SWIFT_NO_LARGE_OBJECTS + - Type: bool + - Default: false -Advanced options + #### --swift-encoding -Here are the Advanced options specific to pcloud (Pcloud). + The encoding for the backend. ---pcloud-token + See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. -OAuth Access Token as a JSON blob. + Properties: -Properties: + - Config: encoding + - Env Var: RCLONE_SWIFT_ENCODING + - Type: MultiEncoder + - Default: Slash,InvalidUtf8 -- Config: token -- Env Var: RCLONE_PCLOUD_TOKEN -- Type: string -- Required: false ---pcloud-auth-url -Auth server URL. + ## Limitations -Leave blank to use the provider defaults. + The Swift API doesn't return a correct MD5SUM for segmented files + (Dynamic or Static Large Objects) so rclone won't check or use the + MD5SUM for these. 
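+ As a minimal, hedged sketch of working around this (the local path and
+ container name below are only examples), you can verify a transfer that
+ may contain segmented files by comparing sizes only, or by re-downloading
+ the data for a full byte-by-byte comparison:
+
+     # Sizes only - cheap, but will not catch silent corruption
+     rclone check /local/path remote:container --size-only
+
+     # Download and compare actual content - slow but thorough
+     rclone check /local/path remote:container --download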
-Properties:

+ ## Troubleshooting

-- Config: auth_url
-- Env Var: RCLONE_PCLOUD_AUTH_URL
-- Type: string
-- Required: false
+ ### Rclone gives Failed to create file system for "remote:": Bad Request

--pcloud-token-url
+ Due to an oddity of the underlying swift library, it gives a "Bad
+ Request" error rather than a more sensible error when the
+ authentication fails for Swift.

-Token server url.
+ So this most likely means your username / password is wrong. You can
+ investigate further with the `--dump-bodies` flag.

-Leave blank to use the provider defaults.
+ This may also be caused by specifying the region when you shouldn't
+ have (e.g. OVH).

-Properties:
+ ### Rclone gives Failed to create file system: Response didn't have storage url and auth token

-- Config: token_url
-- Env Var: RCLONE_PCLOUD_TOKEN_URL
-- Type: string
-- Required: false
+ This is most likely caused by forgetting to specify your tenant when
+ setting up a swift remote.

--pcloud-encoding
+ ## OVH Cloud Archive

-The encoding for the backend.
+ To use rclone with OVH cloud archive, first use `rclone config` to set up a `swift` backend with OVH, choosing `pca` as the `storage_policy`.

-See the encoding section in the overview for more info.
+ ### Uploading Objects

-Properties:
+ Uploading objects to OVH cloud archive is no different to object storage; you simply run the command you like (move, copy or sync) to upload the objects. Once uploaded the objects will show in a "Frozen" state within the OVH control panel.

-- Config: encoding
-- Env Var: RCLONE_PCLOUD_ENCODING
-- Type: MultiEncoder
-- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+ ### Retrieving Objects

--pcloud-root-folder-id
+ To retrieve objects use `rclone copy` as normal. If the objects are in a frozen state then rclone will ask for them all to be unfrozen and it will wait at the end of the output with a message like the following:

-Fill in for rclone to use a non root folder as its starting point.
+ `2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)`

-Properties:
+ Rclone will wait for the time specified then retry the copy.

-- Config: root_folder_id
-- Env Var: RCLONE_PCLOUD_ROOT_FOLDER_ID
-- Type: string
-- Default: "d0"
+ # pCloud

--pcloud-hostname
+ Paths are specified as `remote:path`

-Hostname to connect to.
+ Paths may be as deep as required, e.g. `remote:directory/subdirectory`.

-This is normally set when rclone initially does the oauth connection,
-however you will need to set it by hand if you are using remote config
-with rclone authorize.
+ ## Configuration

-Properties:
+ The initial setup for pCloud involves getting a token from pCloud which you
+ need to do in your browser. `rclone config` walks you through it.

-- Config: hostname
-- Env Var: RCLONE_PCLOUD_HOSTNAME
-- Type: string
-- Default: "api.pcloud.com"
-- Examples:
-    - "api.pcloud.com"
-        - Original/US region
-    - "eapi.pcloud.com"
-        - EU region
+ Here is an example of how to make a remote called `remote`. First run:

--pcloud-username
+ rclone config

-Your pcloud username.
+ This will guide you through an interactive setup process:

-This is only required when you want to use the cleanup command. Due to a
-bug in the pcloud API the required API does not support OAuth
-authentication so we have to rely on user password authentication for
-it.
+No remotes found, make a new one? n) New remote s) Set configuration
+password q) Quit config n/s/q> n name> remote Type of storage to
+configure.
+Choose a number from below, or type in your own value [snip]
+XX / Pcloud  "pcloud" [snip] Storage> pcloud Pcloud App Client Id -
+leave blank normally. client_id> Pcloud App Client Secret - leave blank
+normally. client_secret> Remote config Use web browser to automatically
+authenticate rclone with remote? * Say Y if the machine running rclone
+has a web browser you can use * Say N if running rclone on a (remote)
+machine without web browser access If not sure try Y. If Y failed, try
+N. y) Yes n) No y/n> y If your browser doesn't open automatically go to
+the following link: http://127.0.0.1:53682/auth Log in and authorize
+rclone for access Waiting for code... Got code --------------------
+[remote] client_id = client_secret = token =
+{"access_token":"XXX","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}
+-------------------- y) Yes this is OK e) Edit this remote d) Delete
+this remote y/e/d> y

-Properties:

-- Config: username
-- Env Var: RCLONE_PCLOUD_USERNAME
-- Type: string
-- Required: false
+ See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
+ machine with no Internet browser available.

--pcloud-password
+ Note that rclone runs a webserver on your local machine to collect the
+ token as returned from pCloud. This only runs from the moment it opens
+ your browser to the moment you get back the verification code. This
+ is on `http://127.0.0.1:53682/` and it may require you to unblock
+ it temporarily if you are running a host firewall.

-Your pcloud password.
+ Once configured you can then use `rclone` like this,

-NB Input to this must be obscured - see rclone obscure.
+ List directories in top level of your pCloud

-Properties:
+ rclone lsd remote:

-- Config: password
-- Env Var: RCLONE_PCLOUD_PASSWORD
-- Type: string
-- Required: false
+ List all the files in your pCloud

-PikPak
+ rclone ls remote:

-PikPak is a private cloud drive.
+ To copy a local directory to a pCloud directory called backup

-Paths are specified as remote:path, and may be as deep as required, e.g.
-remote:directory/subdirectory.
+ rclone copy /home/source remote:backup

-Configuration
+ ### Modified time and hashes ###

-Here is an example of making a remote for PikPak.
+ pCloud allows modification times to be set on objects accurate to 1
+ second. These will be used to detect whether objects need syncing or
+ not. In order to set a Modification time pCloud requires the object
+ be re-uploaded.

-First run:
+ pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and SHA256
+ hashes in the EU region, so you can use the `--checksum` flag.

- rclone config
+ ### Restricted filename characters

-This will guide you through an interactive setup process:
+ In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
+ the following characters are also replaced:

- No remotes found, make a new one?
- n) New remote
- s) Set configuration password
- q) Quit config
- n/s/q> n
+ | Character | Value | Replacement |
+ | --------- |:-----:|:-----------:|
+ | \ | 0x5C | ＼ |

- Enter name for new remote.
- name> remote
+ Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+ as they can't be used in JSON strings.

- Option Storage.
- Type of storage to configure.
- Choose a number from below, or type in your own value.
- XX / PikPak
- \ (pikpak)
- Storage> XX
+ ### Deleting files
+
+ Deleted files will be moved to the trash.
Your subscription level + will determine how long items stay in the trash. `rclone cleanup` can + be used to empty the trash. + + ### Emptying the trash + + Due to an API limitation, the `rclone cleanup` command will only work if you + set your username and password in the advanced options for this backend. + Since we generally want to avoid storing user passwords in the rclone config + file, we advise you to only set this up if you need the `rclone cleanup` command to work. + + ### Root folder ID + + You can set the `root_folder_id` for rclone. This is the directory + (identified by its `Folder ID`) that rclone considers to be the root + of your pCloud drive. + + Normally you will leave this blank and rclone will determine the + correct root to use itself. + + However you can set this to restrict rclone to a specific folder + hierarchy. + + In order to do this you will have to find the `Folder ID` of the + directory you wish rclone to display. This will be the `folder` field + of the URL when you open the relevant folder in the pCloud web + interface. + + So if the folder you want rclone to use has a URL which looks like + `https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid` + in the browser, then you use `5xxxxxxxx8` as + the `root_folder_id` in the config. + + + ### Standard options + + Here are the Standard options specific to pcloud (Pcloud). + + #### --pcloud-client-id + + OAuth Client Id. + + Leave blank normally. + + Properties: + + - Config: client_id + - Env Var: RCLONE_PCLOUD_CLIENT_ID + - Type: string + - Required: false + + #### --pcloud-client-secret + + OAuth Client Secret. + + Leave blank normally. + + Properties: + + - Config: client_secret + - Env Var: RCLONE_PCLOUD_CLIENT_SECRET + - Type: string + - Required: false + + ### Advanced options + + Here are the Advanced options specific to pcloud (Pcloud). + + #### --pcloud-token + + OAuth Access Token as a JSON blob. + + Properties: + + - Config: token + - Env Var: RCLONE_PCLOUD_TOKEN + - Type: string + - Required: false + + #### --pcloud-auth-url + + Auth server URL. + + Leave blank to use the provider defaults. + + Properties: + + - Config: auth_url + - Env Var: RCLONE_PCLOUD_AUTH_URL + - Type: string + - Required: false + + #### --pcloud-token-url + + Token server url. + + Leave blank to use the provider defaults. + + Properties: + + - Config: token_url + - Env Var: RCLONE_PCLOUD_TOKEN_URL + - Type: string + - Required: false + + #### --pcloud-encoding + + The encoding for the backend. + + See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + + Properties: + + - Config: encoding + - Env Var: RCLONE_PCLOUD_ENCODING + - Type: MultiEncoder + - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot + + #### --pcloud-root-folder-id + + Fill in for rclone to use a non root folder as its starting point. + + Properties: + + - Config: root_folder_id + - Env Var: RCLONE_PCLOUD_ROOT_FOLDER_ID + - Type: string + - Default: "d0" + + #### --pcloud-hostname + + Hostname to connect to. + + This is normally set when rclone initially does the oauth connection, + however you will need to set it by hand if you are using remote config + with rclone authorize. + + + Properties: + + - Config: hostname + - Env Var: RCLONE_PCLOUD_HOSTNAME + - Type: string + - Default: "api.pcloud.com" + - Examples: + - "api.pcloud.com" + - Original/US region + - "eapi.pcloud.com" + - EU region + + #### --pcloud-username + + Your pcloud username. 
+ + This is only required when you want to use the cleanup command. Due to a bug + in the pcloud API the required API does not support OAuth authentication so + we have to rely on user password authentication for it. + + Properties: + + - Config: username + - Env Var: RCLONE_PCLOUD_USERNAME + - Type: string + - Required: false + + #### --pcloud-password + + Your pcloud password. + + **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + + Properties: + + - Config: password + - Env Var: RCLONE_PCLOUD_PASSWORD + - Type: string + - Required: false + + + + # PikPak + + PikPak is [a private cloud drive](https://mypikpak.com/). + + Paths are specified as `remote:path`, and may be as deep as required, e.g. `remote:directory/subdirectory`. + + ## Configuration + + Here is an example of making a remote for PikPak. + + First run: + + rclone config + + This will guide you through an interactive setup process: + +No remotes found, make a new one? n) New remote s) Set configuration +password q) Quit config n/s/q> n + +Enter name for new remote. name> remote + +Option Storage. Type of storage to configure. Choose a number from +below, or type in your own value. XX / PikPak  (pikpak) Storage> XX + +Option user. Pikpak username. Enter a value. user> USERNAME + +Option pass. Pikpak password. Choose an alternative below. y) Yes, type +in my own password g) Generate random password y/g> y Enter the +password: password: Confirm the password: password: + +Edit advanced config? y) Yes n) No (default) y/n> + +Configuration complete. Options: - type: pikpak - user: USERNAME - pass: +*** ENCRYPTED *** - token: +{"access_token":"eyJ...","token_type":"Bearer","refresh_token":"os...","expiry":"2023-01-26T18:54:32.170582647+09:00"} +Keep this "remote" remote? y) Yes this is OK (default) e) Edit this +remote d) Delete this remote y/e/d> y + + + + ### Standard options + + Here are the Standard options specific to pikpak (PikPak). + + #### --pikpak-user - Option user. Pikpak username. - Enter a value. - user> USERNAME - Option pass. + Properties: + + - Config: user + - Env Var: RCLONE_PIKPAK_USER + - Type: string + - Required: true + + #### --pikpak-pass + Pikpak password. - Choose an alternative below. - y) Yes, type in my own password - g) Generate random password - y/g> y - Enter the password: - password: - Confirm the password: - password: - Edit advanced config? - y) Yes - n) No (default) - y/n> + **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - Configuration complete. - Options: - - type: pikpak - - user: USERNAME - - pass: *** ENCRYPTED *** - - token: {"access_token":"eyJ...","token_type":"Bearer","refresh_token":"os...","expiry":"2023-01-26T18:54:32.170582647+09:00"} - Keep this "remote" remote? - y) Yes this is OK (default) - e) Edit this remote - d) Delete this remote - y/e/d> y + Properties: -Standard options + - Config: pass + - Env Var: RCLONE_PIKPAK_PASS + - Type: string + - Required: true -Here are the Standard options specific to pikpak (PikPak). + ### Advanced options ---pikpak-user + Here are the Advanced options specific to pikpak (PikPak). -Pikpak username. + #### --pikpak-client-id -Properties: + OAuth Client Id. -- Config: user -- Env Var: RCLONE_PIKPAK_USER -- Type: string -- Required: true + Leave blank normally. ---pikpak-pass + Properties: -Pikpak password. 
+ - Config: client_id + - Env Var: RCLONE_PIKPAK_CLIENT_ID + - Type: string + - Required: false -NB Input to this must be obscured - see rclone obscure. + #### --pikpak-client-secret -Properties: + OAuth Client Secret. -- Config: pass -- Env Var: RCLONE_PIKPAK_PASS -- Type: string -- Required: true + Leave blank normally. -Advanced options + Properties: -Here are the Advanced options specific to pikpak (PikPak). + - Config: client_secret + - Env Var: RCLONE_PIKPAK_CLIENT_SECRET + - Type: string + - Required: false ---pikpak-client-id + #### --pikpak-token -OAuth Client Id. + OAuth Access Token as a JSON blob. -Leave blank normally. + Properties: -Properties: + - Config: token + - Env Var: RCLONE_PIKPAK_TOKEN + - Type: string + - Required: false -- Config: client_id -- Env Var: RCLONE_PIKPAK_CLIENT_ID -- Type: string -- Required: false + #### --pikpak-auth-url ---pikpak-client-secret + Auth server URL. -OAuth Client Secret. + Leave blank to use the provider defaults. -Leave blank normally. + Properties: -Properties: + - Config: auth_url + - Env Var: RCLONE_PIKPAK_AUTH_URL + - Type: string + - Required: false -- Config: client_secret -- Env Var: RCLONE_PIKPAK_CLIENT_SECRET -- Type: string -- Required: false + #### --pikpak-token-url ---pikpak-token + Token server url. -OAuth Access Token as a JSON blob. + Leave blank to use the provider defaults. -Properties: + Properties: -- Config: token -- Env Var: RCLONE_PIKPAK_TOKEN -- Type: string -- Required: false + - Config: token_url + - Env Var: RCLONE_PIKPAK_TOKEN_URL + - Type: string + - Required: false ---pikpak-auth-url + #### --pikpak-root-folder-id -Auth server URL. + ID of the root folder. + Leave blank normally. -Leave blank to use the provider defaults. + Fill in for rclone to use a non root folder as its starting point. -Properties: -- Config: auth_url -- Env Var: RCLONE_PIKPAK_AUTH_URL -- Type: string -- Required: false + Properties: ---pikpak-token-url + - Config: root_folder_id + - Env Var: RCLONE_PIKPAK_ROOT_FOLDER_ID + - Type: string + - Required: false -Token server url. + #### --pikpak-use-trash -Leave blank to use the provider defaults. + Send files to the trash instead of deleting permanently. -Properties: + Defaults to true, namely sending files to the trash. + Use `--pikpak-use-trash=false` to delete files permanently instead. -- Config: token_url -- Env Var: RCLONE_PIKPAK_TOKEN_URL -- Type: string -- Required: false + Properties: ---pikpak-root-folder-id + - Config: use_trash + - Env Var: RCLONE_PIKPAK_USE_TRASH + - Type: bool + - Default: true -ID of the root folder. Leave blank normally. + #### --pikpak-trashed-only -Fill in for rclone to use a non root folder as its starting point. + Only show files that are in the trash. -Properties: + This will show trashed files in their original directory structure. -- Config: root_folder_id -- Env Var: RCLONE_PIKPAK_ROOT_FOLDER_ID -- Type: string -- Required: false + Properties: ---pikpak-use-trash + - Config: trashed_only + - Env Var: RCLONE_PIKPAK_TRASHED_ONLY + - Type: bool + - Default: false -Send files to the trash instead of deleting permanently. + #### --pikpak-hash-memory-limit -Defaults to true, namely sending files to the trash. Use ---pikpak-use-trash=false to delete files permanently instead. + Files bigger than this will be cached on disk to calculate hash if required. 
-Properties:

+ Properties:

-- Config: use_trash
-- Env Var: RCLONE_PIKPAK_USE_TRASH
-- Type: bool
-- Default: true
+ - Config: hash_memory_limit
+ - Env Var: RCLONE_PIKPAK_HASH_MEMORY_LIMIT
+ - Type: SizeSuffix
+ - Default: 10Mi

--pikpak-trashed-only
+ #### --pikpak-encoding

-Only show files that are in the trash.
+ The encoding for the backend.

-This will show trashed files in their original directory structure.
+ See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.

-Properties:
+ Properties:

-- Config: trashed_only
-- Env Var: RCLONE_PIKPAK_TRASHED_ONLY
-- Type: bool
-- Default: false
+ - Config: encoding
+ - Env Var: RCLONE_PIKPAK_ENCODING
+ - Type: MultiEncoder
+ - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot

--pikpak-hash-memory-limit
+ ## Backend commands

-Files bigger than this will be cached on disk to calculate hash if
-required.
+ Here are the commands specific to the pikpak backend.

-Properties:
+ Run them with

-- Config: hash_memory_limit
-- Env Var: RCLONE_PIKPAK_HASH_MEMORY_LIMIT
-- Type: SizeSuffix
-- Default: 10Mi
+ rclone backend COMMAND remote:

--pikpak-encoding
+ The help below will explain what arguments each command takes.

-The encoding for the backend.
+ See the [backend](https://rclone.org/commands/rclone_backend/) command for more
+ info on how to pass options and arguments.

-See the encoding section in the overview for more info.
+ These can be run on a running backend using the rc command
+ [backend/command](https://rclone.org/rc/#backend-command).

-Properties:
+ ### addurl

-- Config: encoding
-- Env Var: RCLONE_PIKPAK_ENCODING
-- Type: MultiEncoder
-- Default:
- Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot
+ Add offline download task for url

-Backend commands
+ rclone backend addurl remote: [options] [<arguments>+]

-Here are the commands specific to the pikpak backend.
+ This command adds an offline download task for a url.

-Run them with
+ Usage:

- rclone backend COMMAND remote:
+ rclone backend addurl pikpak:dirpath url

-The help below will explain what arguments each command takes.
+ Downloads will be stored in 'dirpath'. If 'dirpath' is invalid,
+ download will fall back to the default 'My Pack' folder.

-See the backend command for more info on how to pass options and
-arguments.

-These can be run on a running backend using the rc command
-backend/command.
+ ### decompress

-addurl
+ Request decompress of a file/files in a folder

-Add offline download task for url
+ rclone backend decompress remote: [options] [<arguments>+]

- rclone backend addurl remote: [options] [<arguments>+]
+ This command requests decompress of file/files in a folder.

-This command adds offline download task for url.
+ Usage:

-Usage:
+ rclone backend decompress pikpak:dirpath {filename} -o password=password
+ rclone backend decompress pikpak:dirpath {filename} -o delete-src-file

- rclone backend addurl pikpak:dirpath url
+ An optional argument 'filename' can be specified for a file located in
+ 'pikpak:dirpath'. You may want to pass '-o password=password' for
+ password-protected files. Also, pass '-o delete-src-file' to delete
+ source files after decompression has finished.

-Downloads will be stored in 'dirpath'. If 'dirpath' is invalid, download
-will fallback to default 'My Pack' folder.
+ Result:

-decompress

+ {
+ "Decompressed": 17,
+ "SourceDeleted": 0,
+ "Errors": 0
+ }

-Request decompress of a file/files in a folder

- rclone backend decompress remote: [options] [<arguments>+]

-This command requests decompress of file/files in a folder.

-Usage:
+ ## Limitations ##

- rclone backend decompress pikpak:dirpath {filename} -o password=password
- rclone backend decompress pikpak:dirpath {filename} -o delete-src-file
+ ### Hashes ###

-An optional argument 'filename' can be specified for a file located in
-'pikpak:dirpath'. You may want to pass '-o password=password' for a
-password-protected files. Also, pass '-o delete-src-file' to delete
-source files after decompression finished.
+ PikPak supports MD5 hash, but it is sometimes empty, especially for user-uploaded files.

-Result:
+ ### Deleted files ###

- {
- "Decompressed": 17,
- "SourceDeleted": 0,
- "Errors": 0
- }
+ Deleted files will still be visible with `--pikpak-trashed-only` even after the trash is emptied. This goes away after a few days.

-Limitations
+ # premiumize.me

-Hashes
+ Paths are specified as `remote:path`

-PikPak supports MD5 hash, but sometimes given empty especially for
-user-uploaded files.
+ Paths may be as deep as required, e.g. `remote:directory/subdirectory`.

-Deleted files
+ ## Configuration

-Deleted files will still be visible with --pikpak-trashed-only even
-after the trash emptied. This goes away after few days.
+ The initial setup for [premiumize.me](https://premiumize.me/) involves getting a token from premiumize.me which you
+ need to do in your browser. `rclone config` walks you through it.

-premiumize.me
+ Here is an example of how to make a remote called `remote`. First run:

-Paths are specified as remote:path
+ rclone config

-Paths may be as deep as required, e.g. remote:directory/subdirectory.
+ This will guide you through an interactive setup process:

-Configuration
+No remotes found, make a new one? n) New remote s) Set configuration
+password q) Quit config n/s/q> n name> remote Type of storage to
+configure. Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value [snip] XX /
+premiumize.me  "premiumizeme" [snip] Storage> premiumizeme ** See help
+for premiumizeme backend at: https://rclone.org/premiumizeme/ **

-The initial setup for premiumize.me involves getting a token from
-premiumize.me which you need to do in your browser. rclone config walks
-you through it.
+Remote config Use web browser to automatically authenticate rclone with
+remote? * Say Y if the machine running rclone has a web browser you can
+use * Say N if running rclone on a (remote) machine without web browser
+access If not sure try Y. If Y failed, try N. y) Yes n) No y/n> y If
+your browser doesn't open automatically go to the following link:
+http://127.0.0.1:53682/auth Log in and authorize rclone for access
+Waiting for code... Got code -------------------- [remote] type =
+premiumizeme token =
+{"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2029-08-07T18:44:15.548915378+01:00"}
+-------------------- y) Yes this is OK e) Edit this remote d) Delete
+this remote y/e/d>

-Here is an example of how to make a remote called remote. First run:

- rclone config
+ See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
+ machine with no Internet browser available.
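+ As a rough sketch of that headless flow (the remote name here is just
+ an example), run the authorization step on a machine that does have a
+ browser, then paste the resulting token into the config session on the
+ browser-less machine:
+
+     # On the machine with a web browser
+     rclone authorize "premiumizeme"
+
+     # Answer N to the "Use web browser" question in `rclone config` on
+     # the headless machine, then paste the token printed above.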
-This will guide you through an interactive setup process:
+ Note that rclone runs a webserver on your local machine to collect the
+ token as returned from premiumize.me. This only runs from the moment it opens
+ your browser to the moment you get back the verification code. This
+ is on `http://127.0.0.1:53682/` and it may require you to unblock
+ it temporarily if you are running a host firewall.

- No remotes found, make a new one?
- n) New remote
- s) Set configuration password
- q) Quit config
- n/s/q> n
- name> remote
- Type of storage to configure.
- Enter a string value. Press Enter for the default ("").
- Choose a number from below, or type in your own value
- [snip]
- XX / premiumize.me
- \ "premiumizeme"
- [snip]
- Storage> premiumizeme
- ** See help for premiumizeme backend at: https://rclone.org/premiumizeme/ **
+ Once configured you can then use `rclone` like this,

- Remote config
- Use web browser to automatically authenticate rclone with remote?
- * Say Y if the machine running rclone has a web browser you can use
- * Say N if running rclone on a (remote) machine without web browser access
- If not sure try Y. If Y failed, try N.
- y) Yes
- n) No
- y/n> y
- If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
- Log in and authorize rclone for access
- Waiting for code...
- Got code
- --------------------
- [remote]
- type = premiumizeme
- token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2029-08-07T18:44:15.548915378+01:00"}
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d>
+ List directories in top level of your premiumize.me

-See the remote setup docs for how to set it up on a machine with no
-Internet browser available.
+ rclone lsd remote:

-Note that rclone runs a webserver on your local machine to collect the
-token as returned from premiumize.me. This only runs from the moment it
-opens your browser to the moment you get back the verification code.
-This is on http://127.0.0.1:53682/ and this it may require you to
-unblock it temporarily if you are running a host firewall.
+ List all the files in your premiumize.me

-Once configured you can then use rclone like this,
+ rclone ls remote:

-List directories in top level of your premiumize.me
+ To copy a local directory to a premiumize.me directory called backup

- rclone lsd remote:
+ rclone copy /home/source remote:backup

-List all the files in your premiumize.me
+ ### Modified time and hashes

- rclone ls remote:
+ premiumize.me does not support modification times or hashes, therefore
+ syncing will default to `--size-only` checking. Note that using
+ `--update` will work.

-To copy a local directory to an premiumize.me directory called backup
+ ### Restricted filename characters

- rclone copy /home/source remote:backup
+ In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
+ the following characters are also replaced:

-Modified time and hashes
+ | Character | Value | Replacement |
+ | --------- |:-----:|:-----------:|
+ | \ | 0x5C | ＼ |
+ | " | 0x22 | ＂ |

-premiumize.me does not support modification times or hashes, therefore
-syncing will default to --size-only checking. Note that using --update
-will work.
+ Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+ as they can't be used in JSON strings.
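+ For example (the paths here are illustrative only), since there are no
+ modification times or hashes to compare, rclone effectively falls back
+ to size comparison, and `--update` can help avoid clobbering files that
+ changed locally:
+
+     # Size comparison is the effective check on this backend
+     rclone sync --interactive /home/source remote:backup
+
+     # Skip files that are newer on the destination
+     rclone copy --update /home/source remote:backup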
-Restricted filename characters -In addition to the default restricted characters set the following -characters are also replaced: + ### Standard options - Character Value Replacement - ----------- ------- ------------- - \ 0x5C \ - " 0x22 " + Here are the Standard options specific to premiumizeme (premiumize.me). -Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON -strings. + #### --premiumizeme-client-id -Standard options + OAuth Client Id. -Here are the Standard options specific to premiumizeme (premiumize.me). + Leave blank normally. ---premiumizeme-api-key + Properties: -API Key. + - Config: client_id + - Env Var: RCLONE_PREMIUMIZEME_CLIENT_ID + - Type: string + - Required: false -This is not normally used - use oauth instead. + #### --premiumizeme-client-secret -Properties: + OAuth Client Secret. -- Config: api_key -- Env Var: RCLONE_PREMIUMIZEME_API_KEY -- Type: string -- Required: false - -Advanced options - -Here are the Advanced options specific to premiumizeme (premiumize.me). - ---premiumizeme-encoding - -The encoding for the backend. - -See the encoding section in the overview for more info. + Leave blank normally. -Properties: + Properties: -- Config: encoding -- Env Var: RCLONE_PREMIUMIZEME_ENCODING -- Type: MultiEncoder -- Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot + - Config: client_secret + - Env Var: RCLONE_PREMIUMIZEME_CLIENT_SECRET + - Type: string + - Required: false -Limitations + #### --premiumizeme-api-key -Note that premiumize.me is case insensitive so you can't have a file -called "Hello.doc" and one called "hello.doc". + API Key. -premiumize.me file names can't have the \ or " characters in. rclone -maps these to and from an identical looking unicode equivalents \ and -" + This is not normally used - use oauth instead. -premiumize.me only supports filenames up to 255 characters in length. -put.io + Properties: -Paths are specified as remote:path + - Config: api_key + - Env Var: RCLONE_PREMIUMIZEME_API_KEY + - Type: string + - Required: false -put.io paths may be as deep as required, e.g. -remote:directory/subdirectory. + ### Advanced options -Configuration + Here are the Advanced options specific to premiumizeme (premiumize.me). -The initial setup for put.io involves getting a token from put.io which -you need to do in your browser. rclone config walks you through it. + #### --premiumizeme-token -Here is an example of how to make a remote called remote. First run: + OAuth Access Token as a JSON blob. - rclone config + Properties: -This will guide you through an interactive setup process: + - Config: token + - Env Var: RCLONE_PREMIUMIZEME_TOKEN + - Type: string + - Required: false - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n - name> putio - Type of storage to configure. - Enter a string value. Press Enter for the default (""). - Choose a number from below, or type in your own value - [snip] - XX / Put.io - \ "putio" - [snip] - Storage> putio - ** See help for putio backend at: https://rclone.org/putio/ ** + #### --premiumizeme-auth-url - Remote config - Use web browser to automatically authenticate rclone with remote? - * Say Y if the machine running rclone has a web browser you can use - * Say N if running rclone on a (remote) machine without web browser access - If not sure try Y. If Y failed, try N. 
- y) Yes - n) No - y/n> y - If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth - Log in and authorize rclone for access - Waiting for code... - Got code - -------------------- - [putio] - type = putio - token = {"access_token":"XXXXXXXX","expiry":"0001-01-01T00:00:00Z"} - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y - Current remotes: + Auth server URL. - Name Type - ==== ==== - putio putio + Leave blank to use the provider defaults. - e) Edit existing remote - n) New remote - d) Delete remote - r) Rename remote - c) Copy remote - s) Set configuration password - q) Quit config - e/n/d/r/c/s/q> q + Properties: -See the remote setup docs for how to set it up on a machine with no -Internet browser available. + - Config: auth_url + - Env Var: RCLONE_PREMIUMIZEME_AUTH_URL + - Type: string + - Required: false -Note that rclone runs a webserver on your local machine to collect the -token as returned from put.io if using web browser to automatically -authenticate. This only runs from the moment it opens your browser to -the moment you get back the verification code. This is on -http://127.0.0.1:53682/ and this it may require you to unblock it -temporarily if you are running a host firewall, or use manual mode. + #### --premiumizeme-token-url -You can then use it like this, + Token server url. -List directories in top level of your put.io + Leave blank to use the provider defaults. - rclone lsd remote: - -List all the files in your put.io - - rclone ls remote: - -To copy a local directory to a put.io directory called backup - - rclone copy /home/source remote:backup - -Restricted filename characters - -In addition to the default restricted characters set the following -characters are also replaced: - - Character Value Replacement - ----------- ------- ------------- - \ 0x5C \ - -Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON -strings. - -Advanced options - -Here are the Advanced options specific to putio (Put.io). - ---putio-encoding - -The encoding for the backend. - -See the encoding section in the overview for more info. - -Properties: - -- Config: encoding -- Env Var: RCLONE_PUTIO_ENCODING -- Type: MultiEncoder -- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot - -Limitations - -put.io has rate limiting. When you hit a limit, rclone automatically -retries after waiting the amount of time requested by the server. - -If you want to avoid ever hitting these limits, you may use the ---tpslimit flag with a low number. Note that the imposed limits may be -different for different operations, and may change over time. - -Seafile - -This is a backend for the Seafile storage service: - It works with both -the free community edition or the professional edition. - Seafile -versions 6.x, 7.x, 8.x and 9.x are all supported. - Encrypted libraries -are also supported. - It supports 2FA enabled users - Using a Library -API Token is not supported - -Configuration - -There are two distinct modes you can setup your remote: - you point your -remote to the root of the server, meaning you don't specify a library -during the configuration: Paths are specified as remote:library. You may -put subdirectories in too, e.g. remote:library/path/to/dir. - you point -your remote to a specific library during the configuration: Paths are -specified as remote:path/to/dir. This is the recommended mode when using -encrypted libraries. 
(This mode is possibly slightly faster than the -root mode) - -Configuration in root mode - -Here is an example of making a seafile configuration for a user with no -two-factor authentication. First run - - rclone config - -This will guide you through an interactive setup process. To -authenticate you will need the URL of your server, your email (or -username) and your password. - - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n - name> seafile - Type of storage to configure. - Enter a string value. Press Enter for the default (""). - Choose a number from below, or type in your own value - [snip] - XX / Seafile - \ "seafile" - [snip] - Storage> seafile - ** See help for seafile backend at: https://rclone.org/seafile/ ** - - URL of seafile host to connect to - Enter a string value. Press Enter for the default (""). - Choose a number from below, or type in your own value - 1 / Connect to cloud.seafile.com - \ "https://cloud.seafile.com/" - url> http://my.seafile.server/ - User name (usually email address) - Enter a string value. Press Enter for the default (""). - user> me@example.com - Password - y) Yes type in my own password - g) Generate random password - n) No leave this optional password blank (default) - y/g> y - Enter the password: - password: - Confirm the password: - password: - Two-factor authentication ('true' if the account has 2FA enabled) - Enter a boolean value (true or false). Press Enter for the default ("false"). - 2fa> false - Name of the library. Leave blank to access all non-encrypted libraries. - Enter a string value. Press Enter for the default (""). - library> - Library password (for encrypted libraries only). Leave blank if you pass it through the command line. - y) Yes type in my own password - g) Generate random password - n) No leave this optional password blank (default) - y/g/n> n - Edit advanced config? (y/n) - y) Yes - n) No (default) - y/n> n - Remote config - Two-factor authentication is not enabled on this account. - -------------------- - [seafile] - type = seafile - url = http://my.seafile.server/ - user = me@example.com - pass = *** ENCRYPTED *** - 2fa = false - -------------------- - y) Yes this is OK (default) - e) Edit this remote - d) Delete this remote - y/e/d> y - -This remote is called seafile. It's pointing to the root of your seafile -server and can now be used like this: - -See all libraries - - rclone lsd seafile: - -Create a new library - - rclone mkdir seafile:library - -List the contents of a library - - rclone ls seafile:library - -Sync /home/local/directory to the remote library, deleting any excess -files in the library. - - rclone sync --interactive /home/local/directory seafile:library - -Configuration in library mode - -Here's an example of a configuration in library mode with a user that -has the two-factor authentication enabled. Your 2FA code will be asked -at the end of the configuration, and will attempt to authenticate you: - - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n - name> seafile - Type of storage to configure. - Enter a string value. Press Enter for the default (""). - Choose a number from below, or type in your own value - [snip] - XX / Seafile - \ "seafile" - [snip] - Storage> seafile - ** See help for seafile backend at: https://rclone.org/seafile/ ** - - URL of seafile host to connect to - Enter a string value. Press Enter for the default (""). 
- Choose a number from below, or type in your own value - 1 / Connect to cloud.seafile.com - \ "https://cloud.seafile.com/" - url> http://my.seafile.server/ - User name (usually email address) - Enter a string value. Press Enter for the default (""). - user> me@example.com - Password - y) Yes type in my own password - g) Generate random password - n) No leave this optional password blank (default) - y/g> y - Enter the password: - password: - Confirm the password: - password: - Two-factor authentication ('true' if the account has 2FA enabled) - Enter a boolean value (true or false). Press Enter for the default ("false"). - 2fa> true - Name of the library. Leave blank to access all non-encrypted libraries. - Enter a string value. Press Enter for the default (""). - library> My Library - Library password (for encrypted libraries only). Leave blank if you pass it through the command line. - y) Yes type in my own password - g) Generate random password - n) No leave this optional password blank (default) - y/g/n> n - Edit advanced config? (y/n) - y) Yes - n) No (default) - y/n> n - Remote config - Two-factor authentication: please enter your 2FA code - 2fa code> 123456 - Authenticating... - Success! - -------------------- - [seafile] - type = seafile - url = http://my.seafile.server/ - user = me@example.com - pass = - 2fa = true - library = My Library - -------------------- - y) Yes this is OK (default) - e) Edit this remote - d) Delete this remote - y/e/d> y - -You'll notice your password is blank in the configuration. It's because -we only need the password to authenticate you once. + Properties: -You specified My Library during the configuration. The root of the -remote is pointing at the root of the library My Library: + - Config: token_url + - Env Var: RCLONE_PREMIUMIZEME_TOKEN_URL + - Type: string + - Required: false -See all files in the library: + #### --premiumizeme-encoding - rclone lsd seafile: + The encoding for the backend. -Create a new directory inside the library + See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - rclone mkdir seafile:directory + Properties: -List the contents of a directory + - Config: encoding + - Env Var: RCLONE_PREMIUMIZEME_ENCODING + - Type: MultiEncoder + - Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot - rclone ls seafile:directory -Sync /home/local/directory to the remote library, deleting any excess -files in the library. - rclone sync --interactive /home/local/directory seafile: + ## Limitations ---fast-list + Note that premiumize.me is case insensitive so you can't have a file called + "Hello.doc" and one called "hello.doc". -Seafile version 7+ supports --fast-list which allows you to use fewer -transactions in exchange for more memory. See the rclone docs for more -details. Please note this is not supported on seafile server version 6.x + premiumize.me file names can't have the `\` or `"` characters in. + rclone maps these to and from an identical looking unicode equivalents + `\` and `"` -Restricted filename characters + premiumize.me only supports filenames up to 255 characters in length. -In addition to the default restricted characters set the following -characters are also replaced: + # Proton Drive - Character Value Replacement - ----------- ------- ------------- - / 0x2F / - " 0x22 " - \ 0x5C \ + [Proton Drive](https://proton.me/drive) is an end-to-end encrypted Swiss vault + for your files that protects your data. 
-Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON
-strings.
+ This is an rclone backend for Proton Drive which supports the file transfer
+ features of Proton Drive using the same client-side encryption.

-Seafile and rclone link
+ Because Proton Drive doesn't publish its API documentation, this backend
+ is implemented on a best-effort basis by reading the open-sourced client
+ source code and observing the Proton Drive traffic in the browser.

-Rclone supports generating share links for non-encrypted libraries only.
-They can either be for a file or a directory:
+ **NB** This backend is currently in Beta. It is believed to be correct
+ and all the integration tests pass. However, the Proton Drive protocol
+ has evolved over time, so there may be accounts it is not compatible
+ with. Please [post on the rclone forum](https://forum.rclone.org/) if
+ you find an incompatibility.

- rclone link seafile:seafile-tutorial.doc
- http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/
+ Paths are specified as `remote:path`

-or if run on a directory you will get:
+ Paths may be as deep as required, e.g. `remote:directory/subdirectory`.

- rclone link seafile:dir
- http://my.seafile.server/d/9ea2455f6f55478bbb0d/
+ ## Configuration

-Please note a share link is unique for each file or directory. If you
-run a link command on a file/dir that has already been shared, you will
-get the exact same link.
+ Here is an example of how to make a remote called `remote`. First run:

-Compatibility
+ rclone config

-It has been actively developed using the seafile docker image of these
-versions: - 6.3.4 community edition - 7.0.5 community edition - 7.1.3
-community edition - 9.0.10 community edition
+ This will guide you through an interactive setup process:

-Versions below 6.0 are not supported. Versions between 6.0 and 6.3
-haven't been tested and might not work properly.
+No remotes found, make a new one? n) New remote s) Set configuration
+password q) Quit config n/s/q> n name> remote Type of storage to
+configure. Choose a number from below, or type in your own value [snip]
+XX / Proton Drive  "Proton Drive" [snip] Storage> protondrive User name
+user> you@protonmail.com Password. y) Yes type in my own password g)
+Generate random password n) No leave this optional password blank y/g/n>
+y Enter the password: password: Confirm the password: password: Option
+2fa. 2FA code (if the account requires one) Enter a value. Press Enter
+to leave empty. 2fa> 123456 Remote config -------------------- [remote]
+type = protondrive user = you@protonmail.com pass = *** ENCRYPTED ***
+-------------------- y) Yes this is OK e) Edit this remote d) Delete
+this remote y/e/d> y

-Each new version of rclone is automatically tested against the latest
-docker image of the seafile community server.
+ **NOTE:** The Proton Drive encryption keys need to have been already generated
+ after a regular login via the browser, otherwise attempting to use the
+ credentials in `rclone` will fail.

-Standard options
+ Once configured you can then use `rclone` like this,

-Here are the Standard options specific to seafile (seafile).
+ List directories in top level of your Proton Drive

--seafile-url
+ rclone lsd remote:

-URL of seafile host to connect to.
+ List all the files in your Proton Drive

-Properties:

-- Config: url
-- Env Var: RCLONE_SEAFILE_URL
-- Type: string
-- Required: true
-- Examples:
-    - "https://cloud.seafile.com/"
-        - Connect to cloud.seafile.com.
+ rclone ls remote:

--seafile-user
+ To copy a local directory to a Proton Drive directory called backup

-User name (usually email address).
+ rclone copy /home/source remote:backup

-Properties:
+ ### Modified time

-- Config: user
-- Env Var: RCLONE_SEAFILE_USER
-- Type: string
-- Required: true
+ Proton Drive Bridge does not support updating modification times yet.

--seafile-pass
+ ### Restricted filename characters

-Password.
+ Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), also left and
+ right spaces will be removed ([code reference](https://github.com/ProtonMail/WebClients/blob/b4eba99d241af4fdae06ff7138bd651a40ef5d3c/applications/drive/src/app/store/_links/validation.ts#L51))

-NB Input to this must be obscured - see rclone obscure.
+ ### Duplicated files

-Properties:
+ Proton Drive cannot have two files with exactly the same name and path. If a
+ conflict occurs, depending on the advanced config, the file might or might not
+ be overwritten.

-- Config: pass
-- Env Var: RCLONE_SEAFILE_PASS
-- Type: string
-- Required: false
+ ### [Mailbox password](https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password)

--seafile-2fa
+ Please set your mailbox password in the advanced config section.

-Two-factor authentication ('true' if the account has 2FA enabled).
+ ### Caching

-Properties:
+ The cache is currently built for the case when rclone is the only instance
+ performing operations on the mount point. The event system, which is the proton
+ API system that provides visibility of what has changed on the drive, is yet
+ to be implemented, so updates from other clients won’t be reflected in the
+ cache. Thus, if there are concurrent clients accessing the same mount point,
+ then we might have a problem with caching stale data.

-- Config: 2fa
-- Env Var: RCLONE_SEAFILE_2FA
-- Type: bool
-- Default: false

--seafile-library
+ ### Standard options

-Name of the library.
+ Here are the Standard options specific to protondrive (Proton Drive).

-Leave blank to access all non-encrypted libraries.
+ #### --protondrive-username

-Properties:
+ The username of your proton account

-- Config: library
-- Env Var: RCLONE_SEAFILE_LIBRARY
-- Type: string
-- Required: false
+ Properties:

--seafile-library-key
+ - Config: username
+ - Env Var: RCLONE_PROTONDRIVE_USERNAME
+ - Type: string
+ - Required: true

-Library password (for encrypted libraries only).
+ #### --protondrive-password

-Leave blank if you pass it through the command line.
+ The password of your proton account.

-NB Input to this must be obscured - see rclone obscure.
+ **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).

-Properties:
+ Properties:

-- Config: library_key
-- Env Var: RCLONE_SEAFILE_LIBRARY_KEY
-- Type: string
-- Required: false
+ - Config: password
+ - Env Var: RCLONE_PROTONDRIVE_PASSWORD
+ - Type: string
+ - Required: true

--seafile-auth-token
+ #### --protondrive-2fa

-Authentication token.
+ The 2FA code

-Properties:
+ The value can also be provided with --protondrive-2fa=000000

-- Config: auth_token
-- Env Var: RCLONE_SEAFILE_AUTH_TOKEN
-- Type: string
-- Required: false
+ The 2FA code of your proton drive account if the account is set up with
+ two-factor authentication

-Advanced options
+ Properties:

-Here are the Advanced options specific to seafile (seafile).
+ - Config: 2fa + - Env Var: RCLONE_PROTONDRIVE_2FA + - Type: string + - Required: false ---seafile-create-library + ### Advanced options -Should rclone create a library if it doesn't exist. + Here are the Advanced options specific to protondrive (Proton Drive). -Properties: + #### --protondrive-mailbox-password -- Config: create_library -- Env Var: RCLONE_SEAFILE_CREATE_LIBRARY -- Type: bool -- Default: false + The mailbox password of your two-password proton account. ---seafile-encoding + For more information regarding the mailbox password, please check the + following official knowledge base article: + https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password -The encoding for the backend. -See the encoding section in the overview for more info. + **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). -Properties: + Properties: -- Config: encoding -- Env Var: RCLONE_SEAFILE_ENCODING -- Type: MultiEncoder -- Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8 + - Config: mailbox_password + - Env Var: RCLONE_PROTONDRIVE_MAILBOX_PASSWORD + - Type: string + - Required: false -SFTP + #### --protondrive-client-uid -SFTP is the Secure (or SSH) File Transfer Protocol. + Client uid key (internal use only) -The SFTP backend can be used with a number of different providers: + Properties: -- Hetzner Storage Box -- rsync.net + - Config: client_uid + - Env Var: RCLONE_PROTONDRIVE_CLIENT_UID + - Type: string + - Required: false -SFTP runs over SSH v2 and is installed as standard with most modern SSH -installations. + #### --protondrive-client-access-token -Paths are specified as remote:path. If the path does not begin with a / -it is relative to the home directory of the user. An empty path remote: -refers to the user's home directory. For example, rclone lsd remote: -would list the home directory of the user configured in the rclone -remote config (i.e /home/sftpuser). However, rclone lsd remote:/ would -list the root directory for remote machine (i.e. /) + Client access token key (internal use only) -Note that some SFTP servers will need the leading / - Synology is a good -example of this. rsync.net and Hetzner, on the other hand, requires -users to OMIT the leading /. + Properties: -Note that by default rclone will try to execute shell commands on the -server, see shell access considerations. + - Config: client_access_token + - Env Var: RCLONE_PROTONDRIVE_CLIENT_ACCESS_TOKEN + - Type: string + - Required: false -Configuration + #### --protondrive-client-refresh-token -Here is an example of making an SFTP configuration. First run + Client refresh token key (internal use only) - rclone config + Properties: -This will guide you through an interactive setup process. + - Config: client_refresh_token + - Env Var: RCLONE_PROTONDRIVE_CLIENT_REFRESH_TOKEN + - Type: string + - Required: false - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n - name> remote - Type of storage to configure. - Choose a number from below, or type in your own value - [snip] - XX / SSH/SFTP - \ "sftp" - [snip] - Storage> sftp - SSH host to connect to - Choose a number from below, or type in your own value - 1 / Connect to example.com - \ "example.com" - host> example.com - SSH username - Enter a string value. Press Enter for the default ("$USER"). - user> sftpuser - SSH port number - Enter a signed integer. Press Enter for the default (22). 
-    port>
-    SSH password, leave blank to use ssh-agent.
-    y) Yes type in my own password
-    g) Generate random password
-    n) No leave this optional password blank
-    y/g/n> n
-    Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
-    key_file>
-    Remote config
-    --------------------
-    [remote]
-    host = example.com
-    user = sftpuser
-    port =
-    pass =
-    key_file =
-    --------------------
-    y) Yes this is OK
-    e) Edit this remote
-    d) Delete this remote
-    y/e/d> y

+   #### --protondrive-client-salted-key-pass

-This remote is called remote and can now be used like this:

+   Client salted key pass key (internal use only)

-See all directories in the home directory

+   Properties:

-    rclone lsd remote:

+   - Config: client_salted_key_pass
+   - Env Var: RCLONE_PROTONDRIVE_CLIENT_SALTED_KEY_PASS
+   - Type: string
+   - Required: false

-See all directories in the root directory

+   #### --protondrive-encoding

-    rclone lsd remote:/

+   The encoding for the backend.

-Make a new directory

+   See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.

-    rclone mkdir remote:path/to/directory

+   Properties:

-List the contents of a directory

+   - Config: encoding
+   - Env Var: RCLONE_PROTONDRIVE_ENCODING
+   - Type: MultiEncoder
+   - Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot

-    rclone ls remote:path/to/directory

+   #### --protondrive-original-file-size

-Sync /home/local/directory to the remote directory, deleting any excess
-files in the directory.

+   Return the file size before encryption
+
+   The size of the encrypted file will be different from (bigger than) the
+   original file size. Leave this option set to true unless there is a reason
+   to return the size after encryption, as features like Open(), which need to
+   be supplied with the original content size, will fail to operate properly.

-    rclone sync --interactive /home/local/directory remote:directory

+   Properties:

-Mount the remote path /srv/www-data/ to the local path /mnt/www-data

+   - Config: original_file_size
+   - Env Var: RCLONE_PROTONDRIVE_ORIGINAL_FILE_SIZE
+   - Type: bool
+   - Default: true
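+   For example, if you prefer listings to report the on-disk (encrypted)
+   sizes, the option can be disabled per command with the corresponding
+   flag (a sketch; most use cases should keep the default):

+       rclone ls remote: --protondrive-original-file-size=false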
-    rclone mount remote:/srv/www-data/ /mnt/www-data

+   #### --protondrive-app-version

-SSH Authentication

+   The app version string

-The SFTP remote supports three authentication methods:

+   The app version string indicates the client that is currently performing
+   the API request. This information is required and will be sent with every
+   API request.

-- Password
-- Key file, including certificate signed keys
-- ssh-agent

+   Properties:

-Key files should be PEM-encoded private key files. For instance
-/home/$USER/.ssh/id_rsa. Only unencrypted OpenSSH or PEM encrypted files
-are supported.

+   - Config: app_version
+   - Env Var: RCLONE_PROTONDRIVE_APP_VERSION
+   - Type: string
+   - Default: "macos-drive@1.0.0-alpha.1+rclone"

-The key file can be specified in either an external file (key_file) or
-contained within the rclone config file (key_pem). If using key_pem in
-the config file, the entry should be on a single line with new line ('\n'
-or '\r\n') separating lines. i.e.

+   #### --protondrive-replace-existing-draft

+   Create a new revision when filename conflict is detected

+   When a file upload is cancelled or failed before completion, a draft will be
+   created and the subsequent upload of the same file to the same location will be
+   reported as a conflict.

-This will generate it correctly for key_pem for use in the config:

+   The value can also be set by --protondrive-replace-existing-draft=true

-    awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa

+   If the option is set to true, the draft will be replaced and then the upload
+   operation will restart. If there are other clients also uploading to the same
+   file location at the same time, the behavior is currently unknown. It needs
+   to be set to true for the integration tests.
+   If the option is set to false, an error "a draft exist - usually this means a
+   file is being uploaded at another client, or, there was a failed upload attempt"
+   will be returned, and no upload will happen.

-If you don't specify pass, key_file, or key_pem or ask_password then
-rclone will attempt to contact an ssh-agent. You can also specify
-key_use_agent to force the usage of an ssh-agent. In this case key_file
-or key_pem can also be specified to force the usage of a specific key in
-the ssh-agent.
-
-Using an ssh-agent is the only way to load encrypted OpenSSH keys at the
-moment.

+   Properties:

-If you set the ask_password option, rclone will prompt for a password
-when needed and no password has been configured.

+   - Config: replace_existing_draft
+   - Env Var: RCLONE_PROTONDRIVE_REPLACE_EXISTING_DRAFT
+   - Type: bool
+   - Default: false
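+   For example, to restart an interrupted upload and let rclone replace the
+   stale draft (a sketch; the paths are placeholders):

+       rclone copy /home/source remote:backup --protondrive-replace-existing-draft=true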
-Certificate-signed keys

+   #### --protondrive-enable-caching

-With traditional key-based authentication, you configure your private
-key only, and the public key built into it will be used during the
-authentication process.

+   Caches the files and folders metadata to reduce API calls

-If you have a certificate you may use it to sign your public key,
-creating a separate SSH user certificate that should be used instead of
-the plain public key extracted from the private key. Then you must
-provide the path to the user certificate public key file in pubkey_file.

+   Notice: If you are mounting ProtonDrive as a VFS, please disable this feature,
+   as the current implementation doesn't update or clear the cache when there are
+   external changes.

-Note: This is not the traditional public key paired with your private
-key, typically saved as /home/$USER/.ssh/id_rsa.pub. Setting this path
-in pubkey_file will not work.

+   The files and folders on ProtonDrive are represented as links with keyrings,
+   which can be cached to improve performance and be friendly to the API server.

-Example:

+   The cache is currently built for the case where rclone is the only instance
+   performing operations on the mount point. The event system, which is the Proton
+   API system that provides visibility of what has changed on the drive, is yet
+   to be implemented, so updates from other clients won’t be reflected in the
+   cache. Thus, if there are concurrent clients accessing the same mount point,
+   stale data may end up being cached.

-    [remote]
-    type = sftp
-    host = example.com
-    user = sftpuser
-    key_file = ~/id_rsa
-    pubkey_file = ~/id_rsa-cert.pub

-If you concatenate a cert with a private key then you can specify the
-merged file in both places.

+   Properties:

-Note: the cert must come first in the file. e.g.

+   - Config: enable_caching
+   - Env Var: RCLONE_PROTONDRIVE_ENABLE_CACHING
+   - Type: bool
+   - Default: true

+   ## Limitations

+   This backend uses the
+   [Proton-API-Bridge](https://github.com/henrybear327/Proton-API-Bridge), which
+   is based on [go-proton-api](https://github.com/henrybear327/go-proton-api), a
+   fork of the [official repo](https://github.com/ProtonMail/go-proton-api).

+   There is no official API documentation available from Proton Drive. But thanks
+   to Proton open-sourcing [proton-go-api](https://github.com/ProtonMail/go-proton-api)
+   and the web, iOS, and Android client codebases, we don't need to completely
+   reverse engineer the APIs by observing the web client traffic!

+   [proton-go-api](https://github.com/ProtonMail/go-proton-api) provides the basic
+   building blocks of API calls and error handling, such as 429 exponential
+   back-off, but it is pretty much just a barebone interface to the Proton API.
+   For example, the encryption and decryption of Proton Drive files are not
+   provided in this library.

+   The Proton-API-Bridge attempts to bridge the gap, so rclone can be built on
+   top of this quickly. This codebase handles the intricate tasks before and after
+   calling Proton APIs, particularly the complex encryption scheme, allowing
+   developers to implement features for other software on top of this codebase.
+   There are likely quite a few errors in this library, as there isn't official
+   documentation available.

+   # put.io

+   Paths are specified as `remote:path`

+   put.io paths may be as deep as required, e.g.
+   `remote:directory/subdirectory`.

+   ## Configuration

+   The initial setup for put.io involves getting a token from put.io
+   which you need to do in your browser. `rclone config` walks you
+   through it.

+   Here is an example of how to make a remote called `remote`. First run:

+       rclone config

+   This will guide you through an interactive setup process:

+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> putio
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / Put.io
+   \ "putio"
+[snip]
+Storage> putio
+** See help for putio backend at: https://rclone.org/putio/ **
+
+Remote config
+Use web browser to automatically authenticate rclone with remote?
+ * Say Y if the machine running rclone has a web browser you can use
+ * Say N if running rclone on a (remote) machine without web browser access
+If not sure try Y. If Y failed, try N.
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[putio]
+type = putio
+token = {"access_token":"XXXXXXXX","expiry":"0001-01-01T00:00:00Z"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Current remotes:
+
+Name                 Type
+====                 ====
+putio                putio
+
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> q

+   See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
+   machine with no Internet browser available.
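+   On a headless machine the usual pattern is to run the authorize step on
+   a machine that does have a browser (a sketch of the documented remote
+   setup flow):

+       rclone authorize "putio"

+   and then paste the token block it prints into the config prompt on the
+   headless machine.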
+   Note that rclone runs a webserver on your local machine to collect the
+   token as returned from put.io if using web browser to automatically
+   authenticate. This only
+   runs from the moment it opens your browser to the moment you get back
+   the verification code. This is on `http://127.0.0.1:53682/` and
+   it may require you to unblock it temporarily if you are running a host
+   firewall, or use manual mode.

+   You can then use it like this,

+   List directories in top level of your put.io

+       rclone lsd remote:

+   List all the files in your put.io

+       rclone ls remote:

+   To copy a local directory to a put.io directory called backup

+       rclone copy /home/source remote:backup

+   ### Restricted filename characters

+   In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
+   the following characters are also replaced:

+   | Character | Value | Replacement |
+   | --------- |:-----:|:-----------:|
+   | \         | 0x5C  | ＼           |

+   Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+   as they can't be used in JSON strings.

+   ### Standard options

+   Here are the Standard options specific to putio (Put.io).

+   #### --putio-client-id

+   OAuth Client Id.

+   Leave blank normally.

+   Properties:

+   - Config: client_id
+   - Env Var: RCLONE_PUTIO_CLIENT_ID
+   - Type: string
+   - Required: false

+   #### --putio-client-secret

+   OAuth Client Secret.

+   Leave blank normally.

+   Properties:

+   - Config: client_secret
+   - Env Var: RCLONE_PUTIO_CLIENT_SECRET
+   - Type: string
+   - Required: false

+   ### Advanced options

+   Here are the Advanced options specific to putio (Put.io).

+   #### --putio-token

+   OAuth Access Token as a JSON blob.

+   Properties:

+   - Config: token
+   - Env Var: RCLONE_PUTIO_TOKEN
+   - Type: string
+   - Required: false

+   #### --putio-auth-url

+   Auth server URL.

+   Leave blank to use the provider defaults.

+   Properties:

+   - Config: auth_url
+   - Env Var: RCLONE_PUTIO_AUTH_URL
+   - Type: string
+   - Required: false

+   #### --putio-token-url

+   Token server URL.

+   Leave blank to use the provider defaults.

+   Properties:

+   - Config: token_url
+   - Env Var: RCLONE_PUTIO_TOKEN_URL
+   - Type: string
+   - Required: false

+   #### --putio-encoding

+   The encoding for the backend.

+   See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.

+   Properties:

+   - Config: encoding
+   - Env Var: RCLONE_PUTIO_ENCODING
+   - Type: MultiEncoder
+   - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot

+   ## Limitations

+   put.io has rate limiting. When you hit a limit, rclone automatically
+   retries after waiting the amount of time requested by the server.

+   If you want to avoid ever hitting these limits, you may use the
+   `--tpslimit` flag with a low number. Note that the imposed limits
+   may be different for different operations, and may change over time.

+   # Seafile

+   This is a backend for the [Seafile](https://www.seafile.com/) storage service:
+   - It works with both the free community edition and the professional edition.
+   - Seafile versions 6.x, 7.x, 8.x and 9.x are all supported.
+   - Encrypted libraries are also supported.
+   - It supports 2FA enabled users
+   - Using a Library API Token is **not** supported
+   ## Configuration

+   There are two distinct modes you can set up your remote in:
+   - you point your remote to the **root of the server**, meaning you don't specify a library during the configuration:
+     Paths are specified as `remote:library`. You may put subdirectories in too, e.g. `remote:library/path/to/dir`.
+   - you point your remote to a specific library during the configuration:
+     Paths are specified as `remote:path/to/dir`. **This is the recommended mode when using encrypted libraries**. (_This mode is possibly slightly faster than the root mode_)

+   ### Configuration in root mode

+   Here is an example of making a seafile configuration for a user with **no** two-factor authentication. First run

+       rclone config

+   This will guide you through an interactive setup process. To authenticate
+   you will need the URL of your server, your email (or username) and your password.

+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> seafile
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / Seafile
+   \ "seafile"
+[snip]
+Storage> seafile
+** See help for seafile backend at: https://rclone.org/seafile/ **
+
+URL of seafile host to connect to
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+ 1 / Connect to cloud.seafile.com
+   \ "https://cloud.seafile.com/"
+url> http://my.seafile.server/
+User name (usually email address)
+Enter a string value. Press Enter for the default ("").
+user> me@example.com
+Password
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank (default)
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Two-factor authentication ('true' if the account has 2FA enabled)
+Enter a boolean value (true or false). Press Enter for the default ("false").
+2fa> false
+Name of the library. Leave blank to access all non-encrypted libraries.
+Enter a string value. Press Enter for the default ("").
+library>
+Library password (for encrypted libraries only). Leave blank if you pass it through the command line.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank (default)
+y/g/n> n
+Edit advanced config? (y/n)
+y) Yes
+n) No (default)
+y/n> n
+Remote config
+Two-factor authentication is not enabled on this account.
+--------------------
+[seafile]
+type = seafile
+url = http://my.seafile.server/
+user = me@example.com
+pass = *** ENCRYPTED ***
+2fa = false
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y

+   This remote is called `seafile`. It's pointing to the root of your seafile server and can now be used like this:

+   See all libraries

+       rclone lsd seafile:

+   Create a new library

+       rclone mkdir seafile:library

+   List the contents of a library

+       rclone ls seafile:library

+   Sync `/home/local/directory` to the remote library, deleting any
+   excess files in the library.

+       rclone sync --interactive /home/local/directory seafile:library

+   ### Configuration in library mode

+   Here's an example of a configuration in library mode with a user that has two-factor authentication enabled. Your 2FA code will be asked for at the end of the configuration, and rclone will attempt to authenticate you:

+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> seafile
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / Seafile
+   \ "seafile"
+[snip]
+Storage> seafile
+** See help for seafile backend at: https://rclone.org/seafile/ **
+
+URL of seafile host to connect to
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+ 1 / Connect to cloud.seafile.com
+   \ "https://cloud.seafile.com/"
+url> http://my.seafile.server/
+User name (usually email address)
+Enter a string value. Press Enter for the default ("").
+user> me@example.com
+Password
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank (default)
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Two-factor authentication ('true' if the account has 2FA enabled)
+Enter a boolean value (true or false). Press Enter for the default ("false").
+2fa> true
+Name of the library. Leave blank to access all non-encrypted libraries.
+Enter a string value. Press Enter for the default ("").
+library> My Library
+Library password (for encrypted libraries only). Leave blank if you pass it through the command line.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank (default)
+y/g/n> n
+Edit advanced config? (y/n)
+y) Yes
+n) No (default)
+y/n> n
+Remote config
+Two-factor authentication: please enter your 2FA code
+2fa code> 123456
+Authenticating...
+Success!
+--------------------
+[seafile]
+type = seafile
+url = http://my.seafile.server/
+user = me@example.com
+pass =
+2fa = true
+library = My Library
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y

+   You'll notice your password is blank in the configuration. This is because the password is only needed once, to authenticate you.

+   You specified `My Library` during the configuration. The root of the remote is pointing at the
+   root of the library `My Library`:

+   See all files in the library:

+       rclone lsd seafile:

+   Create a new directory inside the library

+       rclone mkdir seafile:directory

+   List the contents of a directory

+       rclone ls seafile:directory

+   Sync `/home/local/directory` to the remote library, deleting any
+   excess files in the library.

+       rclone sync --interactive /home/local/directory seafile:

+   ### --fast-list

+   Seafile version 7+ supports `--fast-list` which allows you to use fewer
+   transactions in exchange for more memory. See the [rclone
+   docs](https://rclone.org/docs/#fast-list) for more details.
+   Please note this is not supported on seafile server version 6.x
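+   For example, a recursive size check of a library can take advantage of
+   it (a sketch; `library` is a placeholder name):

+       rclone size --fast-list seafile:library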
+   ### Restricted filename characters

+   In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
+   the following characters are also replaced:

+   | Character | Value | Replacement |
+   | --------- |:-----:|:-----------:|
+   | /         | 0x2F  | ／           |
+   | "         | 0x22  | ＂           |
+   | \         | 0x5C  | ＼           |

+   Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+   as they can't be used in JSON strings.

+   ### Seafile and rclone link

+   Rclone supports generating share links for non-encrypted libraries only.
+   They can either be for a file or a directory:

+       rclone link seafile:seafile-tutorial.doc
+       http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/

+   or if run on a directory you will get:

+       rclone link seafile:dir
+       http://my.seafile.server/d/9ea2455f6f55478bbb0d/

+   Please note a share link is unique for each file or directory. If you run a link command on a file/dir
+   that has already been shared, you will get the exact same link.

+   ### Compatibility

+   It has been actively developed using the [seafile docker image](https://github.com/haiwen/seafile-docker) of these versions:
+   - 6.3.4 community edition
+   - 7.0.5 community edition
+   - 7.1.3 community edition
+   - 9.0.10 community edition

+   Versions below 6.0 are not supported.
+   Versions between 6.0 and 6.3 haven't been tested and might not work properly.

+   Each new version of `rclone` is automatically tested against the [latest docker image](https://hub.docker.com/r/seafileltd/seafile-mc/) of the seafile community server.

+   ### Standard options

+   Here are the Standard options specific to seafile (seafile).

+   #### --seafile-url

+   URL of seafile host to connect to.

+   Properties:

+   - Config: url
+   - Env Var: RCLONE_SEAFILE_URL
+   - Type: string
+   - Required: true
+   - Examples:
+       - "https://cloud.seafile.com/"
+           - Connect to cloud.seafile.com.

+   #### --seafile-user

+   User name (usually email address).

+   Properties:

+   - Config: user
+   - Env Var: RCLONE_SEAFILE_USER
+   - Type: string
+   - Required: true

+   #### --seafile-pass

+   Password.

+   **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).

+   Properties:

+   - Config: pass
+   - Env Var: RCLONE_SEAFILE_PASS
+   - Type: string
+   - Required: false

+   #### --seafile-2fa

+   Two-factor authentication ('true' if the account has 2FA enabled).

+   Properties:

+   - Config: 2fa
+   - Env Var: RCLONE_SEAFILE_2FA
+   - Type: bool
+   - Default: false

+   #### --seafile-library

+   Name of the library.

+   Leave blank to access all non-encrypted libraries.

+   Properties:

+   - Config: library
+   - Env Var: RCLONE_SEAFILE_LIBRARY
+   - Type: string
+   - Required: false

+   #### --seafile-library-key

+   Library password (for encrypted libraries only).

+   Leave blank if you pass it through the command line.

+   **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).

+   Properties:

+   - Config: library_key
+   - Env Var: RCLONE_SEAFILE_LIBRARY_KEY
+   - Type: string
+   - Required: false

+   #### --seafile-auth-token

+   Authentication token.

+   Properties:

+   - Config: auth_token
+   - Env Var: RCLONE_SEAFILE_AUTH_TOKEN
+   - Type: string
+   - Required: false

+   ### Advanced options

+   Here are the Advanced options specific to seafile (seafile).

+   #### --seafile-create-library

+   Should rclone create a library if it doesn't exist.

+   Properties:

+   - Config: create_library
+   - Env Var: RCLONE_SEAFILE_CREATE_LIBRARY
+   - Type: bool
+   - Default: false

+   #### --seafile-encoding

+   The encoding for the backend.

+   See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.

+   Properties:

+   - Config: encoding
+   - Env Var: RCLONE_SEAFILE_ENCODING
+   - Type: MultiEncoder
+   - Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8

+   # SFTP

+   SFTP is the [Secure (or SSH) File Transfer
+   Protocol](https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol).
+   The SFTP backend can be used with a number of different providers:

+   - Hetzner Storage Box
+   - rsync.net

+   SFTP runs over SSH v2 and is installed as standard with most modern
+   SSH installations.

+   Paths are specified as `remote:path`. If the path does not begin with
+   a `/` it is relative to the home directory of the user. An empty path
+   `remote:` refers to the user's home directory. For example, `rclone lsd remote:`
+   would list the home directory of the user configured in the rclone remote config
+   (i.e. `/home/sftpuser`). However, `rclone lsd remote:/` would list the root
+   directory for the remote machine (i.e. `/`)

+   Note that some SFTP servers will need the leading / - Synology is a
+   good example of this. rsync.net and Hetzner, on the other hand, require users to
+   OMIT the leading /.

+   Note that by default rclone will try to execute shell commands on
+   the server, see [shell access considerations](#shell-access-considerations).

+   ## Configuration

+   Here is an example of making an SFTP configuration. First run

+       rclone config

+   This will guide you through an interactive setup process.

+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / SSH/SFTP
+   \ "sftp"
+[snip]
+Storage> sftp
+SSH host to connect to
+Choose a number from below, or type in your own value
+ 1 / Connect to example.com
+   \ "example.com"
+host> example.com
+SSH username
+Enter a string value. Press Enter for the default ("$USER").
+user> sftpuser
+SSH port number
+Enter a signed integer. Press Enter for the default (22).
+port>
+SSH password, leave blank to use ssh-agent.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank
+y/g/n> n
+Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+key_file>
+Remote config
+--------------------
+[remote]
+host = example.com
+user = sftpuser
+port =
+pass =
+key_file =
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y

+   This remote is called `remote` and can now be used like this:

+   See all directories in the home directory

+       rclone lsd remote:

+   See all directories in the root directory

+       rclone lsd remote:/

+   Make a new directory

+       rclone mkdir remote:path/to/directory

+   List the contents of a directory

+       rclone ls remote:path/to/directory

+   Sync `/home/local/directory` to the remote directory, deleting any
+   excess files in the directory.

+       rclone sync --interactive /home/local/directory remote:directory

+   Mount the remote path `/srv/www-data/` to the local path
+   `/mnt/www-data`

+       rclone mount remote:/srv/www-data/ /mnt/www-data

+   ### SSH Authentication

+   The SFTP remote supports three authentication methods:

+   * Password
+   * Key file, including certificate signed keys
+   * ssh-agent

+   Key files should be PEM-encoded private key files. For instance `/home/$USER/.ssh/id_rsa`.
+   Only unencrypted OpenSSH or PEM encrypted files are supported.

+   The key file can be specified in either an external file (key_file) or contained within the
+   rclone config file (key_pem). If using key_pem in the config file, the entry should be on a
+   single line with new line ('\n' or '\r\n') separating lines. i.e.

+       key_pem = -----BEGIN RSA PRIVATE KEY-----\nMaMbaIXtE\n0gAMbMbaSsd\nMbaass\n-----END RSA PRIVATE KEY-----

+   This will generate it correctly for key_pem for use in the config:

+       awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa

+   If you don't specify `pass`, `key_file`, or `key_pem` or `ask_password` then
+   rclone will attempt to contact an ssh-agent. You can also specify `key_use_agent`
+   to force the usage of an ssh-agent. In this case `key_file` or `key_pem` can
+   also be specified to force the usage of a specific key in the ssh-agent.

+   Using an ssh-agent is the only way to load encrypted OpenSSH keys at the moment.
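+   A typical agent session on Linux might look like this (a sketch;
+   adjust the key path to suit):

+       eval "$(ssh-agent)"
+       ssh-add ~/.ssh/id_rsa

+   `ssh-add` prompts for the key's passphrase once, after which rclone can
+   use the decrypted key via the agent.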
+   If you set the `ask_password` option, rclone will prompt for a password when
+   needed and no password has been configured.

+   #### Certificate-signed keys

+   With traditional key-based authentication, you configure your private key only,
+   and the public key built into it will be used during the authentication process.

+   If you have a certificate you may use it to sign your public key, creating a
+   separate SSH user certificate that should be used instead of the plain public key
+   extracted from the private key. Then you must provide the path to the
+   user certificate public key file in `pubkey_file`.

+   Note: This is not the traditional public key paired with your private key,
+   typically saved as `/home/$USER/.ssh/id_rsa.pub`. Setting this path in
+   `pubkey_file` will not work.

+   Example:

+       [remote]
+       type = sftp
+       host = example.com
+       user = sftpuser
+       key_file = ~/id_rsa
+       pubkey_file = ~/id_rsa-cert.pub

+   If you concatenate a cert with a private key then you can specify the
+   merged file in both places.

+   Note: the cert must come first in the file. e.g.

+   ```
+   cat id_rsa-cert.pub id_rsa > merged_key
+   ```

+   ### Host key validation

+   By default rclone will not check the server's host key for validation. This
+   can allow an attacker to replace a server with their own and if you use
+   password authentication then this can lead to that password being exposed.

+   Host key matching, using standard `known_hosts` files can be turned on by
+   enabling the `known_hosts_file` option. This can point to the file maintained
+   by `OpenSSH` or can point to a unique file.

+   e.g. using the OpenSSH `known_hosts` file:

+   ```
   [remote]
   type = sftp
   host = example.com
@@ -39704,6 +41975,41 @@ Properties:
- Type: bool
- Default: false

+--sftp-ssh
+
+Path and arguments to external ssh binary.
+
+Normally rclone will use its internal ssh library to connect to the SFTP
+server. However it does not implement all possible ssh options so it may
+be desirable to use an external ssh binary.
+
+Rclone ignores all the internal config if you use this option and
+expects you to configure the ssh binary with the user/host/port and any
+other options you need.
+
+Important The ssh command must log in without asking for a password so
+needs to be configured with keys or certificates.
+
+Rclone will run the command supplied either with the additional
+arguments "-s sftp" to access the SFTP subsystem or with commands such
+as "md5sum /path/to/file" appended to read checksums.
+
+Any arguments with spaces in them should be surrounded by "double quotes".
+
+An example setting might be:
+
+    ssh -o ServerAliveInterval=20 user@example.com
+
+Note that when using an external ssh binary rclone makes a new ssh
+connection for every hash it calculates.
+
+Properties:
+
+- Config: ssh
+- Env Var: RCLONE_SFTP_SSH
+- Type: SpaceSepList
+- Default:
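+In the config file this might look like the following (a sketch; the
+user and host are placeholders):
+
+    [remote]
+    type = sftp
+    ssh = ssh -o ServerAliveInterval=20 user@example.com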
Advanced options

Here are the Advanced options specific to sftp (SSH/SFTP).
@@ -39756,6 +42062,22 @@ E.g. if home directory can be found in a shared folder called "home":

    rclone sync /home/local/directory remote:/home/directory --sftp-path-override /volume1/homes/USER/directory

+To specify only the path to the SFTP remote's root, and allow rclone to
+add any relative subpaths automatically (including unwrapping/decrypting
+remotes as necessary), add the '@' character to the beginning of the
+path.
+
+E.g. the first example above could be rewritten as:
+
+    rclone sync /home/local/directory remote:/directory --sftp-path-override @/volume2
+
+Note that when using this method with Synology "home" folders, the full
+"/homes/USER" path should be specified instead of "/home".
+
+E.g. the second example above should be rewritten as:
+
+    rclone sync /home/local/directory remote:/homes/USER/directory --sftp-path-override @/volume1
+
 Properties:

 - Config: path_override
@@ -39850,6 +42172,15 @@ Specifies the path or command to run a sftp server on the remote host.

 The subsystem option is ignored when server_command is defined.

+If adding server_command to the configuration file please note that it
+should not be enclosed in quotes, since that will make rclone fail.
+
+A working example is:
+
+    [remote_name]
+    type = sftp
+    server_command = sudo /usr/libexec/openssh/sftp-server
+
 Properties:

 - Config: server_command
@@ -40083,6 +42414,23 @@ Properties:
 - Type: SpaceSepList
 - Default:

+--sftp-socks-proxy
+
+Socks 5 proxy host.
+
+Supports the format user:pass@host:port, user@host:port, host:port.
+
+Example:
+
+    myUser:myPass@localhost:9005
+
+Properties:
+
+- Config: socks_proxy
+- Env Var: RCLONE_SFTP_SOCKS_PROXY
+- Type: string
+- Required: false
+
 Limitations

 On some SFTP servers (e.g. Synology) the paths are different for SSH and
@@ -41221,27 +43569,29 @@ can however been seen in the uptobox web interface.

 Union

-The union remote provides a unification similar to UnionFS using other
-remotes.
-
-Paths may be as deep as required or a local path, e.g.
-remote:directory/subdirectory or /directory/subdirectory.
+The union backend joins several remotes together to make a single
+unified view of them.

 During the initial setup with rclone config you will specify the
 upstream remotes as a space separated list. The upstream remotes can
 either be local paths or other remotes.

-Attribute :ro and :nc can be attach to the end of path to tag the remote
-as read only or no create, e.g. remote:directory/subdirectory:ro or
-remote:directory/subdirectory:nc.
+The attributes :ro, :nc and :writeback can be attached to the end of the
+remote to tag the remote as read only, no create or writeback, e.g.
+remote:directory/subdirectory:ro or remote:directory/subdirectory:nc.
+
+- :ro means files will only be read from here and never written
+- :nc means new files or directories won't be created here
+- :writeback means files found in different remotes will be written
+  back here. See the writeback section for more info.
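+An upstreams setting mixing these tags might look like this (a sketch;
+the remote names and paths are placeholders):
+
+    upstreams = /local/files remote:backup archive:old:ro scratch:tmp:nc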
 Subfolders can be used in upstream remotes. Assume a union remote named
 backup with the remotes mydrive:private/backup. Invoking
 rclone mkdir backup:desktop is exactly the same as invoking
 rclone mkdir mydrive:private/backup/desktop.

-There will be no special handling of paths containing .. segments.
-Invoking rclone mkdir backup:../desktop is exactly the same as invoking
+There is no special handling of paths containing .. segments. Invoking
+rclone mkdir backup:../desktop is exactly the same as invoking
 rclone mkdir mydrive:private/backup/../desktop.

 Configuration
@@ -41463,6 +43813,34 @@ much larger latency of remote file systems.
   upstream.
   -----------------------------------------------------------------------

+Writeback
+
+The tag :writeback on an upstream remote can be used to make a simple
+cache system like this:
+
+    [union]
+    type = union
+    action_policy = all
+    create_policy = all
+    search_policy = ff
+    upstreams = /local:writeback remote:dir
+
+When files are opened for reading, if the file is in remote:dir but not
+/local then rclone will copy the file entirely into /local before
+returning a reference to the file in /local. The copy will be done with
+the equivalent of rclone copy so will use --multi-thread-streams if
+configured. Any copies will be logged with an INFO log.
+
+When files are written, they will be written to both remote:dir and
+/local.
+
+As many remotes as desired can be added to upstreams but there should
+only be one :writeback tag.
+
+Rclone does not manage the :writeback remote in any way other than
+writing files back to it. So if you need to expire old files or manage
+the size then you will have to do this yourself.
+
 Standard options

 Here are the Standard options specific to union (Union merges the
@@ -43068,6 +45446,219 @@ Options:

 Changelog

+v1.64.0 - 2023-09-11
+
+See commits
+
+- New backends
+    - Proton Drive (Chun-Hung Tseng)
+    - Quatrix (Oksana, Volodymyr Kit)
+    - New S3 providers
+        - Synology C2 (BakaWang)
+        - Leviia (Benjamin)
+    - New Jottacloud providers
+        - Onlime (Fjodor42)
+        - Telia Sky (NoLooseEnds)
+- Major changes
+    - Multi-thread transfers (Vitor Gomes, Nick Craig-Wood, Manoj
+      Ghosh, Edwin Mackenzie-Owen)
+        - Multi-thread transfers are now available when transferring
+          to:
+            - local, s3, azureblob, b2, oracleobjectstorage and smb
+        - This greatly improves transfer speed between two network
+          sources.
+        - In memory buffering has been unified between all backends
+          and should share memory better.
+ - See --multi-thread docs for more info +- New commands + - rclone config redacted support mechanism for showing redacted + config (Nick Craig-Wood) +- New Features + - accounting + - Show server side stats in own lines and not as bytes + transferred (Nick Craig-Wood) + - bisync + - Add new --ignore-listing-checksum flag to distinguish from + --ignore-checksum (nielash) + - Add experimental --resilient mode to allow recovery from + self-correctable errors (nielash) + - Add support for --create-empty-src-dirs (nielash) + - Dry runs no longer commit filter changes (nielash) + - Enforce --check-access during --resync (nielash) + - Apply filters correctly during deletes (nielash) + - Equality check before renaming (leave identical files alone) + (nielash) + - Fix dryRun rc parameter being ignored (nielash) + - build + - Update to go1.21 and make go1.19 the minimum required + version (Anagh Kumar Baranwal, Nick Craig-Wood) + - Update dependencies (Nick Craig-Wood) + - Add snap installation (hideo aoyama) + - Change Winget Releaser job to ubuntu-latest (sitiom) + - cmd: Refactor and use sysdnotify in more commands (eNV25) + - config: Add --multi-thread-chunk-size flag (Vitor Gomes) + - doc updates (antoinetran, Benjamin, Bjørn Smith, Dean Attali, + gabriel-suela, James Braza, Justin Hellings, kapitainsky, Mahad, + Masamune3210, Nick Craig-Wood, Nihaal Sangha, Niklas Hambüchen, + Raymond Berger, r-ricci, Sawada Tsunayoshi, Tiago Boeing, + Vladislav Vorobev) + - fs + - Use atomic types everywhere (Roberto Ricci) + - When --max-transfer limit is reached exit with code (10) + (kapitainsky) + - Add rclone completion powershell - basic implementation only + (Nick Craig-Wood) + - http servers: Allow CORS to be set with --allow-origin flag + (yuudi) + - lib/rest: Remove unnecessary nil check (Eng Zer Jun) + - ncdu: Add keybinding to rescan filesystem (eNV25) + - rc + - Add executeId to job listings (yuudi) + - Add core/du to measure local disk usage (Nick Craig-Wood) + - Add operations/settier to API (Drew Stinnett) + - rclone test info: Add --check-base32768 flag to check can store + all base32768 characters (Nick Craig-Wood) + - rmdirs: Remove directories concurrently controlled by --checkers + (Nick Craig-Wood) +- Bug Fixes + - accounting: Don't stop calculating average transfer speed until + the operation is complete (Jacob Hands) + - fs: Fix transferTime not being set in JSON logs (Jacob Hands) + - fshttp: Fix --bind 0.0.0.0 allowing IPv6 and --bind ::0 allowing + IPv4 (Nick Craig-Wood) + - operations: Fix overlapping check on case insensitive file + systems (Nick Craig-Wood) + - serve dlna: Fix MIME type if backend can't identify it (Nick + Craig-Wood) + - serve ftp: Fix race condition when using the auth proxy (Nick + Craig-Wood) + - serve sftp: Fix hash calculations with --vfs-cache-mode full + (Nick Craig-Wood) + - serve webdav: Fix error: Expecting fs.Object or fs.Directory, + got nil (Nick Craig-Wood) + - sync: Fix lockup with --cutoff-mode=soft and --max-duration + (Nick Craig-Wood) +- Mount + - fix: Mount parsing for linux (Anagh Kumar Baranwal) +- VFS + - Add --vfs-cache-min-free-space to control minimum free space on + the disk containing the cache (Nick Craig-Wood) + - Added cache cleaner for directories to reduce memory usage + (Anagh Kumar Baranwal) + - Update parent directory modtimes on vfs actions (David Pedersen) + - Keep virtual directory status accurate and reduce deadlock + potential (Anagh Kumar Baranwal) + - Make sure struct field is aligned for atomic access (Roberto + 
 Ricci)
+- Local
+    - Rmdir: return an error if the path is not a dir (zjx20)
+- Azure Blob
+    - Implement OpenChunkWriter and multi-thread uploads (Nick
+      Craig-Wood)
+    - Fix creation of directory markers (Nick Craig-Wood)
+    - Fix purging with directory markers (Nick Craig-Wood)
+- B2
+    - Implement OpenChunkWriter and multi-thread uploads (Nick
+      Craig-Wood)
+    - Fix rclone link when object path contains special characters
+      (Alishan Ladhani)
+- Box
+    - Add polling support (David Sze)
+    - Add --box-impersonate to impersonate a user ID (Nick Craig-Wood)
+    - Fix unhelpful decoding of error messages into decimal numbers
+      (Nick Craig-Wood)
+- Chunker
+    - Update documentation to mention issue with small files (Ricardo
+      D'O. Albanus)
+- Compress
+    - Fix ChangeNotify (Nick Craig-Wood)
+- Drive
+    - Add --drive-fast-list-bug-fix to control ListR bug workaround
+      (Nick Craig-Wood)
+- Fichier
+    - Implement DirMove (Nick Craig-Wood)
+    - Fix error code parsing (alexia)
+- FTP
+    - Add socks_proxy support for SOCKS5 proxies (Zach)
+    - Fix 425 "TLS session of data connection not resumed" errors
+      (Nick Craig-Wood)
+- Hdfs
+    - Retry "replication in progress" errors when uploading (Nick
+      Craig-Wood)
+    - Fix uploading to the wrong object on Update with overridden
+      remote name (Nick Craig-Wood)
+- HTTP
+    - CORS should not be sent if not set (yuudi)
+    - Fix webdav OPTIONS response (yuudi)
+- Opendrive
+    - Fix List on a just deleted and remade directory (Nick
+      Craig-Wood)
+- Oracleobjectstorage
+    - Use rclone's rate limiter in multipart transfers (Manoj Ghosh)
+    - Implement OpenChunkWriter and multi-thread uploads (Manoj Ghosh)
+- S3
+    - Refactor multipart upload to use OpenChunkWriter and ChunkWriter
+      (Vitor Gomes)
+    - Factor generic multipart upload into lib/multipart (Nick
+      Craig-Wood)
+    - Fix purging of root directory with --s3-directory-markers (Nick
+      Craig-Wood)
+    - Add rclone backend set command to update the running config
+      (Nick Craig-Wood)
+    - Add rclone backend restore-status command (Nick Craig-Wood)
+- SFTP
+    - Stop uploads re-using the same ssh connection to improve
+      performance (Nick Craig-Wood)
+    - Add --sftp-ssh to specify an external ssh binary to use (Nick
+      Craig-Wood)
+    - Add socks_proxy support for SOCKS5 proxies (Zach)
+    - Support dynamic --sftp-path-override (nielash)
+    - Fix spurious warning when using --sftp-ssh (Nick Craig-Wood)
+- Smb
+    - Implement multi-threaded writes for copies to smb (Edwin
+      Mackenzie-Owen)
+- Storj
+    - Performance improvement for large file uploads (Kaloyan Raev)
+- Swift
+    - Fix HEADing 0-length objects when --swift-no-large-objects set
+      (Julian Lepinski)
+- Union
+    - Add :writeback to act as a simple cache (Nick Craig-Wood)
+- WebDAV
+    - Nextcloud: fix segment violation in low-level retry (Paul)
+- Zoho
+    - Remove Range requests workarounds to fix integration tests (Nick
+      Craig-Wood)
+
+v1.63.1 - 2023-07-17
+
+See commits
+
+- Bug Fixes
+    - build: Fix macos builds for versions < 12 (Anagh Kumar Baranwal)
+    - dirtree: Fix performance with large directories of directories
+      and --fast-list (Nick Craig-Wood)
+    - operations
+        - Fix deadlock when using lsd/ls with --progress (Nick
+          Craig-Wood)
+        - Fix .rclonelink files not being converted back to symlinks
+          (Nick Craig-Wood)
+    - doc fixes (Dean Attali, Mahad, Nick Craig-Wood, Sawada
+      Tsunayoshi, Vladislav Vorobev)
+- Local
+    - Fix partial directory read for corrupted filesystem (Nick
+      Craig-Wood)
+- Box
+    - Fix reconnect failing with HTTP 400 Bad Request (albertony)
+- Smb
+    - Fix "Statfs failed: bucket or
container name is needed" when + mounting (Nick Craig-Wood) +- WebDAV + - Nextcloud: fix must use /dav/files/USER endpoint not /webdav + error (Paul) + - Nextcloud chunking: add more guidance for the user to check the + config (darix) + v1.63.0 - 2023-06-30 See commits @@ -49952,7 +52543,6 @@ email addresses removed from here need to be added to bin/.ignore-emails to make - Chris Nelson stuff@cjnaz.com - Felix Bünemann felix.buenemann@gmail.com - Atílio Antônio atiliodadalto@hotmail.com -- Roberto Ricci ricci@disroot.org - Carlo Mion mion00@gmail.com - Chris Lu chris.lu@gmail.com - Vitor Arruda vitor.pimenta.arruda@gmail.com @@ -50151,6 +52741,42 @@ email addresses removed from here need to be added to bin/.ignore-emails to make - Peter Fern github@0xc0dedbad.com - zzq i@zhangzqs.cn - mac-15 usman.ilamdin@phpstudios.com +- Sawada Tsunayoshi 34431649+TsunayoshiSawada@users.noreply.github.com +- Dean Attali daattali@gmail.com +- Fjodor42 molgaard@gmail.com +- BakaWang wa11579@hotmail.com +- Mahad 56235065+Mahad-lab@users.noreply.github.com +- Vladislav Vorobev x.miere@gmail.com +- darix darix@users.noreply.github.com +- Benjamin 36415086+bbenjamin-sys@users.noreply.github.com +- Chun-Hung Tseng henrybear327@users.noreply.github.com +- Ricardo D'O. Albanus rdalbanus@users.noreply.github.com +- gabriel-suela gscsuela@gmail.com +- Tiago Boeing contato@tiagoboeing.com +- Edwin Mackenzie-Owen edwin.mowen@gmail.com +- Niklas Hambüchen mail@nh2.me +- yuudi yuudi@users.noreply.github.com +- Zach github@prozach.org +- nielash 31582349+nielash@users.noreply.github.com +- Julian Lepinski lepinsk@users.noreply.github.com +- Raymond Berger RayBB@users.noreply.github.com +- Nihaal Sangha nihaal.git@gmail.com +- Masamune3210 1053504+Masamune3210@users.noreply.github.com +- James Braza jamesbraza@gmail.com +- antoinetran antoinetran@users.noreply.github.com +- alexia me@alexia.lol +- nielash nielronash@gmail.com +- Vitor Gomes vitor.gomes@delivion.de mail@vitorgomes.com +- Jacob Hands jacob@gogit.io +- hideo aoyama 100831251+boukendesho@users.noreply.github.com +- Roberto Ricci io@r-ricci.it +- Bjørn Smith bjornsmith@gmail.com +- Alishan Ladhani 8869764+aladh@users.noreply.github.com +- zjx20 zhoujianxiong2@gmail.com +- Oksana 142890647+oks-maytech@users.noreply.github.com +- Volodymyr Kit v.kit@maytech.net +- David Pedersen limero@me.com +- Drew Stinnett drew@drewlink.com Contact the rclone project @@ -50160,6 +52786,13 @@ Forum for questions and general discussion: - https://forum.rclone.org +Business support + +For business support or sponsorship enquiries please see: + +- https://rclone.com/ +- sponsorship@rclone.com + GitHub repository The project's repository is located at: @@ -50170,12 +52803,16 @@ There you can file bug reports or contribute with pull requests. Twitter -You can also follow me on twitter for rclone announcements: +You can also follow Nick on twitter for rclone announcements: - [@njcw](https://twitter.com/njcw) Email Or if all else fails or you want to ask something private or -confidential email Nick Craig-Wood. Please don't email me requests for -help - those are better directed to the forum. Thanks! +confidential + +- info@rclone.com + +Please don't email requests for help to this address - those are better +directed to the forum unless you'd like to sign up for business support. 
diff --git a/bin/make_manual.py b/bin/make_manual.py
index 87b36e40e..19bb88cc0 100755
--- a/bin/make_manual.py
+++ b/bin/make_manual.py
@@ -25,6 +25,7 @@ docs = [
 "flags.md",
 "docker.md",
 "bisync.md",
+ "release_signing.md",
 # Keep these alphabetical by full name
 "fichier.md",
diff --git a/docs/content/azureblob.md b/docs/content/azureblob.md
index bf545a9cc..f033c24f4 100644
--- a/docs/content/azureblob.md
+++ b/docs/content/azureblob.md
@@ -737,10 +737,7 @@ Properties:
 #### --azureblob-memory-pool-flush-time
-How often internal memory buffer pools will be flushed.
-
-Uploads which requires additional buffers (f.e multipart) will use memory pool for allocations.
-This option controls how often unused buffers will be removed from the pool.
+How often internal memory buffer pools will be flushed. (no longer used)
 Properties:
@@ -751,7 +748,7 @@ Properties:
 #### --azureblob-memory-pool-use-mmap
-Whether to use mmap buffers in internal memory pool.
+Whether to use mmap buffers in internal memory pool. (no longer used)
 Properties:
diff --git a/docs/content/b2.md b/docs/content/b2.md
index 9d2bd5acb..c2a8d2e4d 100644
--- a/docs/content/b2.md
+++ b/docs/content/b2.md
@@ -492,6 +492,24 @@ Properties:
 - Type: SizeSuffix
 - Default: 96Mi
+#### --b2-upload-concurrency
+
+Concurrency for multipart uploads.
+
+This is the number of chunks of the same file that are uploaded
+concurrently.
+
+Note that chunks are stored in memory and there may be up to
+"--transfers" * "--b2-upload-concurrency" chunks stored at once
+in memory.
+
+Properties:
+
+- Config: upload_concurrency
+- Env Var: RCLONE_B2_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 16
+
 #### --b2-disable-checksum
 Disable checksums for large (> upload cutoff) files.
@@ -550,9 +568,7 @@ Properties:
 #### --b2-memory-pool-flush-time
-How often internal memory buffer pools will be flushed.
-Uploads which requires additional buffers (f.e multipart) will use memory pool for allocations.
-This option controls how often unused buffers will be removed from the pool.
+How often internal memory buffer pools will be flushed. (no longer used)
 Properties:
@@ -563,7 +579,7 @@ Properties:
 #### --b2-memory-pool-use-mmap
-Whether to use mmap buffers in internal memory pool.
+Whether to use mmap buffers in internal memory pool. (no longer used)
 Properties:
diff --git a/docs/content/box.md b/docs/content/box.md
index 3133aa19c..8c4123d14 100644
--- a/docs/content/box.md
+++ b/docs/content/box.md
@@ -438,6 +438,28 @@ Properties:
 - Type: string
 - Required: false
+#### --box-impersonate
+
+Impersonate this user ID when using a service account.
+
+Setting this flag allows rclone, when using a JWT service account, to
+act on behalf of another user by setting the as-user header.
+
+The user ID is the Box identifier for a user. User IDs can be found for
+any user via the GET /users endpoint, which is only available to
+admins, or by calling the GET /users/me endpoint with an authenticated
+user session.
+
+See: https://developer.box.com/guides/authentication/jwt/as-user/
+
+
+Properties:
+
+- Config: impersonate
+- Env Var: RCLONE_BOX_IMPERSONATE
+- Type: string
+- Required: false
+
 #### --box-encoding
 The encoding for the backend. 
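The new `--box-impersonate` option documented above can be exercised from the command line or via its environment variable. A hypothetical invocation (the remote name `box:` and the user ID `12345` are placeholders; per the description above, impersonation requires a JWT service account, and looking up user IDs needs admin access):

```
# List top-level directories while acting as Box user 12345; rclone
# sends the as-user header. RCLONE_BOX_IMPERSONATE=12345 is equivalent.
rclone lsd box: --box-impersonate 12345
```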
diff --git a/docs/content/changelog.md b/docs/content/changelog.md index a55fc1fbf..8e23f3412 100644 --- a/docs/content/changelog.md +++ b/docs/content/changelog.md @@ -5,6 +5,140 @@ description: "Rclone Changelog" # Changelog +## v1.64.0 - 2023-09-11 + +[See commits](https://github.com/rclone/rclone/compare/v1.63.0...v1.64.0) + +* New backends + * [Proton Drive](/protondrive/) (Chun-Hung Tseng) + * [Quatrix](/quatrix/) (Oksana, Volodymyr Kit) + * New S3 providers + * [Synology C2](/s3/#synology-c2) (BakaWang) + * [Leviia](/s3/#leviia) (Benjamin) + * New Jottacloud providers + * [Onlime](/jottacloud/) (Fjodor42) + * [Telia Sky](/jottacloud/) (NoLooseEnds) +* Major changes + * Multi-thread transfers (Vitor Gomes, Nick Craig-Wood, Manoj Ghosh, Edwin Mackenzie-Owen) + * Multi-thread transfers are now available when transferring to: + * `local`, `s3`, `azureblob`, `b2`, `oracleobjectstorage` and `smb` + * This greatly improves transfer speed between two network sources. + * In memory buffering has been unified between all backends and should share memory better. + * See [--multi-thread docs](/docs/#multi-thread-cutoff) for more info +* New commands + * `rclone config redacted` support mechanism for showing redacted config (Nick Craig-Wood) +* New Features + * accounting + * Show server side stats in own lines and not as bytes transferred (Nick Craig-Wood) + * bisync + * Add new `--ignore-listing-checksum` flag to distinguish from `--ignore-checksum` (nielash) + * Add experimental `--resilient` mode to allow recovery from self-correctable errors (nielash) + * Add support for `--create-empty-src-dirs` (nielash) + * Dry runs no longer commit filter changes (nielash) + * Enforce `--check-access` during `--resync` (nielash) + * Apply filters correctly during deletes (nielash) + * Equality check before renaming (leave identical files alone) (nielash) + * Fix `dryRun` rc parameter being ignored (nielash) + * build + * Update to `go1.21` and make `go1.19` the minimum required version (Anagh Kumar Baranwal, Nick Craig-Wood) + * Update dependencies (Nick Craig-Wood) + * Add snap installation (hideo aoyama) + * Change Winget Releaser job to `ubuntu-latest` (sitiom) + * cmd: Refactor and use sysdnotify in more commands (eNV25) + * config: Add `--multi-thread-chunk-size` flag (Vitor Gomes) + * doc updates (antoinetran, Benjamin, Bjørn Smith, Dean Attali, gabriel-suela, James Braza, Justin Hellings, kapitainsky, Mahad, Masamune3210, Nick Craig-Wood, Nihaal Sangha, Niklas Hambüchen, Raymond Berger, r-ricci, Sawada Tsunayoshi, Tiago Boeing, Vladislav Vorobev) + * fs + * Use atomic types everywhere (Roberto Ricci) + * When `--max-transfer` limit is reached exit with code (10) (kapitainsky) + * Add rclone completion powershell - basic implementation only (Nick Craig-Wood) + * http servers: Allow CORS to be set with `--allow-origin` flag (yuudi) + * lib/rest: Remove unnecessary `nil` check (Eng Zer Jun) + * ncdu: Add keybinding to rescan filesystem (eNV25) + * rc + * Add `executeId` to job listings (yuudi) + * Add `core/du` to measure local disk usage (Nick Craig-Wood) + * Add `operations/settier` to API (Drew Stinnett) + * rclone test info: Add `--check-base32768` flag to check can store all base32768 characters (Nick Craig-Wood) + * rmdirs: Remove directories concurrently controlled by `--checkers` (Nick Craig-Wood) +* Bug Fixes + * accounting: Don't stop calculating average transfer speed until the operation is complete (Jacob Hands) + * fs: Fix `transferTime` not being set in JSON logs (Jacob Hands) + * 
fshttp: Fix `--bind 0.0.0.0` allowing IPv6 and `--bind ::0` allowing IPv4 (Nick Craig-Wood)
+ * operations: Fix overlapping check on case insensitive file systems (Nick Craig-Wood)
+ * serve dlna: Fix MIME type if backend can't identify it (Nick Craig-Wood)
+ * serve ftp: Fix race condition when using the auth proxy (Nick Craig-Wood)
+ * serve sftp: Fix hash calculations with `--vfs-cache-mode full` (Nick Craig-Wood)
+ * serve webdav: Fix error: Expecting fs.Object or fs.Directory, got `nil` (Nick Craig-Wood)
+ * sync: Fix lockup with `--cutoff-mode=soft` and `--max-duration` (Nick Craig-Wood)
+* Mount
+ * fix: Mount parsing for linux (Anagh Kumar Baranwal)
+* VFS
+ * Add `--vfs-cache-min-free-space` to control minimum free space on the disk containing the cache (Nick Craig-Wood)
+ * Added cache cleaner for directories to reduce memory usage (Anagh Kumar Baranwal)
+ * Update parent directory modtimes on vfs actions (David Pedersen)
+ * Keep virtual directory status accurate and reduce deadlock potential (Anagh Kumar Baranwal)
+ * Make sure struct field is aligned for atomic access (Roberto Ricci)
+* Local
+ * Rmdir returns an error if the path is not a dir (zjx20)
+* Azure Blob
+ * Implement `OpenChunkWriter` and multi-thread uploads (Nick Craig-Wood)
+ * Fix creation of directory markers (Nick Craig-Wood)
+ * Fix purging with directory markers (Nick Craig-Wood)
+* B2
+ * Implement `OpenChunkWriter` and multi-thread uploads (Nick Craig-Wood)
+ * Fix rclone link when object path contains special characters (Alishan Ladhani)
+* Box
+ * Add polling support (David Sze)
+ * Add `--box-impersonate` to impersonate a user ID (Nick Craig-Wood)
+ * Fix unhelpful decoding of error messages into decimal numbers (Nick Craig-Wood)
+* Chunker
+ * Update documentation to mention issue with small files (Ricardo D'O. Albanus)
+* Compress
+ * Fix ChangeNotify (Nick Craig-Wood)
+* Drive
+ * Add `--drive-fast-list-bug-fix` to control ListR bug workaround (Nick Craig-Wood)
+* Fichier
+ * Implement `DirMove` (Nick Craig-Wood)
+ * Fix error code parsing (alexia)
+* FTP
+ * Add socks_proxy support for SOCKS5 proxies (Zach)
+ * Fix 425 "TLS session of data connection not resumed" errors (Nick Craig-Wood)
+* Hdfs
+ * Retry "replication in progress" errors when uploading (Nick Craig-Wood)
+ * Fix uploading to the wrong object on Update with overridden remote name (Nick Craig-Wood)
+* HTTP
+ * CORS should not be sent if not set (yuudi)
+ * Fix webdav OPTIONS response (yuudi)
+* Opendrive
+ * Fix List on a just deleted and remade directory (Nick Craig-Wood)
+* Oracleobjectstorage
+ * Use rclone's rate limiter in multipart transfers (Manoj Ghosh)
+ * Implement `OpenChunkWriter` and multi-thread uploads (Manoj Ghosh)
+* S3
+ * Refactor multipart upload to use `OpenChunkWriter` and `ChunkWriter` (Vitor Gomes)
+ * Factor generic multipart upload into `lib/multipart` (Nick Craig-Wood)
+ * Fix purging of root directory with `--s3-directory-markers` (Nick Craig-Wood)
+ * Add `rclone backend set` command to update the running config (Nick Craig-Wood)
+ * Add `rclone backend restore-status` command (Nick Craig-Wood)
+* SFTP
+ * Stop uploads re-using the same ssh connection to improve performance (Nick Craig-Wood)
+ * Add `--sftp-ssh` to specify an external ssh binary to use (Nick Craig-Wood)
+ * Add socks_proxy support for SOCKS5 proxies (Zach)
+ * Support dynamic `--sftp-path-override` (nielash)
+ * Fix spurious warning when using `--sftp-ssh` (Nick Craig-Wood)
+* Smb
+ * Implement multi-threaded writes for copies to smb (Edwin Mackenzie-Owen)
+* Storj
+ * Performance improvement for large file uploads (Kaloyan Raev)
+* Swift
+ * Fix HEADing 0-length objects when `--swift-no-large-objects` set (Julian Lepinski)
+* Union
+ * Add `:writeback` to act as a simple cache (Nick Craig-Wood)
+* WebDAV
+ * Nextcloud: fix segment violation in low-level retry (Paul)
+* Zoho
+ * Remove Range requests workarounds to fix integration tests (Nick Craig-Wood)
+
 ## v1.63.1 - 2023-07-17
 [See commits](https://github.com/rclone/rclone/compare/v1.63.0...v1.63.1)
diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md
index 735bff7a3..b9f0f7be4 100644
--- a/docs/content/commands/rclone.md
+++ b/docs/content/commands/rclone.md
@@ -54,8 +54,6 @@ rclone [flags]
 --azureblob-env-auth Read credentials from runtime (environment variables, CLI or MSI)
 --azureblob-key string Storage Account Shared Key
 --azureblob-list-chunk int Size of blob list (default 5000)
- --azureblob-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
- --azureblob-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
 --azureblob-msi-client-id string Object ID of the user-assigned MSI to use, if any
 --azureblob-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any
 --azureblob-msi-object-id string Object ID of the user-assigned MSI to use, if any
@@ -81,9 +79,8 @@ rclone [flags]
 --b2-endpoint string Endpoint for the service
 --b2-hard-delete Permanently delete files on remote removal, otherwise hide files
 --b2-key string Application Key
- --b2-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
- --b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
 --b2-test-mode string A flag string for 
X-Bz-Test-Mode header for debugging + --b2-upload-concurrency int Concurrency for multipart uploads (default 16) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --b2-version-at Time Show file versions as they were at the specified time (default off) --b2-versions Include old versions in directory listings @@ -97,6 +94,7 @@ rclone [flags] --box-client-secret string OAuth Client Secret --box-commit-retries int Max number of times to try committing a multipart file (default 100) --box-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot) + --box-impersonate string Impersonate this user ID when using a service account --box-list-chunk int Size of listing chunk 1-1000 (default 1000) --box-owned-by string Only show items owned by the login (email address) passed in --box-root-folder-id string Fill in for rclone to use a non root folder as its starting point @@ -130,7 +128,7 @@ rclone [flags] --cache-writes Cache file data on writes through the FS --check-first Do all the checks before starting transfers --checkers int Number of checkers to run in parallel (default 8) - -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). --chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks (default 2Gi) --chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks --chunker-hash-type string Choose how chunker handles hash sums (default "md5") @@ -181,6 +179,7 @@ rclone [flags] --drive-encoding MultiEncoder The encoding for the backend (default InvalidUtf8) --drive-env-auth Get IAM credentials from runtime (environment variables or instance meta data if no env vars) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs (default "docx,xlsx,pptx,svg") + --drive-fast-list-bug-fix Work around a bug in Google Drive listing (default true) --drive-formats string Deprecated: See export_formats --drive-impersonate string Impersonate this user when using a service account --drive-import-formats string Comma separated list of preferred formats for uploading Google docs @@ -434,8 +433,9 @@ rclone [flags] --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) --modify-window Duration Max time diff to be considered the same (default 1ns) - --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 250Mi) - --multi-thread-streams int Max number of streams to use for multi-thread downloads (default 4) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --netstorage-account string Set the NetStorage account name --netstorage-host string Domain+path of NetStorage host to connect to @@ -470,6 +470,7 @@ rclone [flags] --onedrive-server-side-across-configs Deprecated: use --server-side-across-configs instead --onedrive-token string OAuth Access 
Token as a JSON blob --onedrive-token-url string Token server url + --oos-attempt-resume-upload If true attempt to resume previously started multipart upload for the object --oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi) --oos-compartment string Object storage compartment OCID --oos-config-file string Path to OCI config file (default "~/.oci/config") @@ -479,7 +480,8 @@ rclone [flags] --oos-disable-checksum Don't store MD5 checksum with object metadata --oos-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot) --oos-endpoint string Endpoint for Object storage API - --oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery + --oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery + --oos-max-upload-parts int Maximum number of parts in a multipart upload (default 10000) --oos-namespace string Object storage namespace --oos-no-check-bucket If set, don't attempt to check the bucket exists or create it --oos-provider string Choose your Auth Provider (default "env_auth") @@ -532,10 +534,11 @@ rclone [flags] --protondrive-app-version string The app version string (default "macos-drive@1.0.0-alpha.1+rclone") --protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true) --protondrive-encoding MultiEncoder The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot) + --protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured) --protondrive-original-file-size Return the file size before encryption (default true) - --protondrive-password string The password of your proton drive account (obscured) + --protondrive-password string The password of your proton account (obscured) --protondrive-replace-existing-draft Create a new revision when filename conflict is detected - --protondrive-username string The username of your proton drive account + --protondrive-username string The username of your proton account --putio-auth-url string Auth server URL --putio-client-id string OAuth Client Id --putio-client-secret string OAuth Client Secret @@ -552,6 +555,13 @@ rclone [flags] --qingstor-upload-concurrency int Concurrency for multipart uploads (default 1) --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --qingstor-zone string Zone to connect to + --quatrix-api-key string API key for accessing Quatrix account + --quatrix-effective-upload-time string Wanted upload time for one chunk (default "4s") + --quatrix-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) + --quatrix-hard-delete Delete files permanently rather than putting them into the trash + --quatrix-host string Host name of Quatrix account + --quatrix-maximal-summary-chunk-size SizeSuffix The maximal summary for all chunks. 
It should not be less than 'transfers'*'minimal_chunk_size' (default 95.367Mi) + --quatrix-minimal-chunk-size SizeSuffix The minimal size for one chunk (default 9.537Mi) -q, --quiet Print as little stuff as possible --rc Enable the remote control server --rc-addr stringArray IPaddress:Port or :Port to bind server to (default [localhost:5572]) @@ -604,8 +614,6 @@ rclone [flags] --s3-list-version int Version of ListObjects to use: 1,2 or 0 for auto --s3-location-constraint string Location constraint - must be set to match the Region --s3-max-upload-parts int Maximum number of parts in a multipart upload (default 10000) - --s3-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s) - --s3-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool --s3-might-gzip Tristate Set this if the backend might gzip objects (default unset) --s3-no-check-bucket If set, don't attempt to check the bucket exists or create it --s3-no-head If set, don't HEAD uploaded objects to check integrity @@ -776,7 +784,7 @@ rclone [flags] --use-json-log Use json log format --use-mmap Use mmap allocator (see docs) --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string (default "rclone/v1.64.0-beta.7196.08e40f21b.fix-flag-groups") + --user-agent string Set the user-agent to a specified string (default "rclone/v1.64.0") -v, --verbose count Print lots more stuff (repeat for more) -V, --version Print the version number --webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon) diff --git a/docs/content/commands/rclone_bisync.md b/docs/content/commands/rclone_bisync.md index abec96e6a..7bde4cce5 100644 --- a/docs/content/commands/rclone_bisync.md +++ b/docs/content/commands/rclone_bisync.md @@ -33,17 +33,20 @@ rclone bisync remote1:path1 remote2:path2 [flags] ## Options ``` - --check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort. - --check-filename string Filename for --check-access (default: RCLONE_TEST) - --check-sync string Controls comparison of final listings: true|false|only (default: true) (default "true") - --filters-file string Read filtering patterns from a file - --force Bypass --max-delete safety check and run the sync. Consider using with --verbose - -h, --help help for bisync - --localtime Use local time in listings (default: UTC) - --no-cleanup Retain working files (useful for troubleshooting and testing). - --remove-empty-dirs Remove empty directories at the final cleanup step. - -1, --resync Performs the resync run. Path1 files may overwrite Path2 versions. Consider using --verbose or --dry-run first. - --workdir string Use custom working dir - useful for testing. (default: $HOME/.cache/rclone/bisync) + --check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort. + --check-filename string Filename for --check-access (default: RCLONE_TEST) + --check-sync string Controls comparison of final listings: true|false|only (default: true) (default "true") + --create-empty-src-dirs Sync creation and deletion of empty directories. (Not compatible with --remove-empty-dirs) + --filters-file string Read filtering patterns from a file + --force Bypass --max-delete safety check and run the sync. 
Consider using with --verbose + -h, --help help for bisync + --ignore-listing-checksum Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks) + --localtime Use local time in listings (default: UTC) + --no-cleanup Retain working files (useful for troubleshooting and testing). + --remove-empty-dirs Remove ALL empty directories at the final cleanup step. + --resilient Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk! + -1, --resync Performs the resync run. Path1 files may overwrite Path2 versions. Consider using --verbose or --dry-run first. + --workdir string Use custom working dir - useful for testing. (default: $HOME/.cache/rclone/bisync) ``` @@ -53,7 +56,7 @@ Flags for anything which can Copy a file. ``` --check-first Do all the checks before starting transfers - -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). --compare-dest stringArray Include additional comma separated server-side paths during comparison --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD") @@ -69,8 +72,9 @@ Flags for anything which can Copy a file. --max-transfer SizeSuffix Maximum size of data to transfer (default off) -M, --metadata If set, preserve metadata when copying objects --modify-window Duration Max time diff to be considered the same (default 1ns) - --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 250Mi) - --multi-thread-streams int Max number of streams to use for multi-thread downloads (default 4) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy diff --git a/docs/content/commands/rclone_copy.md b/docs/content/commands/rclone_copy.md index 59646b0d6..061c2b2f4 100644 --- a/docs/content/commands/rclone_copy.md +++ b/docs/content/commands/rclone_copy.md @@ -88,7 +88,7 @@ Flags for anything which can Copy a file. ``` --check-first Do all the checks before starting transfers - -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). --compare-dest stringArray Include additional comma separated server-side paths during comparison --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD") @@ -104,8 +104,9 @@ Flags for anything which can Copy a file. 
--max-transfer SizeSuffix Maximum size of data to transfer (default off) -M, --metadata If set, preserve metadata when copying objects --modify-window Duration Max time diff to be considered the same (default 1ns) - --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 250Mi) - --multi-thread-streams int Max number of streams to use for multi-thread downloads (default 4) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy diff --git a/docs/content/commands/rclone_copyto.md b/docs/content/commands/rclone_copyto.md index 5051c5bf1..8aa99aed1 100644 --- a/docs/content/commands/rclone_copyto.md +++ b/docs/content/commands/rclone_copyto.md @@ -60,7 +60,7 @@ Flags for anything which can Copy a file. ``` --check-first Do all the checks before starting transfers - -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). --compare-dest stringArray Include additional comma separated server-side paths during comparison --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD") @@ -76,8 +76,9 @@ Flags for anything which can Copy a file. --max-transfer SizeSuffix Maximum size of data to transfer (default off) -M, --metadata If set, preserve metadata when copying objects --modify-window Duration Max time diff to be considered the same (default 1ns) - --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 250Mi) - --multi-thread-streams int Max number of streams to use for multi-thread downloads (default 4) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy diff --git a/docs/content/commands/rclone_mount.md b/docs/content/commands/rclone_mount.md index 460d776b4..ec85e1f9c 100644 --- a/docs/content/commands/rclone_mount.md +++ b/docs/content/commands/rclone_mount.md @@ -543,12 +543,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. 
- --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
- --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
- --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
- --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
- --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+ --cache-dir string Directory rclone will use for caching.
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
 If run with `-vv` rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@@ -565,14 +566,15 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
-If using `--vfs-cache-max-size` note that the cache may exceed this size
-for two reasons. Firstly because it is only checked every
-`--vfs-cache-poll-interval`. Secondly because open files cannot be
-evicted from the cache. When `--vfs-cache-max-size`
-is exceeded, rclone will attempt to evict the least accessed files
-from the cache first. rclone will start with files that haven't
-been accessed for the longest. This cache flushing strategy is
-efficient and more relevant files are likely to remain cached.
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
+that the cache may exceed these quotas for two reasons. Firstly
+because it is only checked every `--vfs-cache-poll-interval`. Secondly
+because open files cannot be evicted from the cache. When
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
+rclone will attempt to evict the least accessed files from the cache
+first. rclone will start with files that haven't been accessed for the
+longest. This cache flushing strategy is efficient and more relevant
+files are likely to remain cached.
 The `--vfs-cache-max-age` will evict files from the cache after the
set time since last access has passed. 
The default value of @@ -838,6 +840,7 @@ rclone mount remote:path /path/to/mountpoint [flags] --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match diff --git a/docs/content/commands/rclone_move.md b/docs/content/commands/rclone_move.md index ae4df0168..4ce4fd55c 100644 --- a/docs/content/commands/rclone_move.md +++ b/docs/content/commands/rclone_move.md @@ -64,7 +64,7 @@ Flags for anything which can Copy a file. ``` --check-first Do all the checks before starting transfers - -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). --compare-dest stringArray Include additional comma separated server-side paths during comparison --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD") @@ -80,8 +80,9 @@ Flags for anything which can Copy a file. --max-transfer SizeSuffix Maximum size of data to transfer (default off) -M, --metadata If set, preserve metadata when copying objects --modify-window Duration Max time diff to be considered the same (default 1ns) - --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 250Mi) - --multi-thread-streams int Max number of streams to use for multi-thread downloads (default 4) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy diff --git a/docs/content/commands/rclone_moveto.md b/docs/content/commands/rclone_moveto.md index ba8fbddb7..074332ea7 100644 --- a/docs/content/commands/rclone_moveto.md +++ b/docs/content/commands/rclone_moveto.md @@ -63,7 +63,7 @@ Flags for anything which can Copy a file. ``` --check-first Do all the checks before starting transfers - -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). --compare-dest stringArray Include additional comma separated server-side paths during comparison --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD") @@ -79,8 +79,9 @@ Flags for anything which can Copy a file. 
--max-transfer SizeSuffix Maximum size of data to transfer (default off) -M, --metadata If set, preserve metadata when copying objects --modify-window Duration Max time diff to be considered the same (default 1ns) - --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 250Mi) - --multi-thread-streams int Max number of streams to use for multi-thread downloads (default 4) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy diff --git a/docs/content/commands/rclone_ncdu.md b/docs/content/commands/rclone_ncdu.md index 997f13009..4c0a41b3e 100644 --- a/docs/content/commands/rclone_ncdu.md +++ b/docs/content/commands/rclone_ncdu.md @@ -44,6 +44,7 @@ press '?' to toggle the help on and off. The supported keys are: y copy current path to clipboard Y display current path ^L refresh screen (fix screen corruption) + r recalculate file sizes ? to toggle help on and off q/ESC/^c to quit diff --git a/docs/content/commands/rclone_rmdirs.md b/docs/content/commands/rclone_rmdirs.md index da12ec07d..1a89a0726 100644 --- a/docs/content/commands/rclone_rmdirs.md +++ b/docs/content/commands/rclone_rmdirs.md @@ -27,7 +27,10 @@ empty directories in. For example the [delete](/commands/rclone_delete/) command will delete files but leave the directory structure (unless used with option `--rmdirs`). -To delete a path and any objects in it, use [purge](/commands/rclone_purge/) +This will delete `--checkers` directories concurrently so +if you have thousands of empty directories consider increasing this number. + +To delete a path and any objects in it, use the [purge](/commands/rclone_purge/) command. diff --git a/docs/content/commands/rclone_selfupdate.md b/docs/content/commands/rclone_selfupdate.md index 6d9cc700c..2942215c5 100644 --- a/docs/content/commands/rclone_selfupdate.md +++ b/docs/content/commands/rclone_selfupdate.md @@ -13,9 +13,10 @@ Update the rclone binary. ## Synopsis -This command downloads the latest release of rclone and replaces -the currently running binary. The download is verified with a hashsum -and cryptographically signed signature. +This command downloads the latest release of rclone and replaces the +currently running binary. The download is verified with a hashsum and +cryptographically signed signature; see [the release signing +docs](/release_signing/) for details. If used without flags (or with implied `--stable` flag), this command will install the latest stable release. However, some issues may be fixed @@ -48,7 +49,7 @@ your OS) to update these too. This command with the default `--package zip` will update only the rclone executable so the local manual may become inaccurate after it. -The `rclone mount` command (https://rclone.org/commands/rclone_mount/) may +The [rclone mount](/commands/rclone_mount/) command may or may not support extended FUSE options depending on the build and OS. `selfupdate` will refuse to update if the capability would be discarded. 
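A hypothetical illustration of the concurrent `rmdirs` behaviour described above (the remote name is a placeholder; `--checkers` defaults to 8, per the global flags in this patch):

```
# Remove empty directories under remote:path, deleting up to 32
# directories at a time. Run with --dry-run first to preview.
rclone rmdirs remote:path --checkers 32
```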
diff --git a/docs/content/commands/rclone_serve_dlna.md b/docs/content/commands/rclone_serve_dlna.md
index e00341f83..b6f29a55e 100644
--- a/docs/content/commands/rclone_serve_dlna.md
+++ b/docs/content/commands/rclone_serve_dlna.md
@@ -111,12 +111,13 @@ write simultaneously to a file. See below for more details.
 Note that the VFS cache is separate from the cache backend and you may
find that you need one or the other or both.
- --cache-dir string Directory rclone will use for caching.
- --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
- --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
- --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
- --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
- --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+ --cache-dir string Directory rclone will use for caching.
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
 If run with `-vv` rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@@ -133,14 +134,15 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
-If using `--vfs-cache-max-size` note that the cache may exceed this size
-for two reasons. Firstly because it is only checked every
-`--vfs-cache-poll-interval`. Secondly because open files cannot be
-evicted from the cache. When `--vfs-cache-max-size`
-is exceeded, rclone will attempt to evict the least accessed files
-from the cache first. rclone will start with files that haven't
-been accessed for the longest. This cache flushing strategy is
-efficient and more relevant files are likely to remain cached.
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
+that the cache may exceed these quotas for two reasons. Firstly
+because it is only checked every `--vfs-cache-poll-interval`. Secondly
+because open files cannot be evicted from the cache. When
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
+rclone will attempt to evict the least accessed files from the cache
+first. rclone will start with files that haven't been accessed for the
+longest. This cache flushing strategy is efficient and more relevant
+files are likely to remain cached.
 The `--vfs-cache-max-age` will evict files from the cache after the
set time since last access has passed. 
The default value of @@ -393,6 +395,7 @@ rclone serve dlna remote:path [flags] --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match diff --git a/docs/content/commands/rclone_serve_docker.md b/docs/content/commands/rclone_serve_docker.md index ce31a1dd6..001fc3bbb 100644 --- a/docs/content/commands/rclone_serve_docker.md +++ b/docs/content/commands/rclone_serve_docker.md @@ -127,12 +127,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. - --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) - --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) - --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) - --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -149,14 +150,15 @@ seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags. -If using `--vfs-cache-max-size` note that the cache may exceed this size -for two reasons. Firstly because it is only checked every -`--vfs-cache-poll-interval`. Secondly because open files cannot be -evicted from the cache. When `--vfs-cache-max-size` -is exceeded, rclone will attempt to evict the least accessed files -from the cache first. rclone will start with files that haven't -been accessed for the longest. This cache flushing strategy is -efficient and more relevant files are likely to remain cached. +If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note +that the cache may exceed these quotas for two reasons. Firstly +because it is only checked every `--vfs-cache-poll-interval`. Secondly +because open files cannot be evicted from the cache. When +`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded, +rclone will attempt to evict the least accessed files from the cache +first. 
rclone will start with files that haven't been accessed for the +longest. This cache flushing strategy is efficient and more relevant +files are likely to remain cached. The `--vfs-cache-max-age` will evict files from the cache after the set time since last access has passed. The default value of @@ -427,6 +429,7 @@ rclone serve docker [flags] --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match diff --git a/docs/content/commands/rclone_serve_ftp.md b/docs/content/commands/rclone_serve_ftp.md index 3f837d82f..9d18cb384 100644 --- a/docs/content/commands/rclone_serve_ftp.md +++ b/docs/content/commands/rclone_serve_ftp.md @@ -108,12 +108,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. - --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) - --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) - --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) - --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -130,14 +131,15 @@ seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags. -If using `--vfs-cache-max-size` note that the cache may exceed this size -for two reasons. Firstly because it is only checked every -`--vfs-cache-poll-interval`. Secondly because open files cannot be -evicted from the cache. When `--vfs-cache-max-size` -is exceeded, rclone will attempt to evict the least accessed files -from the cache first. rclone will start with files that haven't -been accessed for the longest. This cache flushing strategy is -efficient and more relevant files are likely to remain cached. +If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note +that the cache may exceed these quotas for two reasons. 
Firstly +because it is only checked every `--vfs-cache-poll-interval`. Secondly +because open files cannot be evicted from the cache. When +`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded, +rclone will attempt to evict the least accessed files from the cache +first. rclone will start with files that haven't been accessed for the +longest. This cache flushing strategy is efficient and more relevant +files are likely to remain cached. The `--vfs-cache-max-age` will evict files from the cache after the set time since last access has passed. The default value of @@ -474,6 +476,7 @@ rclone serve ftp remote:path [flags] --user string User name for authentication (default "anonymous") --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match diff --git a/docs/content/commands/rclone_serve_http.md b/docs/content/commands/rclone_serve_http.md index 4068b3355..4df56e77c 100644 --- a/docs/content/commands/rclone_serve_http.md +++ b/docs/content/commands/rclone_serve_http.md @@ -198,12 +198,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. - --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) - --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) - --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) - --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -220,14 +221,15 @@ seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags. -If using `--vfs-cache-max-size` note that the cache may exceed this size -for two reasons. Firstly because it is only checked every -`--vfs-cache-poll-interval`. Secondly because open files cannot be -evicted from the cache. When `--vfs-cache-max-size` -is exceeded, rclone will attempt to evict the least accessed files -from the cache first. 
rclone will start with files that haven't -been accessed for the longest. This cache flushing strategy is -efficient and more relevant files are likely to remain cached. +If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note +that the cache may exceed these quotas for two reasons. Firstly +because it is only checked every `--vfs-cache-poll-interval`. Secondly +because open files cannot be evicted from the cache. When +`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded, +rclone will attempt to evict the least accessed files from the cache +first. rclone will start with files that haven't been accessed for the +longest. This cache flushing strategy is efficient and more relevant +files are likely to remain cached. The `--vfs-cache-max-age` will evict files from the cache after the set time since last access has passed. The default value of @@ -573,6 +575,7 @@ rclone serve http remote:path [flags] --user string User name for authentication --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match diff --git a/docs/content/commands/rclone_serve_sftp.md b/docs/content/commands/rclone_serve_sftp.md index ae510e8cf..f1e60c360 100644 --- a/docs/content/commands/rclone_serve_sftp.md +++ b/docs/content/commands/rclone_serve_sftp.md @@ -140,12 +140,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. - --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) - --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) - --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) - --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -162,14 +163,15 @@ seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags. -If using `--vfs-cache-max-size` note that the cache may exceed this size -for two reasons. 
Firstly because it is only checked every
-`--vfs-cache-poll-interval`. Secondly because open files cannot be
-evicted from the cache. When `--vfs-cache-max-size`
-is exceeded, rclone will attempt to evict the least accessed files
-from the cache first. rclone will start with files that haven't
-been accessed for the longest. This cache flushing strategy is
-efficient and more relevant files are likely to remain cached.
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
+that the cache may exceed these quotas for two reasons. Firstly
+because it is only checked every `--vfs-cache-poll-interval`. Secondly
+because open files cannot be evicted from the cache. When
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
+rclone will attempt to evict the least accessed files from the cache
+first. rclone will start with files that haven't been accessed for the
+longest. This cache flushing strategy is efficient and more relevant
+files are likely to remain cached.
 
 The `--vfs-cache-max-age` will evict files from the cache
 after the set time since last access has passed. The default value of
@@ -506,6 +508,7 @@ rclone serve sftp remote:path [flags]
       --user string                            User name for authentication
       --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
       --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
+      --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
       --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
       --vfs-cache-poll-interval Duration       Interval to poll the cache for stale objects (default 1m0s)
       --vfs-case-insensitive                   If a file name not found, find a case insensitive match
diff --git a/docs/content/commands/rclone_serve_webdav.md b/docs/content/commands/rclone_serve_webdav.md
index dae7451f3..eb13e6bf4 100644
--- a/docs/content/commands/rclone_serve_webdav.md
+++ b/docs/content/commands/rclone_serve_webdav.md
@@ -227,12 +227,13 @@ write simultaneously to a file. See below for more details.
 Note that the VFS cache is separate from the cache backend and you
 may find that you need one or the other or both.
 
-    --cache-dir string                     Directory rclone will use for caching.
-    --vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
-    --vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
-    --vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
-    --vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
-    --vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)
+    --cache-dir string                       Directory rclone will use for caching.
+    --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
+    --vfs-cache-max-age duration             Max time since last access of objects in the cache (default 1h0m0s)
+    --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
+    --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
+    --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects (default 1m0s)
+    --vfs-write-back duration                Time to writeback files after last use when using cache (default 5s)
 
 If run with `-vv` rclone will print the location of the file cache.  The
 files are stored in the user cache file area which is OS dependent but
@@ -249,14 +250,15 @@ seconds. 
If rclone is quit or dies with files that haven't been
 uploaded, these will be uploaded next time rclone is run with the same
 flags.
 
-If using `--vfs-cache-max-size` note that the cache may exceed this size
-for two reasons. Firstly because it is only checked every
-`--vfs-cache-poll-interval`. Secondly because open files cannot be
-evicted from the cache. When `--vfs-cache-max-size`
-is exceeded, rclone will attempt to evict the least accessed files
-from the cache first. rclone will start with files that haven't
-been accessed for the longest. This cache flushing strategy is
-efficient and more relevant files are likely to remain cached.
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
+that the cache may exceed these quotas for two reasons. Firstly
+because it is only checked every `--vfs-cache-poll-interval`. Secondly
+because open files cannot be evicted from the cache. When
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
+rclone will attempt to evict the least accessed files from the cache
+first. rclone will start with files that haven't been accessed for the
+longest. This cache flushing strategy is efficient and more relevant
+files are likely to remain cached.
 
 The `--vfs-cache-max-age` will evict files from the cache
 after the set time since last access has passed. The default value of
@@ -604,6 +606,7 @@ rclone serve webdav remote:path [flags]
       --user string                            User name for authentication
       --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
       --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
+      --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
       --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
       --vfs-cache-poll-interval Duration       Interval to poll the cache for stale objects (default 1m0s)
       --vfs-case-insensitive                   If a file name not found, find a case insensitive match
diff --git a/docs/content/commands/rclone_sync.md b/docs/content/commands/rclone_sync.md
index 49e8b9d7a..25c12a08d 100644
--- a/docs/content/commands/rclone_sync.md
+++ b/docs/content/commands/rclone_sync.md
@@ -67,7 +67,7 @@ Flags for anything which can Copy a file.
 
 ```
       --check-first                                Do all the checks before starting transfers
-  -c, --checksum                                   Skip based on checksum (if available) & size, not mod-time & size
+  -c, --checksum                                   Check for changes with size & checksum (if available, or fallback to size only).
       --compare-dest stringArray                   Include additional comma separated server-side paths during comparison
       --copy-dest stringArray                      Implies --compare-dest but also copies files from paths into destination
       --cutoff-mode string                         Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
@@ -83,8 +83,9 @@ Flags for anything which can Copy a file.
--max-transfer SizeSuffix Maximum size of data to transfer (default off) -M, --metadata If set, preserve metadata when copying objects --modify-window Duration Max time diff to be considered the same (default 1ns) - --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 250Mi) - --multi-thread-streams int Max number of streams to use for multi-thread downloads (default 4) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy diff --git a/docs/content/commands/rclone_test_info.md b/docs/content/commands/rclone_test_info.md index 32b656058..7cef53bd9 100644 --- a/docs/content/commands/rclone_test_info.md +++ b/docs/content/commands/rclone_test_info.md @@ -28,6 +28,7 @@ rclone test info [remote:path]+ [flags] ``` --all Run all tests + --check-base32768 Check can store all possible base32768 characters --check-control Check control characters --check-length Check max filename length --check-normalization Check UTF-8 Normalization diff --git a/docs/content/crypt.md b/docs/content/crypt.md index 9cf15b02a..8d16538fc 100644 --- a/docs/content/crypt.md +++ b/docs/content/crypt.md @@ -600,7 +600,7 @@ Properties: - Encode using base64. Suitable for case sensitive remote. - "base32768" - Encode using base32768. Suitable if your remote counts UTF-16 or - - Unicode codepoint instead of UTF-8 byte length. (Eg. Onedrive, Dropbox, Box) + - Unicode codepoint instead of UTF-8 byte length. (Eg. Onedrive, Dropbox) #### --crypt-suffix diff --git a/docs/content/docs.md b/docs/content/docs.md index 093f67b26..f6a14bca2 100644 --- a/docs/content/docs.md +++ b/docs/content/docs.md @@ -1548,7 +1548,7 @@ but not `OpenChunkWriter`) don't have a natural chunk size. In this case the value of this option is used (default 64Mi). -### --multi-thread-cutoff=SIZE ### +### --multi-thread-cutoff=SIZE {#multi-thread-cutoff} When transferring files above SIZE to capable backends, rclone will use multiple threads to transfer the file (default 256M). diff --git a/docs/content/drive.md b/docs/content/drive.md index 02d041015..62ead868f 100644 --- a/docs/content/drive.md +++ b/docs/content/drive.md @@ -1194,7 +1194,7 @@ This resource key requirement only applies to a subset of old files. Note also that opening the folder once in the web interface (with the user you've authenticated rclone with) seems to be enough so that the -resource key is no needed. +resource key is not needed. Properties: @@ -1204,6 +1204,34 @@ Properties: - Type: string - Required: false +#### --drive-fast-list-bug-fix + +Work around a bug in Google Drive listing. + +Normally rclone will work around a bug in Google Drive when using +--fast-list (ListR) where the search "(A in parents) or (B in +parents)" returns nothing sometimes. See #3114, #4289 and +https://issuetracker.google.com/issues/149522397 + +Rclone detects this by finding no items in more than one directory +when listing and retries them as lists of individual directories. 
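+For example, to disable the work-around for a single listing (a
+sketch, with "drive:path" standing in for your own remote and path):
+
+    rclone lsf --fast-list --drive-fast-list-bug-fix=false drive:path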
+ +This means that if you have a lot of empty directories rclone will end +up listing them all individually and this can take many more API +calls. + +This flag allows the work-around to be disabled. This is **not** +recommended in normal use - only if you have a particular case you are +having trouble with like many empty directories. + + +Properties: + +- Config: fast_list_bug_fix +- Env Var: RCLONE_DRIVE_FAST_LIST_BUG_FIX +- Type: bool +- Default: true + #### --drive-encoding The encoding for the backend. diff --git a/docs/content/flags.md b/docs/content/flags.md index f9ff500f5..833980633 100644 --- a/docs/content/flags.md +++ b/docs/content/flags.md @@ -15,7 +15,7 @@ Flags for anything which can Copy a file. ``` --check-first Do all the checks before starting transfers - -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). --compare-dest stringArray Include additional comma separated server-side paths during comparison --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD") @@ -31,8 +31,9 @@ Flags for anything which can Copy a file. --max-transfer SizeSuffix Maximum size of data to transfer (default off) -M, --metadata If set, preserve metadata when copying objects --modify-window Duration Max time diff to be considered the same (default 1ns) - --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 250Mi) - --multi-thread-streams int Max number of streams to use for multi-thread downloads (default 4) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy @@ -110,7 +111,7 @@ General networking and HTTP stuff. --tpslimit float Limit HTTP transactions per second to this --tpslimit-burst int Max burst of transactions for --tpslimit (default 1) --use-cookies Enable session cookiejar - --user-agent string Set the user-agent to a specified string (default "rclone/v1.64.0-beta.7196.08e40f21b.fix-flag-groups") + --user-agent string Set the user-agent to a specified string (default "rclone/v1.64.0") ``` @@ -318,8 +319,6 @@ Backend only flags. These can be set in the config file also. --azureblob-env-auth Read credentials from runtime (environment variables, CLI or MSI) --azureblob-key string Storage Account Shared Key --azureblob-list-chunk int Size of blob list (default 5000) - --azureblob-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s) - --azureblob-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool --azureblob-msi-client-id string Object ID of the user-assigned MSI to use, if any --azureblob-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any --azureblob-msi-object-id string Object ID of the user-assigned MSI to use, if any @@ -345,9 +344,8 @@ Backend only flags. 
These can be set in the config file also. --b2-endpoint string Endpoint for the service --b2-hard-delete Permanently delete files on remote removal, otherwise hide files --b2-key string Application Key - --b2-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s) - --b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging + --b2-upload-concurrency int Concurrency for multipart uploads (default 16) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --b2-version-at Time Show file versions as they were at the specified time (default off) --b2-versions Include old versions in directory listings @@ -359,6 +357,7 @@ Backend only flags. These can be set in the config file also. --box-client-secret string OAuth Client Secret --box-commit-retries int Max number of times to try committing a multipart file (default 100) --box-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot) + --box-impersonate string Impersonate this user ID when using a service account --box-list-chunk int Size of listing chunk 1-1000 (default 1000) --box-owned-by string Only show items owned by the login (email address) passed in --box-root-folder-id string Fill in for rclone to use a non root folder as its starting point @@ -418,6 +417,7 @@ Backend only flags. These can be set in the config file also. --drive-encoding MultiEncoder The encoding for the backend (default InvalidUtf8) --drive-env-auth Get IAM credentials from runtime (environment variables or instance meta data if no env vars) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs (default "docx,xlsx,pptx,svg") + --drive-fast-list-bug-fix Work around a bug in Google Drive listing (default true) --drive-formats string Deprecated: See export_formats --drive-impersonate string Impersonate this user when using a service account --drive-import-formats string Comma separated list of preferred formats for uploading Google docs @@ -636,6 +636,7 @@ Backend only flags. These can be set in the config file also. --onedrive-server-side-across-configs Deprecated: use --server-side-across-configs instead --onedrive-token string OAuth Access Token as a JSON blob --onedrive-token-url string Token server url + --oos-attempt-resume-upload If true attempt to resume previously started multipart upload for the object --oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi) --oos-compartment string Object storage compartment OCID --oos-config-file string Path to OCI config file (default "~/.oci/config") @@ -645,7 +646,8 @@ Backend only flags. These can be set in the config file also. 
--oos-disable-checksum Don't store MD5 checksum with object metadata --oos-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot) --oos-endpoint string Endpoint for Object storage API - --oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery + --oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery + --oos-max-upload-parts int Maximum number of parts in a multipart upload (default 10000) --oos-namespace string Object storage namespace --oos-no-check-bucket If set, don't attempt to check the bucket exists or create it --oos-provider string Choose your Auth Provider (default "env_auth") @@ -694,10 +696,11 @@ Backend only flags. These can be set in the config file also. --protondrive-app-version string The app version string (default "macos-drive@1.0.0-alpha.1+rclone") --protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true) --protondrive-encoding MultiEncoder The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot) + --protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured) --protondrive-original-file-size Return the file size before encryption (default true) - --protondrive-password string The password of your proton drive account (obscured) + --protondrive-password string The password of your proton account (obscured) --protondrive-replace-existing-draft Create a new revision when filename conflict is detected - --protondrive-username string The username of your proton drive account + --protondrive-username string The username of your proton account --putio-auth-url string Auth server URL --putio-client-id string OAuth Client Id --putio-client-secret string OAuth Client Secret @@ -714,6 +717,13 @@ Backend only flags. These can be set in the config file also. --qingstor-upload-concurrency int Concurrency for multipart uploads (default 1) --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --qingstor-zone string Zone to connect to + --quatrix-api-key string API key for accessing Quatrix account + --quatrix-effective-upload-time string Wanted upload time for one chunk (default "4s") + --quatrix-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) + --quatrix-hard-delete Delete files permanently rather than putting them into the trash + --quatrix-host string Host name of Quatrix account + --quatrix-maximal-summary-chunk-size SizeSuffix The maximal summary for all chunks. It should not be less than 'transfers'*'minimal_chunk_size' (default 95.367Mi) + --quatrix-minimal-chunk-size SizeSuffix The minimal size for one chunk (default 9.537Mi) --s3-access-key-id string AWS Access Key ID --s3-acl string Canned ACL used when creating buckets and storing or copying objects --s3-bucket-acl string Canned ACL used when creating buckets @@ -734,8 +744,6 @@ Backend only flags. These can be set in the config file also. 
--s3-list-version int Version of ListObjects to use: 1,2 or 0 for auto --s3-location-constraint string Location constraint - must be set to match the Region --s3-max-upload-parts int Maximum number of parts in a multipart upload (default 10000) - --s3-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s) - --s3-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool --s3-might-gzip Tristate Set this if the backend might gzip objects (default unset) --s3-no-check-bucket If set, don't attempt to check the bucket exists or create it --s3-no-head If set, don't HEAD uploaded objects to check integrity diff --git a/docs/content/ftp.md b/docs/content/ftp.md index 394ad8a42..dea99f0c3 100644 --- a/docs/content/ftp.md +++ b/docs/content/ftp.md @@ -415,6 +415,24 @@ Properties: - Type: bool - Default: false +#### --ftp-socks-proxy + +Socks 5 proxy host. + + Supports the format user:pass@host:port, user@host:port, host:port. + + Example: + + myUser:myPass@localhost:9005 + + +Properties: + +- Config: socks_proxy +- Env Var: RCLONE_FTP_SOCKS_PROXY +- Type: string +- Required: false + #### --ftp-encoding The encoding for the backend. diff --git a/docs/content/jottacloud.md b/docs/content/jottacloud.md index 44db179ac..918b499f6 100644 --- a/docs/content/jottacloud.md +++ b/docs/content/jottacloud.md @@ -305,10 +305,77 @@ command which will display your usage limit (unless it is unlimited) and the current usage. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/jottacloud/jottacloud.go then run make backenddocs" >}} +### Standard options + +Here are the Standard options specific to jottacloud (Jottacloud). + +#### --jottacloud-client-id + +OAuth Client Id. + +Leave blank normally. + +Properties: + +- Config: client_id +- Env Var: RCLONE_JOTTACLOUD_CLIENT_ID +- Type: string +- Required: false + +#### --jottacloud-client-secret + +OAuth Client Secret. + +Leave blank normally. + +Properties: + +- Config: client_secret +- Env Var: RCLONE_JOTTACLOUD_CLIENT_SECRET +- Type: string +- Required: false + ### Advanced options Here are the Advanced options specific to jottacloud (Jottacloud). +#### --jottacloud-token + +OAuth Access Token as a JSON blob. + +Properties: + +- Config: token +- Env Var: RCLONE_JOTTACLOUD_TOKEN +- Type: string +- Required: false + +#### --jottacloud-auth-url + +Auth server URL. + +Leave blank to use the provider defaults. + +Properties: + +- Config: auth_url +- Env Var: RCLONE_JOTTACLOUD_AUTH_URL +- Type: string +- Required: false + +#### --jottacloud-token-url + +Token server url. + +Leave blank to use the provider defaults. + +Properties: + +- Config: token_url +- Env Var: RCLONE_JOTTACLOUD_TOKEN_URL +- Type: string +- Required: false + #### --jottacloud-md5-memory-limit Files bigger than this will be cached on disk to calculate the MD5 if required. diff --git a/docs/content/local.md b/docs/content/local.md index afbfc3afe..42327a940 100644 --- a/docs/content/local.md +++ b/docs/content/local.md @@ -387,7 +387,7 @@ Assume the Stat size of links is zero (and read them instead) (deprecated). Rclone used to use the Stat size of links as the link size, but this fails in quite a few places: - Windows -- On some virtual filesystems (such as LucidLink) +- On some virtual filesystems (such ash LucidLink) - Android So rclone now always reads the link. 
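 
 A quick way to see this behaviour (a sketch: it assumes a local
 directory containing a symlink and uses the local backend's
 `-l`/`--links` translation flag rather than the deprecated option
 above):
 
     ln -s /some/target mylink
     rclone lsl -l .
 
 The link is then listed as `mylink.rclonelink`, with its size taken
 from the link target text rather than from Stat.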
@@ -562,7 +562,7 @@ Properties: - Config: encoding - Env Var: RCLONE_LOCAL_ENCODING - Type: MultiEncoder -- Default: Slash,InvalidUtf8,Dot +- Default: Slash,Dot ### Metadata diff --git a/docs/content/mailru.md b/docs/content/mailru.md index 6a6619e03..57d3dbbde 100644 --- a/docs/content/mailru.md +++ b/docs/content/mailru.md @@ -174,6 +174,32 @@ as they can't be used in JSON strings. Here are the Standard options specific to mailru (Mail.ru Cloud). +#### --mailru-client-id + +OAuth Client Id. + +Leave blank normally. + +Properties: + +- Config: client_id +- Env Var: RCLONE_MAILRU_CLIENT_ID +- Type: string +- Required: false + +#### --mailru-client-secret + +OAuth Client Secret. + +Leave blank normally. + +Properties: + +- Config: client_secret +- Env Var: RCLONE_MAILRU_CLIENT_SECRET +- Type: string +- Required: false + #### --mailru-user User name (usually email). @@ -232,6 +258,43 @@ Properties: Here are the Advanced options specific to mailru (Mail.ru Cloud). +#### --mailru-token + +OAuth Access Token as a JSON blob. + +Properties: + +- Config: token +- Env Var: RCLONE_MAILRU_TOKEN +- Type: string +- Required: false + +#### --mailru-auth-url + +Auth server URL. + +Leave blank to use the provider defaults. + +Properties: + +- Config: auth_url +- Env Var: RCLONE_MAILRU_AUTH_URL +- Type: string +- Required: false + +#### --mailru-token-url + +Token server url. + +Leave blank to use the provider defaults. + +Properties: + +- Config: token_url +- Env Var: RCLONE_MAILRU_TOKEN_URL +- Type: string +- Required: false + #### --mailru-speedup-file-patterns Comma separated list of file name patterns eligible for speedup (put by hash). diff --git a/docs/content/premiumizeme.md b/docs/content/premiumizeme.md index 39d40a5cd..9324b3e8d 100644 --- a/docs/content/premiumizeme.md +++ b/docs/content/premiumizeme.md @@ -108,6 +108,32 @@ as they can't be used in JSON strings. Here are the Standard options specific to premiumizeme (premiumize.me). +#### --premiumizeme-client-id + +OAuth Client Id. + +Leave blank normally. + +Properties: + +- Config: client_id +- Env Var: RCLONE_PREMIUMIZEME_CLIENT_ID +- Type: string +- Required: false + +#### --premiumizeme-client-secret + +OAuth Client Secret. + +Leave blank normally. + +Properties: + +- Config: client_secret +- Env Var: RCLONE_PREMIUMIZEME_CLIENT_SECRET +- Type: string +- Required: false + #### --premiumizeme-api-key API Key. @@ -126,6 +152,43 @@ Properties: Here are the Advanced options specific to premiumizeme (premiumize.me). +#### --premiumizeme-token + +OAuth Access Token as a JSON blob. + +Properties: + +- Config: token +- Env Var: RCLONE_PREMIUMIZEME_TOKEN +- Type: string +- Required: false + +#### --premiumizeme-auth-url + +Auth server URL. + +Leave blank to use the provider defaults. + +Properties: + +- Config: auth_url +- Env Var: RCLONE_PREMIUMIZEME_AUTH_URL +- Type: string +- Required: false + +#### --premiumizeme-token-url + +Token server url. + +Leave blank to use the provider defaults. + +Properties: + +- Config: token_url +- Env Var: RCLONE_PREMIUMIZEME_TOKEN_URL +- Type: string +- Required: false + #### --premiumizeme-encoding The encoding for the backend. diff --git a/docs/content/protondrive.md b/docs/content/protondrive.md index 533a0d442..b3bb9f7aa 100644 --- a/docs/content/protondrive.md +++ b/docs/content/protondrive.md @@ -130,7 +130,7 @@ Here are the Standard options specific to protondrive (Proton Drive). 
#### --protondrive-username -The username of your proton drive account +The username of your proton account Properties: @@ -141,7 +141,7 @@ Properties: #### --protondrive-password -The password of your proton drive account. +The password of your proton account. **NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/). @@ -172,6 +172,68 @@ Properties: Here are the Advanced options specific to protondrive (Proton Drive). +#### --protondrive-mailbox-password + +The mailbox password of your two-password proton account. + +For more information regarding the mailbox password, please check the +following official knowledge base article: +https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password + + +**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/). + +Properties: + +- Config: mailbox_password +- Env Var: RCLONE_PROTONDRIVE_MAILBOX_PASSWORD +- Type: string +- Required: false + +#### --protondrive-client-uid + +Client uid key (internal use only) + +Properties: + +- Config: client_uid +- Env Var: RCLONE_PROTONDRIVE_CLIENT_UID +- Type: string +- Required: false + +#### --protondrive-client-access-token + +Client access token key (internal use only) + +Properties: + +- Config: client_access_token +- Env Var: RCLONE_PROTONDRIVE_CLIENT_ACCESS_TOKEN +- Type: string +- Required: false + +#### --protondrive-client-refresh-token + +Client refresh token key (internal use only) + +Properties: + +- Config: client_refresh_token +- Env Var: RCLONE_PROTONDRIVE_CLIENT_REFRESH_TOKEN +- Type: string +- Required: false + +#### --protondrive-client-salted-key-pass + +Client salted key pass key (internal use only) + +Properties: + +- Config: client_salted_key_pass +- Env Var: RCLONE_PROTONDRIVE_CLIENT_SALTED_KEY_PASS +- Type: string +- Required: false + #### --protondrive-encoding The encoding for the backend. diff --git a/docs/content/putio.md b/docs/content/putio.md index 1c45a8d73..e0db46850 100644 --- a/docs/content/putio.md +++ b/docs/content/putio.md @@ -115,10 +115,77 @@ Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8), as they can't be used in JSON strings. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/putio/putio.go then run make backenddocs" >}} +### Standard options + +Here are the Standard options specific to putio (Put.io). + +#### --putio-client-id + +OAuth Client Id. + +Leave blank normally. + +Properties: + +- Config: client_id +- Env Var: RCLONE_PUTIO_CLIENT_ID +- Type: string +- Required: false + +#### --putio-client-secret + +OAuth Client Secret. + +Leave blank normally. + +Properties: + +- Config: client_secret +- Env Var: RCLONE_PUTIO_CLIENT_SECRET +- Type: string +- Required: false + ### Advanced options Here are the Advanced options specific to putio (Put.io). +#### --putio-token + +OAuth Access Token as a JSON blob. + +Properties: + +- Config: token +- Env Var: RCLONE_PUTIO_TOKEN +- Type: string +- Required: false + +#### --putio-auth-url + +Auth server URL. + +Leave blank to use the provider defaults. + +Properties: + +- Config: auth_url +- Env Var: RCLONE_PUTIO_AUTH_URL +- Type: string +- Required: false + +#### --putio-token-url + +Token server url. + +Leave blank to use the provider defaults. + +Properties: + +- Config: token_url +- Env Var: RCLONE_PUTIO_TOKEN_URL +- Type: string +- Required: false + #### --putio-encoding The encoding for the backend. 
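
To use your own OAuth app with Put.io, the two new client options can
be supplied when the remote is created (a sketch: the id and secret
shown are placeholders, not working credentials):

    rclone config create myputio putio client_id=YOUR_ID client_secret=YOUR_SECRET
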
diff --git a/docs/content/rc.md b/docs/content/rc.md index 189951de2..86a2c6074 100644 --- a/docs/content/rc.md +++ b/docs/content/rc.md @@ -732,6 +732,28 @@ OR **Authentication is required for this call.** +### core/du: Returns disk usage of a locally attached disk. {#core-du} + +This returns the disk usage for the local directory passed in as dir. + +If the directory is not passed in, it defaults to the directory +pointed to by --cache-dir. + +- dir - string (optional) + +Returns: + +``` +{ + "dir": "/", + "info": { + "Available": 361769115648, + "Free": 361785892864, + "Total": 982141468672 + } +} +``` + ### core/gc: Runs a garbage collection. {#core-gc} This tells the go runtime to do a garbage collection run. It isn't @@ -811,6 +833,10 @@ Returns the following values: "lastError": last error string, "renames" : number of files renamed, "retryError": boolean showing whether there has been at least one non-NoRetryError, + "serverSideCopies": number of server side copies done, + "serverSideCopyBytes": number bytes server side copied, + "serverSideMoves": number of server side moves done, + "serverSideMoveBytes": number bytes server side moved, "speed": average speed in bytes per second since start of the group, "totalBytes": total number of bytes in the group, "totalChecks": total number of checks in the group, @@ -1012,7 +1038,8 @@ Parameters: None. Results: -- jobids - array of integer job ids. +- executeId - string id of rclone executing (change after restart) +- jobids - array of integer job ids (starting at 1 on each restart) ### job/status: Reads the status of the job ID {#job-status} @@ -1415,6 +1442,27 @@ See the [rmdirs](/commands/rclone_rmdirs/) command for more information on the a **Authentication is required for this call.** +### operations/settier: Changes storage tier or class on all files in the path {#operations-settier} + +This takes the following parameters: + +- fs - a remote name string e.g. "drive:" + +See the [settier](/commands/rclone_settier/) command for more information on the above. + +**Authentication is required for this call.** + +### operations/settierfile: Changes storage tier or class on the single file pointed to {#operations-settierfile} + +This takes the following parameters: + +- fs - a remote name string e.g. "drive:" +- remote - a path within that remote e.g. "dir" + +See the [settierfile](/commands/rclone_settierfile/) command for more information on the above. + +**Authentication is required for this call.** + ### operations/size: Count the number of bytes and files in remote {#operations-size} This takes the following parameters: @@ -1654,13 +1702,13 @@ This takes the following parameters - checkSync - `true` by default, `false` disables comparison of final listings, `only` will skip sync, only compare listings from the last run - createEmptySrcDirs - Sync creation and deletion of empty directories. - (Not compatible with --remove-empty-dirs) + (Not compatible with --remove-empty-dirs) - removeEmptyDirs - remove empty directories at the final cleanup step - filtersFile - read filtering patterns from a file - ignoreListingChecksum - Do not use checksums for listings - resilient - Allow future runs to retry after certain less-serious errors, instead of requiring resync. Use at your own risk! 
-- workdir - Use custom working directory (default: `~/.cache/rclone/bisync`) +- workdir - server directory for history files (default: /home/ncw/.cache/rclone/bisync) - noCleanup - retain working files See [bisync command help](https://rclone.org/commands/rclone_bisync/) diff --git a/docs/content/s3.md b/docs/content/s3.md index 7b9e6369c..3f4d37e35 100644 --- a/docs/content/s3.md +++ b/docs/content/s3.md @@ -664,7 +664,7 @@ A simple solution is to set the `--s3-upload-cutoff 0` and force all the files t {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}} ### Standard options -Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi). +Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Leviia, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi). #### --s3-provider @@ -705,6 +705,8 @@ Properties: - IONOS Cloud - "LyveCloud" - Seagate Lyve Cloud + - "Leviia" + - Leviia Object Storage - "Liara" - Liara Object Storage - "Minio" @@ -1078,6 +1080,30 @@ Properties: #### --s3-region +Region where your data stored. + + +Properties: + +- Config: region +- Env Var: RCLONE_S3_REGION +- Provider: Synology +- Type: string +- Required: false +- Examples: + - "eu-001" + - Europe Region 1 + - "eu-002" + - Europe Region 2 + - "us-001" + - US Region 1 + - "us-002" + - US Region 2 + - "tw-001" + - Asia (Taiwan) + +#### --s3-region + Region to connect to. Leave blank if you are using an S3 clone and you don't have a region. @@ -1392,6 +1418,22 @@ Properties: #### --s3-endpoint +Endpoint for Leviia Object Storage API. + +Properties: + +- Config: endpoint +- Env Var: RCLONE_S3_ENDPOINT +- Provider: Leviia +- Type: string +- Required: false +- Examples: + - "s3.leviia.com" + - The default endpoint + - Leviia + +#### --s3-endpoint + Endpoint for Liara Object Storage API. 
Properties: @@ -1593,15 +1635,15 @@ Properties: - Required: false - Examples: - "eu-001.s3.synologyc2.net" - - Europe Region 1 + - EU Endpoint 1 - "eu-002.s3.synologyc2.net" - - Europe Region 2 + - EU Endpoint 2 - "us-001.s3.synologyc2.net" - - US Region 1 + - US Endpoint 1 - "us-002.s3.synologyc2.net" - - US Region 2 + - US Endpoint 2 - "tw-001.s3.synologyc2.net" - - Asia Region (Taiwan) + - TW Endpoint 1 #### --s3-endpoint @@ -2130,7 +2172,7 @@ Properties: - Config: location_constraint - Env Var: RCLONE_S3_LOCATION_CONSTRAINT -- Provider: !AWS,Alibaba,ArvanCloud,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,Liara,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS,Petabox +- Provider: !AWS,Alibaba,ArvanCloud,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,Leviia,Liara,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS,Petabox - Type: string - Required: false @@ -2153,7 +2195,7 @@ Properties: - Config: acl - Env Var: RCLONE_S3_ACL -- Provider: !Storj,Cloudflare +- Provider: !Storj,Synology,Cloudflare - Type: string - Required: false - Examples: @@ -2408,7 +2450,7 @@ Properties: ### Advanced options -Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi). +Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Leviia, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi). #### --s3-bucket-acl @@ -2906,10 +2948,7 @@ Properties: #### --s3-memory-pool-flush-time -How often internal memory buffer pools will be flushed. - -Uploads which requires additional buffers (f.e multipart) will use memory pool for allocations. -This option controls how often unused buffers will be removed from the pool. +How often internal memory buffer pools will be flushed. (no longer used) Properties: @@ -2920,7 +2959,7 @@ Properties: #### --s3-memory-pool-use-mmap -Whether to use mmap buffers in internal memory pool. +Whether to use mmap buffers in internal memory pool. (no longer used) Properties: @@ -3186,17 +3225,17 @@ to normal storage. Usage Examples: - rclone backend restore s3:bucket/path/to/object [-o priority=PRIORITY] [-o lifetime=DAYS] - rclone backend restore s3:bucket/path/to/directory [-o priority=PRIORITY] [-o lifetime=DAYS] - rclone backend restore s3:bucket [-o priority=PRIORITY] [-o lifetime=DAYS] + rclone backend restore s3:bucket/path/to/object -o priority=PRIORITY -o lifetime=DAYS + rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS + rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS This flag also obeys the filters. 
Test first with --interactive/-i or --dry-run flags - rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard + rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1 All the objects shown will be marked for restore, then - rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard + rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1 It returns a list of status dictionaries with Remote and Status keys. The Status will be OK if it was successful or an error message @@ -3205,11 +3244,11 @@ if not. [ { "Status": "OK", - "Path": "test.txt" + "Remote": "test.txt" }, { "Status": "OK", - "Path": "test/file4.txt" + "Remote": "test/file4.txt" } ] @@ -3221,6 +3260,51 @@ Options: - "lifetime": Lifetime of the active copy in days - "priority": Priority of restore: Standard|Expedited|Bulk +### restore-status + +Show the restore status for objects being restored from GLACIER to normal storage + + rclone backend restore-status remote: [options] [+] + +This command can be used to show the status for objects being restored from GLACIER +to normal storage. + +Usage Examples: + + rclone backend restore-status s3:bucket/path/to/object + rclone backend restore-status s3:bucket/path/to/directory + rclone backend restore-status -o all s3:bucket/path/to/directory + +This command does not obey the filters. + +It returns a list of status dictionaries. + + [ + { + "Remote": "file.txt", + "VersionID": null, + "RestoreStatus": { + "IsRestoreInProgress": true, + "RestoreExpiryDate": "2023-09-06T12:29:19+01:00" + }, + "StorageClass": "GLACIER" + }, + { + "Remote": "test.pdf", + "VersionID": null, + "RestoreStatus": { + "IsRestoreInProgress": false, + "RestoreExpiryDate": "2023-09-06T12:29:19+01:00" + }, + "StorageClass": "DEEP_ARCHIVE" + } + ] + + +Options: + +- "all": if set then show all objects, not just ones with restore status + ### list-multipart-uploads List the unfinished multipart uploads @@ -3315,6 +3399,30 @@ It may return "Enabled", "Suspended" or "Unversioned". Note that once versioning has been enabled the status can't be set back to "Unversioned". +### set + +Set command for updating the config parameters. + + rclone backend set remote: [options] [+] + +This set command can be used to update the config parameters +for a running s3 backend. + +Usage Examples: + + rclone backend set s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2] + rclone rc backend/command command=set fs=s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2] + rclone rc backend/command command=set fs=s3: -o session_token=X -o access_key_id=X -o secret_access_key=X + +The option keys are named as they are in the config file. + +This rebuilds the connection to the s3 backend when it is called with +the new parameters. Only new parameters need be passed as the values +will default to those currently in use. + +It doesn't return anything. + + {{< rem autogenerated options stop >}} ### Anonymous access to public buckets diff --git a/docs/content/sftp.md b/docs/content/sftp.md index 643d46765..f0d4a5651 100644 --- a/docs/content/sftp.md +++ b/docs/content/sftp.md @@ -556,6 +556,42 @@ Properties: - Type: bool - Default: false +#### --sftp-ssh + +Path and arguments to external ssh binary. + +Normally rclone will use its internal ssh library to connect to the +SFTP server. However it does not implement all possible ssh options so +it may be desirable to use an external ssh binary. 
+ +Rclone ignores all the internal config if you use this option and +expects you to configure the ssh binary with the user/host/port and +any other options you need. + +**Important** The ssh command must log in without asking for a +password so needs to be configured with keys or certificates. + +Rclone will run the command supplied either with the additional +arguments "-s sftp" to access the SFTP subsystem or with commands such +as "md5sum /path/to/file" appended to read checksums. + +Any arguments with spaces in should be surrounded by "double quotes". + +An example setting might be: + + ssh -o ServerAliveInterval=20 user@example.com + +Note that when using an external ssh binary rclone makes a new ssh +connection for every hash it calculates. + + +Properties: + +- Config: ssh +- Env Var: RCLONE_SFTP_SSH +- Type: SpaceSepList +- Default: + ### Advanced options Here are the Advanced options specific to sftp (SSH/SFTP). @@ -608,6 +644,18 @@ E.g. if shared folders can be found in directories representing volumes: E.g. if home directory can be found in a shared folder called "home": rclone sync /home/local/directory remote:/home/directory --sftp-path-override /volume1/homes/USER/directory + +To specify only the path to the SFTP remote's root, and allow rclone to add any relative subpaths automatically (including unwrapping/decrypting remotes as necessary), add the '@' character to the beginning of the path. + +E.g. the first example above could be rewritten as: + + rclone sync /home/local/directory remote:/directory --sftp-path-override @/volume2 + +Note that when using this method with Synology "home" folders, the full "/homes/USER" path should be specified instead of "/home". + +E.g. the second example above should be rewritten as: + + rclone sync /home/local/directory remote:/homes/USER/directory --sftp-path-override @/volume1 Properties: @@ -703,6 +751,15 @@ Specifies the path or command to run a sftp server on the remote host. The subsystem option is ignored when server_command is defined. +If adding server_command to the configuration file please note that +it should not be enclosed in quotes, since that will make rclone fail. + +A working example is: + + [remote_name] + type = sftp + server_command = sudo /usr/libexec/openssh/sftp-server + Properties: - Config: server_command @@ -941,6 +998,24 @@ Properties: - Type: SpaceSepList - Default: +#### --sftp-socks-proxy + +Socks 5 proxy host. + +Supports the format user:pass@host:port, user@host:port, host:port. + +Example: + + myUser:myPass@localhost:9005 + + +Properties: + +- Config: socks_proxy +- Env Var: RCLONE_SFTP_SOCKS_PROXY +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/sharefile.md b/docs/content/sharefile.md index a86b36d79..a362f1dcc 100644 --- a/docs/content/sharefile.md +++ b/docs/content/sharefile.md @@ -154,6 +154,32 @@ as they can't be used in JSON strings. Here are the Standard options specific to sharefile (Citrix Sharefile). +#### --sharefile-client-id + +OAuth Client Id. + +Leave blank normally. + +Properties: + +- Config: client_id +- Env Var: RCLONE_SHAREFILE_CLIENT_ID +- Type: string +- Required: false + +#### --sharefile-client-secret + +OAuth Client Secret. + +Leave blank normally. + +Properties: + +- Config: client_secret +- Env Var: RCLONE_SHAREFILE_CLIENT_SECRET +- Type: string +- Required: false + #### --sharefile-root-folder-id ID of the root folder. 
@@ -183,6 +209,43 @@ Properties: Here are the Advanced options specific to sharefile (Citrix Sharefile). +#### --sharefile-token + +OAuth Access Token as a JSON blob. + +Properties: + +- Config: token +- Env Var: RCLONE_SHAREFILE_TOKEN +- Type: string +- Required: false + +#### --sharefile-auth-url + +Auth server URL. + +Leave blank to use the provider defaults. + +Properties: + +- Config: auth_url +- Env Var: RCLONE_SHAREFILE_AUTH_URL +- Type: string +- Required: false + +#### --sharefile-token-url + +Token server url. + +Leave blank to use the provider defaults. + +Properties: + +- Config: token_url +- Env Var: RCLONE_SHAREFILE_TOKEN_URL +- Type: string +- Required: false + #### --sharefile-upload-cutoff Cutoff for switching to multipart upload. diff --git a/rclone.1 b/rclone.1 index 92ff6ca63..bdf35b4d7 100644 --- a/rclone.1 +++ b/rclone.1 @@ -1,7 +1,7 @@ .\"t .\" Automatically generated by Pandoc 2.9.2.1 .\" -.TH "rclone" "1" "Jun 30, 2023" "User Manual" "" +.TH "rclone" "1" "Sep 11, 2023" "User Manual" "" .hy .SH Rclone syncs your files to cloud storage .PP @@ -24,7 +24,7 @@ Donate. (https://rclone.org/donate/) Rclone is a command-line program to manage files on cloud storage. It is a feature-rich alternative to cloud vendors\[aq] web storage interfaces. -Over 40 cloud storage products support rclone including S3 object +Over 70 cloud storage products support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols. .PP @@ -205,6 +205,8 @@ IONOS Cloud .IP \[bu] 2 Koofr .IP \[bu] 2 +Leviia Object Storage +.IP \[bu] 2 Liara Object Storage .IP \[bu] 2 Mail.ru Cloud @@ -247,10 +249,14 @@ premiumize.me .IP \[bu] 2 put.io .IP \[bu] 2 +Proton Drive +.IP \[bu] 2 QingStor .IP \[bu] 2 Qiniu Cloud Object Storage (Kodo) .IP \[bu] 2 +Quatrix by Maytech +.IP \[bu] 2 Rackspace Cloud Files .IP \[bu] 2 rsync.net @@ -273,6 +279,8 @@ StackPath .IP \[bu] 2 Storj .IP \[bu] 2 +Synology +.IP \[bu] 2 SugarSync .IP \[bu] 2 Tencent Cloud Object Storage (COS) @@ -340,6 +348,9 @@ run \f[C]rclone -h\f[R]. Already installed rclone can be easily updated to the latest version using the rclone selfupdate (https://rclone.org/commands/rclone_selfupdate/) command. +.PP +See the release signing docs (https://rclone.org/release_signing/) for +how to verify signatures on the release. .SS Script installation .PP To install rclone on Linux/macOS/BSD systems, run: @@ -713,6 +724,41 @@ ls \[ti]/data/mount kill %1 \f[R] .fi +.SS Snap installation +.PP +[IMAGE: Get it from the Snap +Store (https://snapcraft.io/static/images/badges/en/snap-store-black.svg)] (https://snapcraft.io/rclone) +.PP +Make sure you have Snapd +installed (https://snapcraft.io/docs/installing-snapd) +.IP +.nf +\f[C] +$ sudo snap install rclone +\f[R] +.fi +.PP +Due to the strict confinement of Snap, rclone snap cannot acess real +/home/$USER/.config/rclone directory, default config path is as below. +.IP \[bu] 2 +Default config directory: +.RS 2 +.IP \[bu] 2 +/home/$USER/snap/rclone/current/.config/rclone +.RE +.PP +Note: Due to the strict confinement of Snap, \f[C]rclone mount\f[R] +feature is \f[C]not\f[R] supported. +.PP +If mounting is wanted, either install a precompiled binary or enable the +relevant option when installing from source. +.PP +Note that this is controlled by community +maintainer (https://github.com/boukendesho/rclone-snap) not the rclone +developers so it may be out of date. +Its current version is as below. 
+.PP +[IMAGE: rclone (https://snapcraft.io/rclone/badge.svg)] (https://snapcraft.io/rclone) .SS Source installation .PP Make sure you have git and Go (https://golang.org/) installed. @@ -1157,8 +1203,12 @@ premiumize.me (https://rclone.org/premiumizeme/) .IP \[bu] 2 put.io (https://rclone.org/putio/) .IP \[bu] 2 +Proton Drive (https://rclone.org/protondrive/) +.IP \[bu] 2 QingStor (https://rclone.org/qingstor/) .IP \[bu] 2 +Quatrix by Maytech (https://rclone.org/quatrix/) +.IP \[bu] 2 Seafile (https://rclone.org/seafile/) .IP \[bu] 2 SFTP (https://rclone.org/sftp/) @@ -1238,7 +1288,7 @@ rclone config [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1256,6 +1306,9 @@ Disconnects user from remote rclone config dump (https://rclone.org/commands/rclone_config_dump/) - Dump the config file as JSON. .IP \[bu] 2 +rclone config edit (https://rclone.org/commands/rclone_config_edit/) - +Enter an interactive configuration session. +.IP \[bu] 2 rclone config file (https://rclone.org/commands/rclone_config_file/) - Show path of configuration file in use. .IP \[bu] 2 @@ -1274,6 +1327,11 @@ rclone config reconnect (https://rclone.org/commands/rclone_config_reconnect/) - Re-authenticates user with remote. .IP \[bu] 2 +rclone config +redacted (https://rclone.org/commands/rclone_config_redacted/) - Print +redacted (decrypted) config file, or the redacted config for a single +remote. +.IP \[bu] 2 rclone config show (https://rclone.org/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote. .IP \[bu] 2 @@ -1386,10 +1444,99 @@ rclone copy source:path dest:path [flags] -h, --help help for copy \f[R] .fi +.SS Copy Options +.PP +Flags for anything which can Copy a file. +.IP +.nf +\f[C] + --check-first Do all the checks before starting transfers + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). 
+ --compare-dest stringArray Include additional comma separated server-side paths during comparison + --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination + --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default \[dq]HARD\[dq]) + --ignore-case-sync Ignore case when synchronizing + --ignore-checksum Skip post copy check of checksums + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum + -I, --ignore-times Don\[aq]t skip files that match size and time - transfer all files + --immutable Do not modify files, fail if existing files have been modified + --inplace Download directly to destination file instead of atomic download to temp/rename + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) + --max-duration Duration Maximum duration rclone will transfer data for (default 0s) + --max-transfer SizeSuffix Maximum size of data to transfer (default off) + -M, --metadata If set, preserve metadata when copying objects + --modify-window Duration Max time diff to be considered the same (default 1ns) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) + --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) + --no-check-dest Don\[aq]t check the destination, copy regardless + --no-traverse Don\[aq]t traverse destination file system on copy + --no-update-modtime Don\[aq]t update destination mod-time if files identical + --order-by string Instructions on how to order the transfers, e.g. \[aq]size,descending\[aq] + --refresh-times Refresh the modtime of remote files + --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs + --size-only Skip based on size only, not mod-time or checksum + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki) + -u, --update Skip files that are newer on the destination +\f[R] +.fi +.SS Important Options +.PP +Important flags useful for most commands. +.IP +.nf +\f[C] + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +\f[R] +.fi +.SS Filter Options +.PP +Flags for filtering directory listings. 
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.SS Listing Options +.PP +Flags for listing directories. +.IP +.nf +\f[C] + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1458,10 +1605,118 @@ rclone sync source:path dest:path [flags] -h, --help help for sync \f[R] .fi +.SS Copy Options +.PP +Flags for anything which can Copy a file. +.IP +.nf +\f[C] + --check-first Do all the checks before starting transfers + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). 
+ --compare-dest stringArray Include additional comma separated server-side paths during comparison + --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination + --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default \[dq]HARD\[dq]) + --ignore-case-sync Ignore case when synchronizing + --ignore-checksum Skip post copy check of checksums + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum + -I, --ignore-times Don\[aq]t skip files that match size and time - transfer all files + --immutable Do not modify files, fail if existing files have been modified + --inplace Download directly to destination file instead of atomic download to temp/rename + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) + --max-duration Duration Maximum duration rclone will transfer data for (default 0s) + --max-transfer SizeSuffix Maximum size of data to transfer (default off) + -M, --metadata If set, preserve metadata when copying objects + --modify-window Duration Max time diff to be considered the same (default 1ns) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) + --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) + --no-check-dest Don\[aq]t check the destination, copy regardless + --no-traverse Don\[aq]t traverse destination file system on copy + --no-update-modtime Don\[aq]t update destination mod-time if files identical + --order-by string Instructions on how to order the transfers, e.g. \[aq]size,descending\[aq] + --refresh-times Refresh the modtime of remote files + --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs + --size-only Skip based on size only, not mod-time or checksum + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki) + -u, --update Skip files that are newer on the destination +\f[R] +.fi +.SS Sync Options +.PP +Flags just used for \f[C]rclone sync\f[R]. +.IP +.nf +\f[C] + --backup-dir string Make backups into hierarchy based in DIR + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --ignore-errors Delete even if there are I/O errors + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off) + --suffix string Suffix to add to changed files + --suffix-keep-extension Preserve the extension when using --suffix + --track-renames When synchronizing, track file renames and do a server-side move if possible + --track-renames-strategy string Strategies to use when synchronizing using track-renames hash|modtime|leaf (default \[dq]hash\[dq]) +\f[R] +.fi +.SS Important Options +.PP +Important flags useful for most commands. 
+.IP +.nf +\f[C] + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +\f[R] +.fi +.SS Filter Options +.PP +Flags for filtering directory listings. +.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.SS Listing Options +.PP +Flags for listing directories. +.IP +.nf +\f[C] + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1515,10 +1770,99 @@ rclone move source:path dest:path [flags] -h, --help help for move \f[R] .fi +.SS Copy Options +.PP +Flags for anything which can Copy a file. +.IP +.nf +\f[C] + --check-first Do all the checks before starting transfers + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). 
+ --compare-dest stringArray Include additional comma separated server-side paths during comparison + --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination + --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default \[dq]HARD\[dq]) + --ignore-case-sync Ignore case when synchronizing + --ignore-checksum Skip post copy check of checksums + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum + -I, --ignore-times Don\[aq]t skip files that match size and time - transfer all files + --immutable Do not modify files, fail if existing files have been modified + --inplace Download directly to destination file instead of atomic download to temp/rename + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) + --max-duration Duration Maximum duration rclone will transfer data for (default 0s) + --max-transfer SizeSuffix Maximum size of data to transfer (default off) + -M, --metadata If set, preserve metadata when copying objects + --modify-window Duration Max time diff to be considered the same (default 1ns) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) + --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) + --no-check-dest Don\[aq]t check the destination, copy regardless + --no-traverse Don\[aq]t traverse destination file system on copy + --no-update-modtime Don\[aq]t update destination mod-time if files identical + --order-by string Instructions on how to order the transfers, e.g. \[aq]size,descending\[aq] + --refresh-times Refresh the modtime of remote files + --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs + --size-only Skip based on size only, not mod-time or checksum + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki) + -u, --update Skip files that are newer on the destination +\f[R] +.fi +.SS Important Options +.PP +Important flags useful for most commands. +.IP +.nf +\f[C] + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +\f[R] +.fi +.SS Filter Options +.PP +Flags for filtering directory listings. 
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.SS Listing Options +.PP +Flags for listing directories. +.IP +.nf +\f[C] + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1580,10 +1924,61 @@ rclone delete remote:path [flags] --rmdirs rmdirs removes empty directories but leaves root intact \f[R] .fi +.SS Important Options +.PP +Important flags useful for most commands. +.IP +.nf +\f[C] + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +\f[R] +.fi +.SS Filter Options +.PP +Flags for filtering directory listings. 
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.SS Listing Options +.PP +Flags for listing directories. +.IP +.nf +\f[C] + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1616,10 +2011,21 @@ rclone purge remote:path [flags] -h, --help help for purge \f[R] .fi +.SS Important Options +.PP +Important flags useful for most commands. +.IP +.nf +\f[C] + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1639,10 +2045,21 @@ rclone mkdir remote:path [flags] -h, --help help for mkdir \f[R] .fi +.SS Important Options +.PP +Important flags useful for most commands. +.IP +.nf +\f[C] + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. 
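+.PP
+As an illustrative sketch (the remote name \f[C]remote:\f[R] and the
+path are placeholders, not taken from this manual), the Important
+Options above combine with the command like any other flags:
+.IP
+.nf
+\f[C]
+# Preview the directory creation without changing anything
+rclone mkdir remote:backup/2023 --dry-run -v
+\f[R]
+.fi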
-.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1673,10 +2090,21 @@ rclone rmdir remote:path [flags] -h, --help help for rmdir \f[R] .fi +.SS Important Options +.PP +Important flags useful for most commands. +.IP +.nf +\f[C] + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1766,10 +2194,59 @@ rclone check source:path dest:path [flags] --one-way Check one way only, source files must exist on remote \f[R] .fi +.SS Check Options +.PP +Flags used for \f[C]rclone check\f[R]. +.IP +.nf +\f[C] + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) +\f[R] +.fi +.SS Filter Options +.PP +Flags for filtering directory listings. +.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.SS Listing Options +.PP +Flags for listing directories. +.IP +.nf +\f[C] + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. 
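+.PP
+A hedged example of the check flags documented above (the remote name
+and paths are placeholders):
+.IP
+.nf
+\f[C]
+# Files present in the source must exist on the remote;
+# files only on the remote are not treated as errors
+rclone check /home/user/docs remote:docs --one-way
+\f[R]
+.fi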
@@ -1835,10 +2312,50 @@ rclone ls remote:path [flags] -h, --help help for ls \f[R] .fi +.SS Filter Options +.PP +Flags for filtering directory listings. +.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.SS Listing Options +.PP +Flags for listing directories. +.IP +.nf +\f[C] + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1920,10 +2437,50 @@ rclone lsd remote:path [flags] -R, --recursive Recurse into the listing \f[R] .fi +.SS Filter Options +.PP +Flags for filtering directory listings. 
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.SS Listing Options +.PP +Flags for listing directories. +.IP +.nf +\f[C] + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -1989,10 +2546,50 @@ rclone lsl remote:path [flags] -h, --help help for lsl \f[R] .fi +.SS Filter Options +.PP +Flags for filtering directory listings. 
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.SS Listing Options +.PP +Flags for listing directories. +.IP +.nf +\f[C] + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -2035,10 +2632,50 @@ rclone md5sum remote:path [flags] --output-file string Output hashsums to a file rather than the terminal \f[R] .fi +.SS Filter Options +.PP +Flags for filtering directory listings. 
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.SS Listing Options +.PP +Flags for listing directories. +.IP +.nf +\f[C] + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -2084,10 +2721,50 @@ rclone sha1sum remote:path [flags] --output-file string Output hashsums to a file rather than the terminal \f[R] .fi +.SS Filter Options +.PP +Flags for filtering directory listings. 
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.SS Listing Options +.PP +Flags for listing directories. +.IP +.nf +\f[C] + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -2126,10 +2803,50 @@ rclone size remote:path [flags] --json Format output as JSON \f[R] .fi +.SS Filter Options +.PP +Flags for filtering directory listings. 
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.SS Listing Options +.PP +Flags for listing directories. +.IP +.nf +\f[C] + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -2203,7 +2920,7 @@ rclone version [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -2228,10 +2945,21 @@ rclone cleanup remote:path [flags] -h, --help help for cleanup \f[R] .fi +.SS Important Options +.PP +Important flags useful for most commands. +.IP +.nf +\f[C] + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -2403,10 +3131,21 @@ rclone dedupe [mode] remote:path [flags] -h, --help help for dedupe \f[R] .fi +.SS Important Options +.PP +Important flags useful for most commands. 
+.IP +.nf +\f[C] + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -2502,7 +3241,7 @@ rclone about remote: [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -2539,7 +3278,7 @@ rclone authorize [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -2606,10 +3345,21 @@ rclone backend remote:path [opts] [flags] -o, --option stringArray Option in the form name=value or name \f[R] .fi +.SS Important Options +.PP +Important flags useful for most commands. +.IP +.nf +\f[C] + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -2640,23 +3390,105 @@ rclone bisync remote1:path1 remote2:path2 [flags] .IP .nf \f[C] - --check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort. - --check-filename string Filename for --check-access (default: RCLONE_TEST) - --check-sync string Controls comparison of final listings: true|false|only (default: true) (default \[dq]true\[dq]) - --filters-file string Read filtering patterns from a file - --force Bypass --max-delete safety check and run the sync. Consider using with --verbose - -h, --help help for bisync - --localtime Use local time in listings (default: UTC) - --no-cleanup Retain working files (useful for troubleshooting and testing). - --remove-empty-dirs Remove empty directories at the final cleanup step. - -1, --resync Performs the resync run. Path1 files may overwrite Path2 versions. Consider using --verbose or --dry-run first. - --workdir string Use custom working dir - useful for testing. (default: $HOME/.cache/rclone/bisync) + --check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort. + --check-filename string Filename for --check-access (default: RCLONE_TEST) + --check-sync string Controls comparison of final listings: true|false|only (default: true) (default \[dq]true\[dq]) + --create-empty-src-dirs Sync creation and deletion of empty directories. (Not compatible with --remove-empty-dirs) + --filters-file string Read filtering patterns from a file + --force Bypass --max-delete safety check and run the sync. Consider using with --verbose + -h, --help help for bisync + --ignore-listing-checksum Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks) + --localtime Use local time in listings (default: UTC) + --no-cleanup Retain working files (useful for troubleshooting and testing). 
+ --remove-empty-dirs Remove ALL empty directories at the final cleanup step. + --resilient Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk! + -1, --resync Performs the resync run. Path1 files may overwrite Path2 versions. Consider using --verbose or --dry-run first. + --workdir string Use custom working dir - useful for testing. (default: $HOME/.cache/rclone/bisync) +\f[R] +.fi +.SS Copy Options +.PP +Flags for anything which can Copy a file. +.IP +.nf +\f[C] + --check-first Do all the checks before starting transfers + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). + --compare-dest stringArray Include additional comma separated server-side paths during comparison + --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination + --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default \[dq]HARD\[dq]) + --ignore-case-sync Ignore case when synchronizing + --ignore-checksum Skip post copy check of checksums + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum + -I, --ignore-times Don\[aq]t skip files that match size and time - transfer all files + --immutable Do not modify files, fail if existing files have been modified + --inplace Download directly to destination file instead of atomic download to temp/rename + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) + --max-duration Duration Maximum duration rclone will transfer data for (default 0s) + --max-transfer SizeSuffix Maximum size of data to transfer (default off) + -M, --metadata If set, preserve metadata when copying objects + --modify-window Duration Max time diff to be considered the same (default 1ns) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) + --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) + --no-check-dest Don\[aq]t check the destination, copy regardless + --no-traverse Don\[aq]t traverse destination file system on copy + --no-update-modtime Don\[aq]t update destination mod-time if files identical + --order-by string Instructions on how to order the transfers, e.g. \[aq]size,descending\[aq] + --refresh-times Refresh the modtime of remote files + --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs + --size-only Skip based on size only, not mod-time or checksum + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki) + -u, --update Skip files that are newer on the destination +\f[R] +.fi +.SS Important Options +.PP +Important flags useful for most commands. +.IP +.nf +\f[C] + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +\f[R] +.fi +.SS Filter Options +.PP +Flags for filtering directory listings. 
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) \f[R] .fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -2740,10 +3572,50 @@ rclone cat remote:path [flags] --tail int Only print the last N characters \f[R] .fi +.SS Filter Options +.PP +Flags for filtering directory listings. 
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.SS Listing Options +.PP +Flags for listing directories. +.IP +.nf +\f[C] + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -2822,10 +3694,50 @@ rclone checksum sumfile src:path [flags] --one-way Check one way only, source files must exist on remote \f[R] .fi +.SS Filter Options +.PP +Flags for filtering directory listings. 
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.SS Listing Options +.PP +Flags for listing directories. +.IP +.nf +\f[C] + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -2846,7 +3758,7 @@ Run with \f[C]--help\f[R] to list the supported shells. .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -2860,6 +3772,10 @@ fish (https://rclone.org/commands/rclone_completion_fish/) - Output fish completion script for rclone. .IP \[bu] 2 rclone completion +powershell (https://rclone.org/commands/rclone_completion_powershell/) - +Output powershell completion script for rclone. +.IP \[bu] 2 +rclone completion zsh (https://rclone.org/commands/rclone_completion_zsh/) - Output zsh completion script for rclone. .SH rclone completion bash @@ -2907,7 +3823,7 @@ rclone completion bash [output_file] [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone completion (https://rclone.org/commands/rclone_completion/) - Output completion script for a given shell. 
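+.PP
+A minimal sketch of one way to load the script for the current shell
+session (the file path is arbitrary, chosen here for illustration):
+.IP
+.nf
+\f[C]
+rclone completion bash /tmp/rclone-completion.bash
+source /tmp/rclone-completion.bash
+\f[R]
+.fi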
@@ -2956,14 +3872,14 @@ rclone completion fish [output_file] [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone completion (https://rclone.org/commands/rclone_completion/) - Output completion script for a given shell. .SH rclone completion powershell .PP -Generate the autocompletion script for powershell -.SH Synopsis +Output powershell completion script for rclone. +.SS Synopsis .PP Generate the autocompletion script for powershell. .PP @@ -2977,18 +3893,20 @@ rclone completion powershell | Out-String | Invoke-Expression .PP To load completions for every new session, add the output of the above command to your powershell profile. +.PP +If output_file is \[dq]-\[dq] or missing, then the output will be +written to stdout. .IP .nf \f[C] -rclone completion powershell [flags] +rclone completion powershell [output_file] [flags] \f[R] .fi -.SH Options +.SS Options .IP .nf \f[C] - -h, --help help for powershell - --no-descriptions disable completion descriptions + -h, --help help for powershell \f[R] .fi .PP @@ -2997,7 +3915,7 @@ not listed here. .SH SEE ALSO .IP \[bu] 2 rclone completion (https://rclone.org/commands/rclone_completion/) - -Generate the autocompletion script for the specified shell +Output completion script for a given shell. .SH rclone completion zsh .PP Output zsh completion script for rclone. @@ -3043,7 +3961,7 @@ rclone completion zsh [output_file] [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone completion (https://rclone.org/commands/rclone_completion/) - Output completion script for a given shell. @@ -3208,7 +4126,7 @@ rclone config create name type [key value]* [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. @@ -3231,7 +4149,7 @@ rclone config delete name [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. @@ -3261,7 +4179,7 @@ rclone config disconnect remote: [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. @@ -3284,14 +4202,14 @@ rclone config dump [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. .SH rclone config edit .PP Enter an interactive configuration session. -.SH Synopsis +.SS Synopsis .PP Enter an interactive configuration session where you can setup new remotes and manage existing ones. @@ -3302,7 +4220,7 @@ You may also set or remove a password to protect your configuration. rclone config edit [flags] \f[R] .fi -.SH Options +.SS Options .IP .nf \f[C] @@ -3335,7 +4253,7 @@ rclone config file [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. 
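+.PP
+A minimal sketch of typical use (the path shown in the comment is a
+hypothetical example, not a guaranteed output format):
+.IP
+.nf
+\f[C]
+rclone config file
+# prints the location of the active config, e.g.
+# /home/user/.config/rclone/rclone.conf
+\f[R]
+.fi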
-.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. @@ -3376,7 +4294,7 @@ rclone config password name [key value]+ [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. @@ -3399,7 +4317,7 @@ rclone config paths [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. @@ -3422,7 +4340,7 @@ rclone config providers [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. @@ -3452,7 +4370,43 @@ rclone config reconnect remote: [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO +.IP \[bu] 2 +rclone config (https://rclone.org/commands/rclone_config/) - Enter an +interactive configuration session. +.SH rclone config redacted +.PP +Print redacted (decrypted) config file, or the redacted config for a +single remote. +.SS Synopsis +.PP +This prints a redacted copy of the config file, either the whole config +file or for a given remote. +.PP +The config file will be redacted by replacing all passwords and other +sensitive info with XXX. +.PP +This makes the config file suitable for posting online for support. +.PP +It should be double checked before posting as the redaction may not be +perfect. +.IP +.nf +\f[C] +rclone config redacted [] [flags] +\f[R] +.fi +.SS Options +.IP +.nf +\f[C] + -h, --help help for redacted +\f[R] +.fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. +.SH SEE ALSO .IP \[bu] 2 rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. @@ -3475,7 +4429,7 @@ rclone config show [] [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. @@ -3498,7 +4452,7 @@ rclone config touch [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. @@ -3663,7 +4617,7 @@ rclone config update name [key value]+ [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. @@ -3691,7 +4645,7 @@ rclone config userinfo remote: [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. 
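+.PP
+As a hedged illustration of the command above (\f[C]remote:\f[R] is a
+placeholder for any configured remote whose provider reports account
+information):
+.IP
+.nf
+\f[C]
+rclone config userinfo remote:
+\f[R]
+.fi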
@@ -3750,10 +4704,99 @@ rclone copyto source:path dest:path [flags] -h, --help help for copyto \f[R] .fi +.SS Copy Options +.PP +Flags for anything which can Copy a file. +.IP +.nf +\f[C] + --check-first Do all the checks before starting transfers + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). + --compare-dest stringArray Include additional comma separated server-side paths during comparison + --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination + --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default \[dq]HARD\[dq]) + --ignore-case-sync Ignore case when synchronizing + --ignore-checksum Skip post copy check of checksums + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum + -I, --ignore-times Don\[aq]t skip files that match size and time - transfer all files + --immutable Do not modify files, fail if existing files have been modified + --inplace Download directly to destination file instead of atomic download to temp/rename + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) + --max-duration Duration Maximum duration rclone will transfer data for (default 0s) + --max-transfer SizeSuffix Maximum size of data to transfer (default off) + -M, --metadata If set, preserve metadata when copying objects + --modify-window Duration Max time diff to be considered the same (default 1ns) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) + --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) + --no-check-dest Don\[aq]t check the destination, copy regardless + --no-traverse Don\[aq]t traverse destination file system on copy + --no-update-modtime Don\[aq]t update destination mod-time if files identical + --order-by string Instructions on how to order the transfers, e.g. \[aq]size,descending\[aq] + --refresh-times Refresh the modtime of remote files + --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs + --size-only Skip based on size only, not mod-time or checksum + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki) + -u, --update Skip files that are newer on the destination +\f[R] +.fi +.SS Important Options +.PP +Important flags useful for most commands. +.IP +.nf +\f[C] + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +\f[R] +.fi +.SS Filter Options +.PP +Flags for filtering directory listings. 
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.SS Listing Options +.PP +Flags for listing directories. +.IP +.nf +\f[C] + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -3797,10 +4840,21 @@ rclone copyurl https://example.com dest:path [flags] --stdout Write the output to stdout rather than a file \f[R] .fi +.SS Important Options +.PP +Important flags useful for most commands. +.IP +.nf +\f[C] + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -3899,10 +4953,59 @@ rclone cryptcheck remote:path cryptedremote:path [flags] --one-way Check one way only, source files must exist on remote \f[R] .fi +.SS Check Options +.PP +Flags used for \f[C]rclone check\f[R]. +.IP +.nf +\f[C] + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) +\f[R] +.fi +.SS Filter Options +.PP +Flags for filtering directory listings. 
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.SS Listing Options +.PP +Flags for listing directories. +.IP +.nf +\f[C] + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -3949,7 +5052,7 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -3975,10 +5078,21 @@ rclone deletefile remote:path [flags] -h, --help help for deletefile \f[R] .fi +.SS Important Options +.PP +Important flags useful for most commands. +.IP +.nf +\f[C] + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -4190,7 +5304,7 @@ rclone gendocs output_directory [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. 
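+.PP
+For example, to generate the documentation into a scratch directory (the
+output path is illustrative):
+.IP
+.nf
+\f[C]
+rclone gendocs /tmp/rclone-docs
+\f[R]
+.fi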
-.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -4262,10 +5376,50 @@ rclone hashsum remote:path [flags] --output-file string Output hashsums to a file rather than the terminal \f[R] .fi +.SS Filter Options +.PP +Flags for filtering directory listings. +.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.SS Listing Options +.PP +Flags for listing directories. +.IP +.nf +\f[C] + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -4318,7 +5472,7 @@ rclone link remote:path [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -4348,7 +5502,7 @@ rclone listremotes [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -4538,10 +5692,50 @@ rclone lsf remote:path [flags] -s, --separator string Separator for the items in the format (default \[dq];\[dq]) \f[R] .fi +.SS Filter Options +.PP +Flags for filtering directory listings. 
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.SS Listing Options +.PP +Flags for listing directories. +.IP +.nf +\f[C] + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -4691,10 +5885,50 @@ rclone lsjson remote:path [flags] --stat Just return the info for the pointed to file \f[R] .fi +.SS Filter Options +.PP +Flags for filtering directory listings. 
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.SS Listing Options +.PP +Flags for listing directories. +.IP +.nf +\f[C] + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -5380,12 +6614,13 @@ find that you need one or the other or both. .IP .nf \f[C] ---cache-dir string Directory rclone will use for caching. ---vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) ---vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) ---vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) ---vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) ---vfs-write-back duration Time to writeback files after last use when using cache (default 5s) +--cache-dir string Directory rclone will use for caching. 
+--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
\f[R]
.fi
.PP
@@ -5405,12 +6640,14 @@ seconds.
If rclone is quit or dies with files that haven\[aq]t been uploaded,
these will be uploaded next time rclone is run with the same flags.
.PP
-If using \f[C]--vfs-cache-max-size\f[R] note that the cache may exceed
-this size for two reasons.
+If using \f[C]--vfs-cache-max-size\f[R] or
+\f[C]--vfs-cache-min-free-space\f[R] note that the cache may exceed these
+quotas for two reasons.
 Firstly because it is only checked every
 \f[C]--vfs-cache-poll-interval\f[R].
 Secondly because open files cannot be evicted from the cache.
-When \f[C]--vfs-cache-max-size\f[R] is exceeded, rclone will attempt to
+When \f[C]--vfs-cache-max-size\f[R] or
+\f[C]--vfs-cache-min-free-space\f[R] is exceeded, rclone will attempt to
 evict the least accessed files from the cache first.
 rclone will start with files that haven\[aq]t been accessed for the
 longest.
@@ -5733,6 +6970,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
      --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
      --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+      --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
      --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
      --vfs-case-insensitive If a file name not found, find a case insensitive match
@@ -5749,10 +6987,40 @@ rclone mount remote:path /path/to/mountpoint [flags]
      --write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)
\f[R]
.fi
+.SS Filter Options
+.PP
+Flags for filtering directory listings.
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -5814,10 +7082,99 @@ rclone moveto source:path dest:path [flags] -h, --help help for moveto \f[R] .fi +.SS Copy Options +.PP +Flags for anything which can Copy a file. +.IP +.nf +\f[C] + --check-first Do all the checks before starting transfers + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). 
+ --compare-dest stringArray Include additional comma separated server-side paths during comparison + --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination + --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default \[dq]HARD\[dq]) + --ignore-case-sync Ignore case when synchronizing + --ignore-checksum Skip post copy check of checksums + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum + -I, --ignore-times Don\[aq]t skip files that match size and time - transfer all files + --immutable Do not modify files, fail if existing files have been modified + --inplace Download directly to destination file instead of atomic download to temp/rename + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) + --max-duration Duration Maximum duration rclone will transfer data for (default 0s) + --max-transfer SizeSuffix Maximum size of data to transfer (default off) + -M, --metadata If set, preserve metadata when copying objects + --modify-window Duration Max time diff to be considered the same (default 1ns) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) + --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) + --no-check-dest Don\[aq]t check the destination, copy regardless + --no-traverse Don\[aq]t traverse destination file system on copy + --no-update-modtime Don\[aq]t update destination mod-time if files identical + --order-by string Instructions on how to order the transfers, e.g. \[aq]size,descending\[aq] + --refresh-times Refresh the modtime of remote files + --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs + --size-only Skip based on size only, not mod-time or checksum + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki) + -u, --update Skip files that are newer on the destination +\f[R] +.fi +.SS Important Options +.PP +Important flags useful for most commands. +.IP +.nf +\f[C] + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +\f[R] +.fi +.SS Filter Options +.PP +Flags for filtering directory listings. 
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.SS Listing Options +.PP +Flags for listing directories. +.IP +.nf +\f[C] + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -5858,6 +7215,7 @@ The supported keys are: y copy current path to clipboard Y display current path \[ha]L refresh screen (fix screen corruption) + r recalculate file sizes ? to toggle help on and off q/ESC/\[ha]c to quit \f[R] @@ -5907,10 +7265,50 @@ rclone ncdu remote:path [flags] -h, --help help for ncdu \f[R] .fi +.SS Filter Options +.PP +Flags for filtering directory listings. 
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.SS Listing Options +.PP +Flags for listing directories. +.IP +.nf +\f[C] + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -5963,7 +7361,7 @@ rclone obscure password [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -6071,7 +7469,7 @@ rclone rc commands parameter [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -6134,10 +7532,21 @@ rclone rcat remote:path [flags] --size int File size hint to preallocate (default -1) \f[R] .fi +.SS Important Options +.PP +Important flags useful for most commands. +.IP +.nf +\f[C] + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. 
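+.PP
+A typical use streams stdin into a remote object, for example (remote
+and path are illustrative):
+.IP
+.nf
+\f[C]
+echo \[dq]hello world\[dq] | rclone rcat remote:path/to/file
+\f[R]
+.fi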
-.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -6357,10 +7766,46 @@ rclone rcd * [flags] -h, --help help for rcd \f[R] .fi +.SS RC Options +.PP +Flags to control the Remote Control API. +.IP +.nf +\f[C] + --rc Enable the remote control server + --rc-addr stringArray IPaddress:Port or :Port to bind server to (default [localhost:5572]) + --rc-allow-origin string Origin which cross-domain request (CORS) can be executed from + --rc-baseurl string Prefix for URLs - leave blank for root + --rc-cert string TLS PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-enable-metrics Enable prometheus metrics on /metrics + --rc-files string Path to local files to serve on the HTTP server + --rc-htpasswd string A htpasswd file - if not provided no authentication is done + --rc-job-expire-duration Duration Expire finished async jobs older than this value (default 1m0s) + --rc-job-expire-interval Duration Interval to check for expired async jobs (default 10s) + --rc-key string TLS PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-min-tls-version string Minimum TLS version that is acceptable (default \[dq]tls1.0\[dq]) + --rc-no-auth Don\[aq]t require auth for certain methods + --rc-pass string Password for authentication + --rc-realm string Realm for authentication + --rc-salt string Password hashing salt (default \[dq]dlPL2MqE\[dq]) + --rc-serve Enable the serving of remote objects + --rc-server-read-timeout Duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s) + --rc-template string User-specified template + --rc-user string User name for authentication + --rc-web-fetch-url string URL to fetch the releases for webgui (default \[dq]https://api.github.com/repos/rclone/rclone-webui-react/releases/latest\[dq]) + --rc-web-gui Launch WebGUI on localhost + --rc-web-gui-force-update Force update to latest version of web gui + --rc-web-gui-no-open-browser Don\[aq]t open the browser automatically + --rc-web-gui-update Check and update to latest version of web gui +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -6383,7 +7828,10 @@ For example the delete (https://rclone.org/commands/rclone_delete/) command will delete files but leave the directory structure (unless used with option \f[C]--rmdirs\f[R]). .PP -To delete a path and any objects in it, use +This will delete \f[C]--checkers\f[R] directories concurrently so if you +have thousands of empty directories consider increasing this number. +.PP +To delete a path and any objects in it, use the purge (https://rclone.org/commands/rclone_purge/) command. .IP .nf @@ -6399,10 +7847,21 @@ rclone rmdirs remote:path [flags] --leave-root Do not remove root directory if empty \f[R] .fi +.SS Important Options +.PP +Important flags useful for most commands. +.IP +.nf +\f[C] + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. 
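+.PP
+For example, to remove empty directories under a path while keeping the
+root itself, with more concurrency (the values are illustrative):
+.IP
+.nf
+\f[C]
+rclone rmdirs --leave-root --checkers 16 remote:path
+\f[R]
+.fi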
-.SS SEE ALSO
+.SH SEE ALSO
.IP \[bu] 2
rclone (https://rclone.org/commands/rclone/) - Show help for rclone
commands, flags and backends.
@@ -6414,7 +7873,8 @@ Update the rclone binary.
This command downloads the latest release of rclone and replaces the
currently running binary.
The download is verified with a hashsum and cryptographically signed
-signature.
+signature; see the release signing
+docs (https://rclone.org/release_signing/) for details.
.PP
If used without flags (or with implied \f[C]--stable\f[R] flag), this
command will install the latest stable release.
@@ -6459,9 +7919,8 @@ This command with the default \f[C]--package zip\f[R] will update only
the rclone executable so the local manual may become inaccurate after
it.
.PP
-The \f[C]rclone mount\f[R] command
-(https://rclone.org/commands/rclone_mount/) may or may not support
-extended FUSE options depending on the build and OS.
+The rclone mount (https://rclone.org/commands/rclone_mount/) command may
+or may not support extended FUSE options depending on the build and OS.
 \f[C]selfupdate\f[R] will refuse to update if the capability would be
 discarded.
.PP
@@ -6497,7 +7956,7 @@ rclone selfupdate [flags]
.PP
See the global flags page (https://rclone.org/flags/) for global options
not listed here.
-.SS SEE ALSO
+.SH SEE ALSO
.IP \[bu] 2
rclone (https://rclone.org/commands/rclone/) - Show help for rclone
commands, flags and backends.
@@ -6532,7 +7991,7 @@ rclone serve [opts] [flags]
.PP
See the global flags page (https://rclone.org/flags/) for global options
not listed here.
-.SS SEE ALSO
+.SH SEE ALSO
.IP \[bu] 2
rclone (https://rclone.org/commands/rclone/) - Show help for rclone
commands, flags and backends.
@@ -6681,12 +8140,13 @@ find that you need one or the other or both.
.IP
.nf
\f[C]
---cache-dir string Directory rclone will use for caching.
---vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
---vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
---vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
---vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
---vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+--cache-dir string Directory rclone will use for caching.
+--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
\f[R]
.fi
.PP
@@ -6706,12 +8166,14 @@ seconds.
If rclone is quit or dies with files that haven\[aq]t been uploaded,
these will be uploaded next time rclone is run with the same flags.
.PP
-If using \f[C]--vfs-cache-max-size\f[R] note that the cache may exceed
-this size for two reasons.
+If using \f[C]--vfs-cache-max-size\f[R] or
+\f[C]--vfs-cache-min-free-space\f[R] note that the cache may exceed these
+quotas for two reasons.
 Firstly because it is only checked every
 \f[C]--vfs-cache-poll-interval\f[R].
 Secondly because open files cannot be evicted from the cache.
-When \f[C]--vfs-cache-max-size\f[R] is exceeded, rclone will attempt to
+When \f[C]--vfs-cache-max-size\f[R] or
+\f[C]--vfs-cache-min-free-space\f[R] is exceeded, rclone will attempt to
 evict the least accessed files from the cache first.
 rclone will start with files that haven\[aq]t been accessed for the
 longest.
@@ -7021,6 +8483,7 @@ rclone serve dlna remote:path [flags]
      --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
      --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+      --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
      --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
      --vfs-case-insensitive If a file name not found, find a case insensitive match
@@ -7035,10 +8498,40 @@ rclone serve dlna remote:path [flags]
      --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
\f[R]
.fi
+.SS Filter Options
+.PP
+Flags for filtering directory listings.
+.IP
+.nf
+\f[C]
+      --delete-excluded Delete files on dest excluded from sync
+      --exclude stringArray Exclude files matching pattern
+      --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
+      --exclude-if-present stringArray Exclude directories if filename is present
+      --files-from stringArray Read list of source-file names from file (use - to read from stdin)
+      --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
+  -f, --filter stringArray Add a file filtering rule
+      --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+      --ignore-case Ignore case in filters (case insensitive)
+      --include stringArray Include files matching pattern
+      --include-from stringArray Read file include patterns from file (use - to read from stdin)
+      --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --max-depth int If set limits the recursion depth to this (default -1)
+      --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
+      --metadata-exclude stringArray Exclude metadatas matching pattern
+      --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
+      --metadata-filter stringArray Add a metadata filtering rule
+      --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
+      --metadata-include stringArray Include metadatas matching pattern
+      --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+      --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
+\f[R]
+.fi
.PP
See the global flags page (https://rclone.org/flags/) for global options
not listed here.
-.SS SEE ALSO
+.SH SEE ALSO
.IP \[bu] 2
rclone serve (https://rclone.org/commands/rclone_serve/) - Serve a
remote over a protocol.
@@ -7191,12 +8684,13 @@ find that you need one or the other or both.
.IP
.nf
\f[C]
---cache-dir string Directory rclone will use for caching.
---vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
---vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
---vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
---vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
---vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+--cache-dir string Directory rclone will use for caching.
+--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
\f[R]
.fi
.PP
@@ -7216,12 +8710,14 @@ seconds.
If rclone is quit or dies with files that haven\[aq]t been uploaded,
these will be uploaded next time rclone is run with the same flags.
.PP
-If using \f[C]--vfs-cache-max-size\f[R] note that the cache may exceed
-this size for two reasons.
+If using \f[C]--vfs-cache-max-size\f[R] or
+\f[C]--vfs-cache-min-free-space\f[R] note that the cache may exceed these
+quotas for two reasons.
 Firstly because it is only checked every
 \f[C]--vfs-cache-poll-interval\f[R].
 Secondly because open files cannot be evicted from the cache.
-When \f[C]--vfs-cache-max-size\f[R] is exceeded, rclone will attempt to
+When \f[C]--vfs-cache-max-size\f[R] or
+\f[C]--vfs-cache-min-free-space\f[R] is exceeded, rclone will attempt to
 evict the least accessed files from the cache first.
 rclone will start with files that haven\[aq]t been accessed for the
 longest.
@@ -7549,6 +9045,7 @@ rclone serve docker [flags]
      --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
      --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+      --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
      --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
      --vfs-case-insensitive If a file name not found, find a case insensitive match
@@ -7565,10 +9062,40 @@ rclone serve docker [flags]
      --write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)
\f[R]
.fi
+.SS Filter Options
+.PP
+Flags for filtering directory listings.
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone serve (https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. @@ -7691,12 +9218,13 @@ find that you need one or the other or both. .IP .nf \f[C] ---cache-dir string Directory rclone will use for caching. ---vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) ---vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) ---vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) ---vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) ---vfs-write-back duration Time to writeback files after last use when using cache (default 5s) +--cache-dir string Directory rclone will use for caching. +--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) +--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) +--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) +--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) +--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) +--vfs-write-back duration Time to writeback files after last use when using cache (default 5s) \f[R] .fi .PP @@ -7716,12 +9244,14 @@ seconds. If rclone is quit or dies with files that haven\[aq]t been uploaded, these will be uploaded next time rclone is run with the same flags. 
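+.PP
+As a sketch of these write-back semantics for \f[C]rclone serve ftp\f[R]
+(the values are illustrative, not recommended defaults):
+.IP
+.nf
+\f[C]
+rclone serve ftp remote:path --vfs-cache-mode writes --vfs-write-back 10s
+\f[R]
+.fi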
.PP
-If using \f[C]--vfs-cache-max-size\f[R] note that the cache may exceed
-this size for two reasons.
+If using \f[C]--vfs-cache-max-size\f[R] or
+\f[C]--vfs-cache-min-free-space\f[R] note that the cache may exceed these
+quotas for two reasons.
 Firstly because it is only checked every
 \f[C]--vfs-cache-poll-interval\f[R].
 Secondly because open files cannot be evicted from the cache.
-When \f[C]--vfs-cache-max-size\f[R] is exceeded, rclone will attempt to
+When \f[C]--vfs-cache-max-size\f[R] or
+\f[C]--vfs-cache-min-free-space\f[R] is exceeded, rclone will attempt to
 evict the least accessed files from the cache first.
 rclone will start with files that haven\[aq]t been accessed for the
 longest.
@@ -8127,6 +9657,7 @@ rclone serve ftp remote:path [flags]
      --user string User name for authentication (default \[dq]anonymous\[dq])
      --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+      --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
      --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
      --vfs-case-insensitive If a file name not found, find a case insensitive match
@@ -8141,10 +9672,40 @@ rclone serve ftp remote:path [flags]
      --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
\f[R]
.fi
+.SS Filter Options
+.PP
+Flags for filtering directory listings.
+.IP
+.nf
+\f[C]
+      --delete-excluded Delete files on dest excluded from sync
+      --exclude stringArray Exclude files matching pattern
+      --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
+      --exclude-if-present stringArray Exclude directories if filename is present
+      --files-from stringArray Read list of source-file names from file (use - to read from stdin)
+      --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
+  -f, --filter stringArray Add a file filtering rule
+      --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+      --ignore-case Ignore case in filters (case insensitive)
+      --include stringArray Include files matching pattern
+      --include-from stringArray Read file include patterns from file (use - to read from stdin)
+      --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --max-depth int If set limits the recursion depth to this (default -1)
+      --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
+      --metadata-exclude stringArray Exclude metadatas matching pattern
+      --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
+      --metadata-filter stringArray Add a metadata filtering rule
+      --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
+      --metadata-include stringArray Include metadatas matching pattern
+      --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+      --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
+\f[R]
+.fi
.PP
See the global flags page
(https://rclone.org/flags/) for global options not listed here.
-.SS SEE ALSO
+.SH SEE ALSO
.IP \[bu] 2
rclone serve (https://rclone.org/commands/rclone_serve/) - Serve a
remote over a protocol.
@@ -8445,12 +10006,13 @@ find that you need one or the other or both.
.IP
.nf
\f[C]
---cache-dir string Directory rclone will use for caching.
---vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
---vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
---vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
---vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
---vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+--cache-dir string Directory rclone will use for caching.
+--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
\f[R]
.fi
.PP
@@ -8470,12 +10032,14 @@ seconds.
If rclone is quit or dies with files that haven\[aq]t been uploaded,
these will be uploaded next time rclone is run with the same flags.
.PP
-If using \f[C]--vfs-cache-max-size\f[R] note that the cache may exceed
-this size for two reasons.
+If using \f[C]--vfs-cache-max-size\f[R] or
+\f[C]--vfs-cache-min-free-space\f[R] note that the cache may exceed these
+quotas for two reasons.
 Firstly because it is only checked every
 \f[C]--vfs-cache-poll-interval\f[R].
 Secondly because open files cannot be evicted from the cache.
-When \f[C]--vfs-cache-max-size\f[R] is exceeded, rclone will attempt to
+When \f[C]--vfs-cache-max-size\f[R] or
+\f[C]--vfs-cache-min-free-space\f[R] is exceeded, rclone will attempt to
 evict the least accessed files from the cache first.
 rclone will start with files that haven\[aq]t been accessed for the
 longest.
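+.PP
+A sketch of how these cache quotas combine for \f[C]rclone serve http\f[R]
+(the sizes are illustrative):
+.IP
+.nf
+\f[C]
+rclone serve http remote:path --vfs-cache-mode full --vfs-cache-max-size 10G --vfs-cache-min-free-space 1G
+\f[R]
+.fi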
@@ -8860,6 +10424,7 @@ rclone serve http remote:path [flags] .nf \f[C] --addr stringArray IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080]) + --allow-origin string Origin which cross-domain request (CORS) can be executed from --auth-proxy string A program to use to create the backend from the auth --baseurl string Prefix for URLs - leave blank for root --cert string TLS PEM key (concatenation of certificate and CA certificate) @@ -8889,6 +10454,7 @@ rclone serve http remote:path [flags] --user string User name for authentication --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match @@ -8903,10 +10469,40 @@ rclone serve http remote:path [flags] --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) \f[R] .fi +.SS Filter Options +.PP +Flags for filtering directory listings. +.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone serve (https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. 
@@ -9107,6 +10703,7 @@ rclone serve restic remote:path [flags]
.nf
\f[C]
      --addr stringArray IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080])
+      --allow-origin string Origin which cross-domain request (CORS) can be executed from
      --append-only Disallow deletion of repository data
      --baseurl string Prefix for URLs - leave blank for root
      --cache-objects Cache listed objects (default true)
@@ -9130,7 +10727,7 @@
.PP
See the global flags page (https://rclone.org/flags/) for global options
not listed here.
-.SS SEE ALSO
+.SH SEE ALSO
.IP \[bu] 2
rclone serve (https://rclone.org/commands/rclone_serve/) - Serve a
remote over a protocol.
@@ -9295,12 +10892,13 @@ find that you need one or the other or both.
.IP
.nf
\f[C]
---cache-dir string Directory rclone will use for caching.
---vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
---vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
---vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
---vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
---vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+--cache-dir string Directory rclone will use for caching.
+--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
\f[R]
.fi
.PP
@@ -9320,12 +10918,14 @@ seconds.
If rclone is quit or dies with files that haven\[aq]t been uploaded,
these will be uploaded next time rclone is run with the same flags.
.PP
-If using \f[C]--vfs-cache-max-size\f[R] note that the cache may exceed
-this size for two reasons.
+If using \f[C]--vfs-cache-max-size\f[R] or
+\f[C]--vfs-cache-min-free-space\f[R] note that the cache may exceed these
+quotas for two reasons.
 Firstly because it is only checked every
 \f[C]--vfs-cache-poll-interval\f[R].
 Secondly because open files cannot be evicted from the cache.
-When \f[C]--vfs-cache-max-size\f[R] is exceeded, rclone will attempt to
+When \f[C]--vfs-cache-max-size\f[R] or
+\f[C]--vfs-cache-min-free-space\f[R] is exceeded, rclone will attempt to
 evict the least accessed files from the cache first.
 rclone will start with files that haven\[aq]t been accessed for the
 longest.
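+.PP
+For instance, to have \f[C]rclone serve sftp\f[R] check for eviction
+every 30 seconds while keeping 2 GiB free on the cache disk (the values
+are illustrative):
+.IP
+.nf
+\f[C]
+rclone serve sftp remote:path --vfs-cache-mode full --vfs-cache-poll-interval 30s --vfs-cache-min-free-space 2G
+\f[R]
+.fi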
@@ -9731,6 +11331,7 @@ rclone serve sftp remote:path [flags] --user string User name for authentication --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match @@ -9745,10 +11346,40 @@ rclone serve sftp remote:path [flags] --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) \f[R] .fi +.SS Filter Options +.PP +Flags for filtering directory listings. +.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone serve (https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. @@ -10082,12 +11713,13 @@ find that you need one or the other or both. .IP .nf \f[C] ---cache-dir string Directory rclone will use for caching. 
---vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
---vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
---vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
---vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
---vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)
+--cache-dir string                        Directory rclone will use for caching.
+--vfs-cache-mode CacheMode                Cache mode off|minimal|writes|full (default off)
+--vfs-cache-max-age duration              Max time since last access of objects in the cache (default 1h0m0s)
+--vfs-cache-max-size SizeSuffix           Max total size of objects in the cache (default off)
+--vfs-cache-min-free-space SizeSuffix     Target minimum free space on the disk containing the cache (default off)
+--vfs-cache-poll-interval duration        Interval to poll the cache for stale objects (default 1m0s)
+--vfs-write-back duration                 Time to writeback files after last use when using cache (default 5s)
\f[R]
.fi
.PP
@@ -10107,12 +11739,14 @@ seconds.
If rclone is quit or dies with files that haven\[aq]t been uploaded,
these will be uploaded next time rclone is run with the same flags.
.PP
-If using \f[C]--vfs-cache-max-size\f[R] note that the cache may exceed
-this size for two reasons.
+If using \f[C]--vfs-cache-max-size\f[R] or
+\f[C]--vfs-cache-min-free-space\f[R] note that the cache may exceed these
+quotas for two reasons.
Firstly because it is only checked every
\f[C]--vfs-cache-poll-interval\f[R].
Secondly because open files cannot be evicted from the cache.
-When \f[C]--vfs-cache-max-size\f[R] is exceeded, rclone will attempt to
+When \f[C]--vfs-cache-max-size\f[R] or
+\f[C]--vfs-cache-min-free-space\f[R] is exceeded, rclone will attempt to
evict the least accessed files from the cache first.
rclone will start with files that haven\[aq]t been accessed for the
longest.
@@ -10497,6 +12131,7 @@ rclone serve webdav remote:path [flags]
.nf
\f[C]
      --addr stringArray                       IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080])
+      --allow-origin string                    Origin which cross-domain request (CORS) can be executed from
      --auth-proxy string                      A program to use to create the backend from the auth
      --baseurl string                         Prefix for URLs - leave blank for root
      --cert string                            TLS PEM key (concatenation of certificate and CA certificate)
@@ -10528,6 +12163,7 @@ rclone serve webdav remote:path [flags]
      --user string                            User name for authentication
      --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
+      --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration       Interval to poll the cache for stale objects (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
@@ -10542,10 +12178,40 @@ rclone serve webdav remote:path [flags]
      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)
\f[R]
.fi
+.SS Filter Options
+.PP
+Flags for filtering directory listings.
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone serve (https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. @@ -10606,7 +12272,7 @@ rclone settier tier remote:path [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -10639,7 +12305,7 @@ things so reading their documentation first is recommended. .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -10684,7 +12350,7 @@ rclone test changenotify remote: [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone test (https://rclone.org/commands/rclone_test/) - Run a test command @@ -10714,7 +12380,7 @@ rclone test histogram [remote:path] [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. 
-.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone test (https://rclone.org/commands/rclone_test/) - Run a test command @@ -10742,6 +12408,7 @@ rclone test info [remote:path]+ [flags] .nf \f[C] --all Run all tests + --check-base32768 Check can store all possible base32768 characters --check-control Check control characters --check-length Check max filename length --check-normalization Check UTF-8 Normalization @@ -10754,7 +12421,7 @@ rclone test info [remote:path]+ [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone test (https://rclone.org/commands/rclone_test/) - Run a test command @@ -10783,7 +12450,7 @@ rclone test makefile []+ [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone test (https://rclone.org/commands/rclone_test/) - Run a test command @@ -10819,7 +12486,7 @@ rclone test makefiles [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone test (https://rclone.org/commands/rclone_test/) - Run a test command @@ -10842,7 +12509,7 @@ rclone test memory remote:path [flags] .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone test (https://rclone.org/commands/rclone_test/) - Run a test command @@ -10894,10 +12561,61 @@ rclone touch remote:path [flags] -t, --timestamp string Use specified time instead of the current time of day \f[R] .fi +.SS Important Options +.PP +Important flags useful for most commands. +.IP +.nf +\f[C] + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +\f[R] +.fi +.SS Filter Options +.PP +Flags for filtering directory listings. 
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.SS Listing Options +.PP +Flags for listing directories. +.IP +.nf +\f[C] + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +\f[R] +.fi .PP See the global flags page (https://rclone.org/flags/) for global options not listed here. -.SS SEE ALSO +.SH SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -10969,10 +12687,50 @@ rclone tree remote:path [flags] --version Sort files alphanumerically by version \f[R] .fi +.SS Filter Options +.PP +Flags for filtering directory listings. 
+.IP
+.nf
+\f[C]
+      --delete-excluded                     Delete files on dest excluded from sync
+      --exclude stringArray                 Exclude files matching pattern
+      --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
+      --exclude-if-present stringArray      Exclude directories if filename is present
+      --files-from stringArray              Read list of source-file names from file (use - to read from stdin)
+      --files-from-raw stringArray          Read list of source-file names from file without any processing of lines (use - to read from stdin)
+  -f, --filter stringArray                  Add a file filtering rule
+      --filter-from stringArray             Read file filtering patterns from a file (use - to read from stdin)
+      --ignore-case                         Ignore case in filters (case insensitive)
+      --include stringArray                 Include files matching pattern
+      --include-from stringArray            Read file include patterns from file (use - to read from stdin)
+      --max-age Duration                    Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --max-depth int                       If set limits the recursion depth to this (default -1)
+      --max-size SizeSuffix                 Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
+      --metadata-exclude stringArray        Exclude metadatas matching pattern
+      --metadata-exclude-from stringArray   Read metadata exclude patterns from file (use - to read from stdin)
+      --metadata-filter stringArray         Add a metadata filtering rule
+      --metadata-filter-from stringArray    Read metadata filtering patterns from a file (use - to read from stdin)
+      --metadata-include stringArray        Include metadatas matching pattern
+      --metadata-include-from stringArray   Read metadata include patterns from file (use - to read from stdin)
+      --min-age Duration                    Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --min-size SizeSuffix                 Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
+\f[R]
+.fi
+.SS Listing Options
+.PP
+Flags for listing directories.
+.IP
+.nf
+\f[C]
+      --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
+      --fast-list           Use recursive list if available; uses more memory but fewer transactions
+\f[R]
+.fi
.PP
See the global flags page (https://rclone.org/flags/) for global
options not listed here.
-.SS SEE ALSO
+.SH SEE ALSO
.IP \[bu] 2
rclone (https://rclone.org/commands/rclone/) - Show help for rclone
commands, flags and backends.
@@ -11284,6 +13042,11 @@ cautious to use characters that are possible to write in all of them.
This is mostly a problem on Windows, where the console traditionally
uses a non-Unicode character set - defined by the so-called
\[dq]code page\[dq].
+.PP
+Do not use single character names on Windows as they create ambiguity
+with Windows drive names, e.g. a remote called \f[C]C\f[R] is
+indistinguishable from the \f[C]C\f[R] drive.
+Rclone will always assume that a single letter name refers to a drive.
.SS Quoting and the shell
.PP
When you are typing commands to your computer you are using something
@@ -11711,6 +13474,10 @@ This can be an IPv4 address (1.2.3.4), an IPv6 address (1234::789A) or
host name.
If the host name doesn\[aq]t resolve or resolves to more than one IP
address it will give an error.
+.PP
+You can use \f[C]--bind 0.0.0.0\f[R] to force rclone to use IPv4
+addresses and \f[C]--bind ::0\f[R] to force rclone to use IPv6
+addresses.
.SS --bwlimit=BANDWIDTH_SPEC
.PP
This option controls the bandwidth limit.
@@ -12677,22 +14444,37 @@ Test first with \f[C]--dry-run\f[R] if you are not sure what will happen.
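+.PP
+A minimal illustration of that advice (the paths are placeholders): run
+the command with \f[C]--dry-run\f[R] first, then repeat it without the
+flag once the output looks right.
+.IP
+.nf
+\f[C]
+rclone sync source:path dest:path --dry-run
+\f[R]
+.fi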
.SS --max-duration=TIME
.PP
-Rclone will stop scheduling new transfers when it has run for the
-duration specified.
-.PP
+Rclone will stop transferring when it has run for the duration
+specified.
Defaults to off.
.PP
-When the limit is reached any existing transfers will complete.
+When the limit is reached all transfers will stop immediately.
+Use \f[C]--cutoff-mode\f[R] to modify this behaviour.
.PP
-Rclone won\[aq]t exit with an error if the transfer limit is reached.
+Rclone will exit with exit code 10 if the duration limit is reached.
.SS --max-transfer=SIZE
.PP
Rclone will stop transferring when it has reached the size specified.
Defaults to off.
.PP
When the limit is reached all transfers will stop immediately.
+Use \f[C]--cutoff-mode\f[R] to modify this behaviour.
.PP
Rclone will exit with exit code 8 if the transfer limit is reached.
+.SS --cutoff-mode=hard|soft|cautious
+.PP
+This modifies the behavior of \f[C]--max-transfer\f[R] and
+\f[C]--max-duration\f[R]. Defaults to \f[C]--cutoff-mode=hard\f[R].
+.PP
+Specifying \f[C]--cutoff-mode=hard\f[R] will stop transferring
+immediately when Rclone reaches the limit.
+.PP
+Specifying \f[C]--cutoff-mode=soft\f[R] will stop starting new transfers
+when Rclone reaches the limit.
+.PP
+Specifying \f[C]--cutoff-mode=cautious\f[R] will try to prevent Rclone
+from reaching the limit.
+Only applicable for \f[C]--max-transfer\f[R].
.SS -M, --metadata
.PP
Setting this flag enables rclone to copy the metadata from the source to
@@ -12704,19 +14486,6 @@ See the #metadata for more info.
Add metadata \f[C]key\f[R] = \f[C]value\f[R] when uploading.
This can be repeated as many times as required.
See the #metadata for more info.
-.SS --cutoff-mode=hard|soft|cautious
-.PP
-This modifies the behavior of \f[C]--max-transfer\f[R] Defaults to
-\f[C]--cutoff-mode=hard\f[R].
-.PP
-Specifying \f[C]--cutoff-mode=hard\f[R] will stop transferring
-immediately when Rclone reaches the limit.
-.PP
-Specifying \f[C]--cutoff-mode=soft\f[R] will stop starting new transfers
-when Rclone reaches the limit.
-.PP
-Specifying \f[C]--cutoff-mode=cautious\f[R] will try to prevent Rclone
-from reaching the limit.
.SS --modify-window=TIME
.PP
When checking whether a file has been modified, this is the maximum
allowed time difference that a file can have and still be considered
equivalent.
.PP
The default is \f[C]1ns\f[R] unless this is overridden by a remote.
For example OS X only stores modification times to the nearest second so
if you are reading and writing to an OS X filing system this will be
\f[C]1s\f[R] by default.
.PP
This command line flag allows you to override that computed default.
.SS --multi-thread-write-buffer-size=SIZE
.PP
-When downloading with multiple threads, rclone will buffer SIZE bytes in
-memory before writing to disk for each thread.
+When transferring with multiple threads, rclone will buffer SIZE bytes
+in memory before writing to disk for each thread.
.PP
This can improve performance if the underlying filesystem does not deal
well with a lot of small writes in different positions of the file, so
-if you see downloads being limited by disk write speed, you might want
+if you see transfers being limited by disk write speed, you might want
to experiment with different values.
Especially for magnetic drives and remote file systems a higher value can
be useful.
.PP
As a final hint, size is not the only factor: block size (or similar
concept) can have an impact.
In one case, we observed that exact multiples of 16k performed much
better than other values.
+.SS --multi-thread-chunk-size=SizeSuffix
+.PP
+Normally the chunk size for multi thread transfers is set by the
+backend.
+However some backends such as \f[C]local\f[R] and \f[C]smb\f[R] (which
+implement \f[C]OpenWriterAt\f[R] but not \f[C]OpenChunkWriter\f[R])
+don\[aq]t have a natural chunk size.
+.PP
+In this case the value of this option is used (default 64Mi).
.SS --multi-thread-cutoff=SIZE
.PP
-When downloading files to the local backend above this size, rclone will
-use multiple threads to download the file (default 250M).
+When transferring files above SIZE to capable backends, rclone will use
+multiple threads to transfer the file (default 256M).
.PP
-Rclone preallocates the file (using
+Capable backends are marked in the
+overview (https://rclone.org/overview/#optional-features) as
+\f[C]MultithreadUpload\f[R].
+(They need to implement either the \f[C]OpenWriterAt\f[R] or
+\f[C]OpenChunkWriter\f[R] internal interfaces).
+These include \f[C]local\f[R], \f[C]s3\f[R],
+\f[C]azureblob\f[R], \f[C]b2\f[R], \f[C]oracleobjectstorage\f[R] and
+\f[C]smb\f[R] at the time of writing.
+.PP
+On the local disk, rclone preallocates the file (using
\f[C]fallocate(FALLOC_FL_KEEP_SIZE)\f[R] on unix or
\f[C]NTSetInformationFile\f[R] on Windows both of which take no time)
then each thread writes directly into the file at the correct place.
This means that rclone won\[aq]t create fragmented or sparse files and
there won\[aq]t be any assembly time at the end of the transfer.
.PP
-The number of threads used to download is controlled by
+The number of threads used to transfer is controlled by
\f[C]--multi-thread-streams\f[R].
.PP
Use \f[C]-vv\f[R] if you wish to see info about the threads.
.PP
This will work with the \f[C]sync\f[R]/\f[C]copy\f[R]/\f[C]move\f[R]
commands and friends \f[C]copyto\f[R]/\f[C]moveto\f[R].
-Multi thread downloads will be used with \f[C]rclone mount\f[R] and
+Multi thread transfers will be used with \f[C]rclone mount\f[R] and
\f[C]rclone serve\f[R] if \f[C]--vfs-cache-mode\f[R] is set to
\f[C]writes\f[R] or above.
.PP
-\f[B]NB\f[R] that this \f[B]only\f[R] works for a local destination but
-will work with any source.
+\f[B]NB\f[R] that this \f[B]only\f[R] works with supported backends as
+the destination but will work with any backend as the source.
.PP
-\f[B]NB\f[R] that multi thread copies are disabled for local to local
+\f[B]NB\f[R] that multi-thread copies are disabled for local to local
copies as they are faster without them unless
\f[C]--multi-thread-streams\f[R] is set explicitly.
.PP
-\f[B]NB\f[R] on Windows using multi-thread downloads will cause the
-resulting files to be
+\f[B]NB\f[R] on Windows using multi-thread transfers to the local disk
+will cause the resulting files to be
sparse (https://en.wikipedia.org/wiki/Sparse_file).
Use \f[C]--local-no-sparse\f[R] to disable sparse files (which may cause
-long delays at the start of downloads) or disable multi-thread downloads
+long delays at the start of transfers) or disable multi-thread transfers
with \f[C]--multi-thread-streams 0\f[R].
.SS --multi-thread-streams=N
.PP
-When using multi thread downloads (see above
-\f[C]--multi-thread-cutoff\f[R]) this sets the maximum number of streams
-to use.
-Set to \f[C]0\f[R] to disable multi thread downloads (Default 4).
+When using multi thread transfers (see above
+\f[C]--multi-thread-cutoff\f[R]) this sets the number of streams to use.
+Set to \f[C]0\f[R] to disable multi thread transfers (Default 4).
.PP
-Exactly how many streams rclone uses for the download depends on the
-size of the file.
-To calculate the number of download streams Rclone divides the size of -the file by the \f[C]--multi-thread-cutoff\f[R] and rounds up, up to the -maximum set with \f[C]--multi-thread-streams\f[R]. -.PP -So if \f[C]--multi-thread-cutoff 250M\f[R] and -\f[C]--multi-thread-streams 4\f[R] are in effect (the defaults): -.IP \[bu] 2 -0..250 MiB files will be downloaded with 1 stream -.IP \[bu] 2 -250..500 MiB files will be downloaded with 2 streams -.IP \[bu] 2 -500..750 MiB files will be downloaded with 3 streams -.IP \[bu] 2 -750+ MiB files will be downloaded with 4 streams +If the backend has a \f[C]--backend-upload-concurrency\f[R] setting (eg +\f[C]--s3-upload-concurrency\f[R]) then this setting will be used as the +number of transfers instead if it is larger than the value of +\f[C]--multi-thread-streams\f[R] or \f[C]--multi-thread-streams\f[R] +isn\[aq]t set. .SS --no-check-dest .PP The \f[C]--no-check-dest\f[R] can be used with \f[C]move\f[R] or @@ -13901,6 +15676,8 @@ account suspended) (Fatal errors) \f[C]8\f[R] - Transfer exceeded - limit set by --max-transfer reached .IP \[bu] 2 \f[C]9\f[R] - Operation successful, but no files transferred +.IP \[bu] 2 +\f[C]10\f[R] - Duration exceeded - limit set by --max-duration reached .SS Environment Variables .PP Rclone can be configured entirely using environment variables. @@ -13941,6 +15718,9 @@ backend. To find the name of the environment variable, you need to set, take \f[C]RCLONE_CONFIG_\f[R] + name of remote + \f[C]_\f[R] + name of config file option and make it all uppercase. +Note one implication here is the remote\[aq]s name must be convertible +into a valid environment variable name, so it can only contain letters, +digits, or the \f[C]_\f[R] (underscore) character. .PP For example, to configure an S3 remote named \f[C]mys3:\f[R] without a config file (using unix ways of setting environment variables): @@ -14230,7 +16010,7 @@ does. \f[B]Important\f[R] Avoid mixing any two of \f[C]--include...\f[R], \f[C]--exclude...\f[R] or \f[C]--filter...\f[R] flags in an rclone command. -The results may not be what you expect. +The results might not be what you expect. Instead use a \f[C]--filter...\f[R] flag. .SS Patterns for matching path/file names .SS Pattern syntax @@ -14314,7 +16094,7 @@ file.jpg - matches \[dq]file.jpg\[dq] \f[R] .fi .PP -The top level of the remote may not be the top level of the drive. +The top level of the remote might not be the top level of the drive. .PP E.g. for a Microsoft Windows local directory structure @@ -14827,7 +16607,7 @@ directory \f[C]dir\f[R] and sub directories. E.g. on Microsoft Windows \f[C]rclone ls remote: --exclude \[dq]*\[rs][{JP,KR,HK}\[rs]]*\[dq]\f[R] -lists the files in \f[C]remote:\f[R] with \f[C][JP]\f[R] or +lists the files in \f[C]remote:\f[R] without \f[C][JP]\f[R] or \f[C][KR]\f[R] or \f[C][HK]\f[R] in their name. Quotes prevent the shell from interpreting the \f[C]\[rs]\f[R] characters.\f[C]\[rs]\f[R] characters escape the \f[C][\f[R] and @@ -15835,7 +17615,7 @@ If using \f[C]rclone rc\f[R] this could be passed as .IP .nf \f[C] -rclone rc operations/sync ... _config=\[aq]{\[dq]CheckSum\[dq]: true}\[aq] +rclone rc sync/sync ... _config=\[aq]{\[dq]CheckSum\[dq]: true}\[aq] \f[R] .fi .PP @@ -16381,6 +18161,29 @@ OR .fi .PP \f[B]Authentication is required for this call.\f[R] +.SS core/du: Returns disk usage of a locally attached disk. +.PP +This returns the disk usage for the local directory passed in as dir. 
+.PP
+If the directory is not passed in, it defaults to the directory pointed
+to by --cache-dir.
+.IP \[bu] 2
+dir - string (optional)
+.PP
+Returns:
+.IP
+.nf
+\f[C]
+{
+    \[dq]dir\[dq]: \[dq]/\[dq],
+    \[dq]info\[dq]: {
+        \[dq]Available\[dq]: 361769115648,
+        \[dq]Free\[dq]: 361785892864,
+        \[dq]Total\[dq]: 982141468672
+    }
+}
+\f[R]
+.fi
.SS core/gc: Runs a garbage collection.
.PP
This tells the go runtime to do a garbage collection run.
@@ -16467,6 +18270,10 @@ Returns the following values:
	\[dq]lastError\[dq]: last error string,
	\[dq]renames\[dq] : number of files renamed,
	\[dq]retryError\[dq]: boolean showing whether there has been at least one non-NoRetryError,
+	\[dq]serverSideCopies\[dq]: number of server side copies done,
+	\[dq]serverSideCopyBytes\[dq]: number of bytes server side copied,
+	\[dq]serverSideMoves\[dq]: number of server side moves done,
+	\[dq]serverSideMoveBytes\[dq]: number of bytes server side moved,
	\[dq]speed\[dq]: average speed in bytes per second since start of the group,
	\[dq]totalBytes\[dq]: total number of bytes in the group,
	\[dq]totalChecks\[dq]: total number of checks in the group,
@@ -16701,7 +18508,9 @@ Parameters: None.
.PP
Results:
.IP \[bu] 2
-jobids - array of integer job ids.
+executeId - string id of rclone executing (changes after restart)
+.IP \[bu] 2
+jobids - array of integer job ids (starting at 1 on each restart)
.SS job/status: Reads the status of the job ID
.PP
Parameters:
@@ -17209,6 +19018,31 @@ See the rmdirs (https://rclone.org/commands/rclone_rmdirs/) command for
more information on the above.
.PP
\f[B]Authentication is required for this call.\f[R]
+.SS operations/settier: Changes storage tier or class on all files in the path
+.PP
+This takes the following parameters:
+.IP \[bu] 2
+fs - a remote name string e.g.
+\[dq]drive:\[dq]
+.PP
+See the settier (https://rclone.org/commands/rclone_settier/) command
+for more information on the above.
+.PP
+\f[B]Authentication is required for this call.\f[R]
+.SS operations/settierfile: Changes storage tier or class on the single file pointed to
+.PP
+This takes the following parameters:
+.IP \[bu] 2
+fs - a remote name string e.g.
+\[dq]drive:\[dq]
+.IP \[bu] 2
+remote - a path within that remote e.g.
+\[dq]dir\[dq]
+.PP
+See the settierfile (https://rclone.org/commands/rclone_settierfile/)
+command for more information on the above.
+.PP
+\f[B]Authentication is required for this call.\f[R]
.SS operations/size: Count the number of bytes and files in remote
.PP
This takes the following parameters:
@@ -17488,16 +19322,25 @@ checkFilename - file name for checkAccess (default: RCLONE_TEST)
maxDelete - abort sync if percentage of deleted files is above this
threshold (default: 50)
.IP \[bu] 2
-force - maxDelete safety check and run the sync
+force - Bypass maxDelete safety check and run the sync
.IP \[bu] 2
checkSync - \f[C]true\f[R] by default, \f[C]false\f[R] disables
comparison of final listings, \f[C]only\f[R] will skip sync, only
compare listings from the last run
.IP \[bu] 2
+createEmptySrcDirs - Sync creation and deletion of empty directories.
+(Not compatible with --remove-empty-dirs)
+.IP \[bu] 2
removeEmptyDirs - remove empty directories at the final cleanup step
.IP \[bu] 2
filtersFile - read filtering patterns from a file
.IP \[bu] 2
+ignoreListingChecksum - Do not use checksums for listings
+.IP \[bu] 2
+resilient - Allow future runs to retry after certain less-serious
+errors, instead of requiring resync.
+Use at your own risk!
+.IP \[bu] 2 workdir - server directory for history files (default: /home/ncw/.cache/rclone/bisync) .IP \[bu] 2 @@ -18459,6 +20302,21 @@ T}@T{ - T} T{ +Proton Drive +T}@T{ +SHA1 +T}@T{ +R/W +T}@T{ +No +T}@T{ +No +T}@T{ +R +T}@T{ +- +T} +T{ QingStor T}@T{ MD5 @@ -18474,6 +20332,21 @@ T}@T{ - T} T{ +Quatrix by Maytech +T}@T{ +- +T}@T{ +R/W +T}@T{ +No +T}@T{ +No +T}@T{ +- +T}@T{ +- +T} +T{ Seafile T}@T{ - @@ -18649,8 +20522,8 @@ This is an SHA256 sum of all the 4 MiB block SHA256s. \f[C]md5sum\f[R] or \f[C]sha1sum\f[R] as well as \f[C]echo\f[R] are in the remote\[aq]s PATH. .PP -\[S3] WebDAV supports hashes when used with Fastmail Files. -Owncloud and Nextcloud only. +\[S3] WebDAV supports hashes when used with Fastmail Files, Owncloud and +Nextcloud only. .PP \[u2074] WebDAV supports modtimes when used with Fastmail Files, Owncloud and Nextcloud only. @@ -19537,7 +21410,7 @@ Other features depend upon backend-specific capabilities. .PP .TS tab(@); -l c c c c c c c c c c. +lw(14.4n) cw(3.6n) cw(3.1n) cw(3.1n) cw(4.6n) cw(4.6n) cw(3.6n) cw(7.2n) lw(9.8n) cw(7.2n) cw(3.6n) cw(5.1n). T{ Name T}@T{ @@ -19555,6 +21428,8 @@ ListR T}@T{ StreamUpload T}@T{ +MultithreadUpload +T}@T{ LinkSharing T}@T{ About @@ -19579,6 +21454,8 @@ No T}@T{ No T}@T{ +No +T}@T{ Yes T}@T{ No @@ -19606,6 +21483,8 @@ No T}@T{ No T}@T{ +No +T}@T{ Yes T} T{ @@ -19629,6 +21508,8 @@ No T}@T{ No T}@T{ +No +T}@T{ Yes T} T{ @@ -19650,6 +21531,8 @@ Yes T}@T{ Yes T}@T{ +Yes +T}@T{ No T}@T{ No @@ -19673,6 +21556,8 @@ Yes T}@T{ Yes T}@T{ +Yes +T}@T{ No T}@T{ No @@ -19694,6 +21579,8 @@ No T}@T{ Yes T}@T{ +No +T}@T{ Yes T}@T{ Yes @@ -19721,6 +21608,8 @@ No T}@T{ No T}@T{ +No +T}@T{ Yes T} T{ @@ -19740,6 +21629,8 @@ No T}@T{ Yes T}@T{ +No +T}@T{ Yes T}@T{ Yes @@ -19767,6 +21658,8 @@ No T}@T{ No T}@T{ +No +T}@T{ Yes T} T{ @@ -19790,6 +21683,8 @@ No T}@T{ No T}@T{ +No +T}@T{ Yes T} T{ @@ -19814,6 +21709,8 @@ T}@T{ No T}@T{ No +T}@T{ +No T} T{ Google Drive @@ -19832,6 +21729,8 @@ Yes T}@T{ Yes T}@T{ +No +T}@T{ Yes T}@T{ Yes @@ -19860,6 +21759,8 @@ T}@T{ No T}@T{ No +T}@T{ +No T} T{ HDFS @@ -19880,6 +21781,8 @@ Yes T}@T{ No T}@T{ +No +T}@T{ Yes T}@T{ Yes @@ -19905,6 +21808,8 @@ No T}@T{ No T}@T{ +No +T}@T{ Yes T} T{ @@ -19928,6 +21833,8 @@ No T}@T{ No T}@T{ +No +T}@T{ Yes T} T{ @@ -19947,6 +21854,8 @@ Yes T}@T{ No T}@T{ +No +T}@T{ Yes T}@T{ Yes @@ -19970,6 +21879,8 @@ Yes T}@T{ No T}@T{ +No +T}@T{ Yes T}@T{ Yes @@ -19993,6 +21904,8 @@ No T}@T{ Yes T}@T{ +No +T}@T{ Yes T}@T{ Yes @@ -20016,6 +21929,8 @@ No T}@T{ No T}@T{ +No +T}@T{ Yes T}@T{ Yes @@ -20039,6 +21954,8 @@ No T}@T{ No T}@T{ +No +T}@T{ Yes T}@T{ Yes @@ -20067,6 +21984,8 @@ T}@T{ No T}@T{ No +T}@T{ +No T} T{ Microsoft Azure Blob Storage @@ -20085,6 +22004,8 @@ Yes T}@T{ Yes T}@T{ +Yes +T}@T{ No T}@T{ No @@ -20108,6 +22029,8 @@ No T}@T{ No T}@T{ +No +T}@T{ Yes T}@T{ Yes @@ -20135,6 +22058,8 @@ No T}@T{ No T}@T{ +No +T}@T{ Yes T} T{ @@ -20156,6 +22081,8 @@ Yes T}@T{ No T}@T{ +No +T}@T{ Yes T}@T{ No @@ -20182,6 +22109,8 @@ T}@T{ No T}@T{ No +T}@T{ +No T} T{ pCloud @@ -20200,6 +22129,8 @@ No T}@T{ No T}@T{ +No +T}@T{ Yes T}@T{ Yes @@ -20223,6 +22154,8 @@ No T}@T{ No T}@T{ +No +T}@T{ Yes T}@T{ Yes @@ -20246,6 +22179,8 @@ No T}@T{ No T}@T{ +No +T}@T{ Yes T}@T{ Yes @@ -20271,6 +22206,33 @@ Yes T}@T{ No T}@T{ +No +T}@T{ +Yes +T}@T{ +Yes +T} +T{ +Proton Drive +T}@T{ +Yes +T}@T{ +No +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ Yes T}@T{ Yes @@ -20297,6 +22259,33 @@ T}@T{ No T}@T{ No +T}@T{ +No +T} +T{ +Quatrix by Maytech +T}@T{ +Yes +T}@T{ +Yes +T}@T{ 
+Yes +T}@T{ +Yes +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +Yes +T}@T{ +Yes T} T{ Seafile @@ -20315,6 +22304,8 @@ Yes T}@T{ Yes T}@T{ +No +T}@T{ Yes T}@T{ Yes @@ -20340,6 +22331,8 @@ Yes T}@T{ No T}@T{ +No +T}@T{ Yes T}@T{ Yes @@ -20365,6 +22358,8 @@ No T}@T{ No T}@T{ +No +T}@T{ Yes T} T{ @@ -20384,6 +22379,8 @@ No T}@T{ Yes T}@T{ +Yes +T}@T{ No T}@T{ No @@ -20407,6 +22404,8 @@ No T}@T{ Yes T}@T{ +No +T}@T{ Yes T}@T{ No @@ -20430,6 +22429,8 @@ Yes T}@T{ Yes T}@T{ +No +T}@T{ Yes T}@T{ No @@ -20458,6 +22459,8 @@ T}@T{ No T}@T{ No +T}@T{ +No T} T{ WebDAV @@ -20478,6 +22481,8 @@ Yes \[dd] T}@T{ No T}@T{ +No +T}@T{ Yes T}@T{ Yes @@ -20499,6 +22504,8 @@ No T}@T{ Yes T}@T{ +No +T}@T{ Yes T}@T{ Yes @@ -20524,6 +22531,8 @@ No T}@T{ No T}@T{ +No +T}@T{ Yes T}@T{ Yes @@ -20545,6 +22554,8 @@ No T}@T{ Yes T}@T{ +Yes +T}@T{ No T}@T{ Yes @@ -20618,6 +22629,12 @@ advance. This allows certain operations to work without spooling the file to local disk first, e.g. \f[C]rclone rcat\f[R]. +.SS MultithreadUpload +.PP +Some remotes allow transfers to the remote to be sent as chunks in +parallel. +If this is supported then rclone will use multi-thread copying to +transfer files much faster. .SS LinkSharing .PP Sets the necessary permissions on a file or folder and prints a link @@ -20644,182 +22661,291 @@ Most Object/Bucket-based remotes do not support this. .SH Global Flags .PP This describes the global flags available to every rclone command split -into two groups, non backend and backend flags. -.SS Non Backend Flags +into groups. +.SS Copy .PP -These flags are available for every command. +Flags for anything which can Copy a file. .IP .nf \f[C] - --ask-password Allow prompt for password for encrypted configuration (default true) - --auto-confirm If enabled, do not request console confirmation - --backup-dir string Make backups into hierarchy based in DIR - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name - --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi) - --bwlimit BwTimetable Bandwidth limit in KiB/s, or use suffix B|K|M|G|T|P or a full timetable - --bwlimit-file BwTimetable Bandwidth limit per file in KiB/s, or use suffix B|K|M|G|T|P or a full timetable - --ca-cert stringArray CA certificate used to verify servers - --cache-dir string Directory rclone will use for caching (default \[dq]$HOME/.cache/rclone\[dq]) --check-first Do all the checks before starting transfers - --checkers int Number of checkers to run in parallel (default 8) - -c, --checksum Skip based on checksum (if available) & size, not mod-time & size - --client-cert string Client SSL certificate (PEM) for mutual TLS auth - --client-key string Client SSL private key (PEM) for mutual TLS auth - --color string When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default \[dq]AUTO\[dq]) + -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). 
--compare-dest stringArray Include additional comma separated server-side paths during comparison - --config string Config file (default \[dq]$HOME/.config/rclone/rclone.conf\[dq]) - --contimeout Duration Connect timeout (default 1m0s) --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination - --cpuprofile string Write cpu profile to file --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default \[dq]HARD\[dq]) - --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features (use --disable help to see a list) - --disable-http-keep-alives Disable HTTP keep-alives and use each connection once. - --disable-http2 Disable HTTP/2 in the global transport - -n, --dry-run Do a trial run with no permanent changes - --dscp string Set DSCP value to connections, value or name, e.g. CS1, LE, DF, AF21 - --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP headers - may contain sensitive info - --error-on-no-transfer Sets exit code 9 if no files are transferred, useful in scripts - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) - --exclude-if-present stringArray Exclude directories if filename is present - --expect-continue-timeout Duration Timeout when using expect / 100-continue in HTTP (default 1s) - --fast-list Use recursive list if available; uses more memory but fewer transactions - --files-from stringArray Read list of source-file names from file (use - to read from stdin) - --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) - -f, --filter stringArray Add a file filtering rule - --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) - --fs-cache-expire-duration Duration Cache remotes for this long (0 to disable caching) (default 5m0s) - --fs-cache-expire-interval Duration Interval to check for expired remotes (default 1m0s) - --header stringArray Set HTTP header for all transactions - --header-download stringArray Set HTTP header for download transactions - --header-upload stringArray Set HTTP header for upload transactions - --human-readable Print numbers in a human-readable format, sizes with suffix Ki|Mi|Gi|Ti|Pi - --ignore-case Ignore case in filters (case insensitive) --ignore-case-sync Ignore case when synchronizing --ignore-checksum Skip post copy check of checksums - --ignore-errors Delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum -I, --ignore-times Don\[aq]t skip files that match size and time - transfer all files --immutable Do not modify files, fail if existing files have been modified - --include stringArray Include files matching pattern - --include-from stringArray Read file include patterns from file (use 
- to read from stdin) --inplace Download directly to destination file instead of atomic download to temp/rename - -i, --interactive Enable interactive mode - --kv-lock-time Duration Maximum time to keep key-value database locked by process (default 1s) - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default \[dq]date,time\[dq]) - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default \[dq]NOTICE\[dq]) - --log-systemd Activate systemd integration for the logger - --low-level-retries int Number of low level retries to do (default 10) - --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-backlog int Maximum number of objects in sync or check backlog (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off) - --max-depth int If set limits the recursion depth to this (default -1) --max-duration Duration Maximum duration rclone will transfer data for (default 0s) - --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) - --max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000) --max-transfer SizeSuffix Maximum size of data to transfer (default off) - --memprofile string Write memory profile to file -M, --metadata If set, preserve metadata when copying objects - --metadata-exclude stringArray Exclude metadatas matching pattern - --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) - --metadata-filter stringArray Add a metadata filtering rule - --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) - --metadata-include stringArray Include metadatas matching pattern - --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) - --metadata-set stringArray Add metadata key=value when uploading - --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) --modify-window Duration Max time diff to be considered the same (default 1ns) - --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 250Mi) - --multi-thread-streams int Max number of streams to use for multi-thread downloads (default 4) + --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) + --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) - --no-check-certificate Do not verify the server SSL certificate (insecure) --no-check-dest Don\[aq]t check the destination, copy regardless - --no-console Hide console window (supported on Windows only) - --no-gzip-encoding Don\[aq]t set Accept-Encoding: gzip --no-traverse Don\[aq]t traverse destination file system on copy - --no-unicode-normalization Don\[aq]t normalize unicode characters in filenames --no-update-modtime Don\[aq]t update destination mod-time if files identical 
--order-by string Instructions on how to order the transfers, e.g. \[aq]size,descending\[aq] - --password-command SpaceSepList Command for supplying password for encrypted configuration - -P, --progress Show progress during transfer - --progress-terminal-title Show progress on the terminal title (requires -P/--progress) - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server - --rc-addr stringArray IPaddress:Port or :Port to bind server to (default [localhost:5572]) - --rc-allow-origin string Set the allowed origin for CORS - --rc-baseurl string Prefix for URLs - leave blank for root - --rc-cert string TLS PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-enable-metrics Enable prometheus metrics on /metrics - --rc-files string Path to local files to serve on the HTTP server - --rc-htpasswd string A htpasswd file - if not provided no authentication is done - --rc-job-expire-duration Duration Expire finished async jobs older than this value (default 1m0s) - --rc-job-expire-interval Duration Interval to check for expired async jobs (default 10s) - --rc-key string TLS PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-min-tls-version string Minimum TLS version that is acceptable (default \[dq]tls1.0\[dq]) - --rc-no-auth Don\[aq]t require auth for certain methods - --rc-pass string Password for authentication - --rc-realm string Realm for authentication - --rc-salt string Password hashing salt (default \[dq]dlPL2MqE\[dq]) - --rc-serve Enable the serving of remote objects - --rc-server-read-timeout Duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s) - --rc-template string User-specified template - --rc-user string User name for authentication - --rc-web-fetch-url string URL to fetch the releases for webgui (default \[dq]https://api.github.com/repos/rclone/rclone-webui-react/releases/latest\[dq]) - --rc-web-gui Launch WebGUI on localhost - --rc-web-gui-force-update Force update to latest version of web gui - --rc-web-gui-no-open-browser Don\[aq]t open the browser automatically - --rc-web-gui-update Check and update to latest version of web gui --refresh-times Refresh the modtime of remote files - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep Duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable) (default 0s) --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs --size-only Skip based on size only, not mod-time or checksum - --stats Duration Interval between printing stats, e.g. 
500ms, 60s, 5m (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats (0 for no limit) (default 45) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default \[dq]INFO\[dq]) - --stats-one-line Make the stats fit on one line - --stats-one-line-date Enable --stats-one-line and add current date/time prefix - --stats-one-line-date-format string Enable --stats-one-line-date and use custom formatted date: Enclose date string in double quotes (\[dq]), see https://golang.org/pkg/time/#Time.Format - --stats-unit string Show data rate in stats as either \[aq]bits\[aq] or \[aq]bytes\[aq] per second (default \[dq]bytes\[dq]) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki) - --suffix string Suffix to add to changed files - --suffix-keep-extension Preserve the extension when using --suffix - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, e.g. KERN,USER,... (default \[dq]DAEMON\[dq]) - --temp-dir string Directory rclone will use for temporary files (default \[dq]/tmp\[dq]) - --timeout Duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this - --tpslimit-burst int Max burst of transactions for --tpslimit (default 1) - --track-renames When synchronizing, track file renames and do a server-side move if possible - --track-renames-strategy string Strategies to use when synchronizing using track-renames hash|modtime|leaf (default \[dq]hash\[dq]) - --transfers int Number of file transfers to run in parallel (default 4) -u, --update Skip files that are newer on the destination - --use-cookies Enable session cookiejar - --use-json-log Use json log format - --use-mmap Use mmap allocator (see docs) - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.63.0\[dq]) - -v, --verbose count Print lots more stuff (repeat for more) \f[R] .fi -.SS Backend Flags +.SS Sync .PP -These flags are available for every command. -They control the backends and may be set in the config file. +Flags just used for \f[C]rclone sync\f[R]. +.IP +.nf +\f[C] + --backup-dir string Make backups into hierarchy based in DIR + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --ignore-errors Delete even if there are I/O errors + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off) + --suffix string Suffix to add to changed files + --suffix-keep-extension Preserve the extension when using --suffix + --track-renames When synchronizing, track file renames and do a server-side move if possible + --track-renames-strategy string Strategies to use when synchronizing using track-renames hash|modtime|leaf (default \[dq]hash\[dq]) +\f[R] +.fi +.SS Important +.PP +Important flags useful for most commands. +.IP +.nf +\f[C] + -n, --dry-run Do a trial run with no permanent changes + -i, --interactive Enable interactive mode + -v, --verbose count Print lots more stuff (repeat for more) +\f[R] +.fi +.SS Check +.PP +Flags used for \f[C]rclone check\f[R]. 
+.IP +.nf +\f[C] + --max-backlog int Maximum number of objects in sync or check backlog (default 10000) +\f[R] +.fi +.SS Networking +.PP +General networking and HTTP stuff. +.IP +.nf +\f[C] + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name + --bwlimit BwTimetable Bandwidth limit in KiB/s, or use suffix B|K|M|G|T|P or a full timetable + --bwlimit-file BwTimetable Bandwidth limit per file in KiB/s, or use suffix B|K|M|G|T|P or a full timetable + --ca-cert stringArray CA certificate used to verify servers + --client-cert string Client SSL certificate (PEM) for mutual TLS auth + --client-key string Client SSL private key (PEM) for mutual TLS auth + --contimeout Duration Connect timeout (default 1m0s) + --disable-http-keep-alives Disable HTTP keep-alives and use each connection once. + --disable-http2 Disable HTTP/2 in the global transport + --dscp string Set DSCP value to connections, value or name, e.g. CS1, LE, DF, AF21 + --expect-continue-timeout Duration Timeout when using expect / 100-continue in HTTP (default 1s) + --header stringArray Set HTTP header for all transactions + --header-download stringArray Set HTTP header for download transactions + --header-upload stringArray Set HTTP header for upload transactions + --no-check-certificate Do not verify the server SSL certificate (insecure) + --no-gzip-encoding Don\[aq]t set Accept-Encoding: gzip + --timeout Duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this + --tpslimit-burst int Max burst of transactions for --tpslimit (default 1) + --use-cookies Enable session cookiejar + --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.64.0\[dq]) +\f[R] +.fi +.SS Performance +.PP +Flags helpful for increasing performance. +.IP +.nf +\f[C] + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi) + --checkers int Number of checkers to run in parallel (default 8) + --transfers int Number of file transfers to run in parallel (default 4) +\f[R] +.fi +.SS Config +.PP +General configuration of rclone. 
+.IP +.nf +\f[C] + --ask-password Allow prompt for password for encrypted configuration (default true) + --auto-confirm If enabled, do not request console confirmation + --cache-dir string Directory rclone will use for caching (default \[dq]$HOME/.cache/rclone\[dq]) + --color string When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default \[dq]AUTO\[dq]) + --config string Config file (default \[dq]$HOME/.config/rclone/rclone.conf\[dq]) + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --disable string Disable a comma separated list of features (use --disable help to see a list) + -n, --dry-run Do a trial run with no permanent changes + --error-on-no-transfer Sets exit code 9 if no files are transferred, useful in scripts + --fs-cache-expire-duration Duration Cache remotes for this long (0 to disable caching) (default 5m0s) + --fs-cache-expire-interval Duration Interval to check for expired remotes (default 1m0s) + --human-readable Print numbers in a human-readable format, sizes with suffix Ki|Mi|Gi|Ti|Pi + -i, --interactive Enable interactive mode + --kv-lock-time Duration Maximum time to keep key-value database locked by process (default 1s) + --low-level-retries int Number of low level retries to do (default 10) + --no-console Hide console window (supported on Windows only) + --no-unicode-normalization Don\[aq]t normalize unicode characters in filenames + --password-command SpaceSepList Command for supplying password for encrypted configuration + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep Duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable) (default 0s) + --temp-dir string Directory rclone will use for temporary files (default \[dq]/tmp\[dq]) + --use-mmap Use mmap allocator (see docs) + --use-server-modtime Use server modified time instead of object metadata +\f[R] +.fi +.SS Debugging +.PP +Flags for developers. +.IP +.nf +\f[C] + --cpuprofile string Write cpu profile to file + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --memprofile string Write memory profile to file +\f[R] +.fi +.SS Filter +.PP +Flags for filtering directory listings. 
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.SS Listing +.PP +Flags for listing directories. +.IP +.nf +\f[C] + --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) + --fast-list Use recursive list if available; uses more memory but fewer transactions +\f[R] +.fi +.SS Logging +.PP +Logging and statistics. +.IP +.nf +\f[C] + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default \[dq]date,time\[dq]) + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default \[dq]NOTICE\[dq]) + --log-systemd Activate systemd integration for the logger + --max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000) + -P, --progress Show progress during transfer + --progress-terminal-title Show progress on the terminal title (requires -P/--progress) + -q, --quiet Print as little stuff as possible + --stats Duration Interval between printing stats, e.g. 
500ms, 60s, 5m (0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats (0 for no limit) (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default \[dq]INFO\[dq]) + --stats-one-line Make the stats fit on one line + --stats-one-line-date Enable --stats-one-line and add current date/time prefix + --stats-one-line-date-format string Enable --stats-one-line-date and use custom formatted date: Enclose date string in double quotes (\[dq]), see https://golang.org/pkg/time/#Time.Format + --stats-unit string Show data rate in stats as either \[aq]bits\[aq] or \[aq]bytes\[aq] per second (default \[dq]bytes\[dq]) + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, e.g. KERN,USER,... (default \[dq]DAEMON\[dq]) + --use-json-log Use json log format + -v, --verbose count Print lots more stuff (repeat for more) +\f[R] +.fi +.SS Metadata +.PP +Flags to control metadata. +.IP +.nf +\f[C] + -M, --metadata If set, preserve metadata when copying objects + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --metadata-set stringArray Add metadata key=value when uploading +\f[R] +.fi +.SS RC +.PP +Flags to control the Remote Control API. +.IP +.nf +\f[C] + --rc Enable the remote control server + --rc-addr stringArray IPaddress:Port or :Port to bind server to (default [localhost:5572]) + --rc-allow-origin string Origin which cross-domain request (CORS) can be executed from + --rc-baseurl string Prefix for URLs - leave blank for root + --rc-cert string TLS PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-enable-metrics Enable prometheus metrics on /metrics + --rc-files string Path to local files to serve on the HTTP server + --rc-htpasswd string A htpasswd file - if not provided no authentication is done + --rc-job-expire-duration Duration Expire finished async jobs older than this value (default 1m0s) + --rc-job-expire-interval Duration Interval to check for expired async jobs (default 10s) + --rc-key string TLS PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-min-tls-version string Minimum TLS version that is acceptable (default \[dq]tls1.0\[dq]) + --rc-no-auth Don\[aq]t require auth for certain methods + --rc-pass string Password for authentication + --rc-realm string Realm for authentication + --rc-salt string Password hashing salt (default \[dq]dlPL2MqE\[dq]) + --rc-serve Enable the serving of remote objects + --rc-server-read-timeout Duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s) + --rc-template string User-specified template + --rc-user string User name for authentication + --rc-web-fetch-url string URL to fetch the releases for webgui (default \[dq]https://api.github.com/repos/rclone/rclone-webui-react/releases/latest\[dq]) + --rc-web-gui Launch WebGUI on localhost + --rc-web-gui-force-update Force update 
to latest version of web gui + --rc-web-gui-no-open-browser Don\[aq]t open the browser automatically + --rc-web-gui-update Check and update to latest version of web gui +\f[R] +.fi +.SS Backend +.PP +Backend only flags. +These can be set in the config file also. .IP .nf \f[C] @@ -20848,8 +22974,6 @@ They control the backends and may be set in the config file. --azureblob-env-auth Read credentials from runtime (environment variables, CLI or MSI) --azureblob-key string Storage Account Shared Key --azureblob-list-chunk int Size of blob list (default 5000) - --azureblob-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s) - --azureblob-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool --azureblob-msi-client-id string Object ID of the user-assigned MSI to use, if any --azureblob-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any --azureblob-msi-object-id string Object ID of the user-assigned MSI to use, if any @@ -20875,9 +22999,8 @@ They control the backends and may be set in the config file. --b2-endpoint string Endpoint for the service --b2-hard-delete Permanently delete files on remote removal, otherwise hide files --b2-key string Application Key - --b2-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s) - --b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging + --b2-upload-concurrency int Concurrency for multipart uploads (default 16) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --b2-version-at Time Show file versions as they were at the specified time (default off) --b2-versions Include old versions in directory listings @@ -20889,6 +23012,7 @@ They control the backends and may be set in the config file. --box-client-secret string OAuth Client Secret --box-commit-retries int Max number of times to try committing a multipart file (default 100) --box-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot) + --box-impersonate string Impersonate this user ID when using a service account --box-list-chunk int Size of listing chunk 1-1000 (default 1000) --box-owned-by string Only show items owned by the login (email address) passed in --box-root-folder-id string Fill in for rclone to use a non root folder as its starting point @@ -20948,6 +23072,7 @@ They control the backends and may be set in the config file. --drive-encoding MultiEncoder The encoding for the backend (default InvalidUtf8) --drive-env-auth Get IAM credentials from runtime (environment variables or instance meta data if no env vars) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs (default \[dq]docx,xlsx,pptx,svg\[dq]) + --drive-fast-list-bug-fix Work around a bug in Google Drive listing (default true) --drive-formats string Deprecated: See export_formats --drive-impersonate string Impersonate this user when using a service account --drive-import-formats string Comma separated list of preferred formats for uploading Google docs @@ -21023,6 +23148,7 @@ They control the backends and may be set in the config file. 
--ftp-pass string FTP password (obscured)
 --ftp-port int FTP port number (default 21)
 --ftp-shut-timeout Duration Maximum time to wait for data connection closing status (default 1m0s)
+ --ftp-socks-proxy string Socks 5 proxy host
 --ftp-tls Use Implicit FTPS (FTP over TLS)
 --ftp-tls-cache-size int Size of TLS session cache for all control and data connections (default 32)
 --ftp-user string FTP username (default \[dq]$USER\[dq])
@@ -21091,10 +23217,15 @@ They control the backends and may be set in the config file.
 --internetarchive-front-endpoint string Host of InternetArchive Frontend (default \[dq]https://archive.org\[dq])
 --internetarchive-secret-access-key string IAS3 Secret Key (password)
 --internetarchive-wait-archive Duration Timeout for waiting the server\[aq]s processing tasks (specifically archive and book_op) to finish (default 0s)
+ --jottacloud-auth-url string Auth server URL
+ --jottacloud-client-id string OAuth Client Id
+ --jottacloud-client-secret string OAuth Client Secret
 --jottacloud-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
 --jottacloud-hard-delete Delete files permanently rather than putting them into the trash
 --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
 --jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them
+ --jottacloud-token string OAuth Access Token as a JSON blob
+ --jottacloud-token-url string Token server url
 --jottacloud-trashed-only Only show files that are in the trash
 --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails (default 10Mi)
 --koofr-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
@@ -21115,13 +23246,18 @@ They control the backends and may be set in the config file.
--local-nounc Disable UNC (long path names) conversion on Windows --local-unicode-normalization Apply unicode NFC normalization to paths and filenames --local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated) + --mailru-auth-url string Auth server URL --mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true) + --mailru-client-id string OAuth Client Id + --mailru-client-secret string OAuth Client Secret --mailru-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot) --mailru-pass string Password (obscured) --mailru-speedup-enable Skip full upload if there is another file with same data hash (default true) --mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash) (default \[dq]*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf\[dq]) --mailru-speedup-max-disk SizeSuffix This option allows you to disable speedup (put by hash) for large files (default 3Gi) --mailru-speedup-max-memory SizeSuffix Files larger than the size given below will always be hashed on disk (default 32Mi) + --mailru-token string OAuth Access Token as a JSON blob + --mailru-token-url string Token server url --mailru-user string User name (usually email) --mega-debug Output more debug from Mega --mega-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot) @@ -21155,6 +23291,7 @@ They control the backends and may be set in the config file. --onedrive-server-side-across-configs Deprecated: use --server-side-across-configs instead --onedrive-token string OAuth Access Token as a JSON blob --onedrive-token-url string Token server url + --oos-attempt-resume-upload If true attempt to resume previously started multipart upload for the object --oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi) --oos-compartment string Object storage compartment OCID --oos-config-file string Path to OCI config file (default \[dq]\[ti]/.oci/config\[dq]) @@ -21164,7 +23301,8 @@ They control the backends and may be set in the config file. --oos-disable-checksum Don\[aq]t store MD5 checksum with object metadata --oos-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot) --oos-endpoint string Endpoint for Object storage API - --oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery + --oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery + --oos-max-upload-parts int Maximum number of parts in a multipart upload (default 10000) --oos-namespace string Object storage namespace --oos-no-check-bucket If set, don\[aq]t attempt to check the bucket exists or create it --oos-provider string Choose your Auth Provider (default \[dq]env_auth\[dq]) @@ -21203,8 +23341,27 @@ They control the backends and may be set in the config file. 
--pikpak-trashed-only Only show files that are in the trash --pikpak-use-trash Send files to the trash instead of deleting permanently (default true) --pikpak-user string Pikpak username + --premiumizeme-auth-url string Auth server URL + --premiumizeme-client-id string OAuth Client Id + --premiumizeme-client-secret string OAuth Client Secret --premiumizeme-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot) + --premiumizeme-token string OAuth Access Token as a JSON blob + --premiumizeme-token-url string Token server url + --protondrive-2fa string The 2FA code + --protondrive-app-version string The app version string (default \[dq]macos-drive\[at]1.0.0-alpha.1+rclone\[dq]) + --protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true) + --protondrive-encoding MultiEncoder The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot) + --protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured) + --protondrive-original-file-size Return the file size before encryption (default true) + --protondrive-password string The password of your proton account (obscured) + --protondrive-replace-existing-draft Create a new revision when filename conflict is detected + --protondrive-username string The username of your proton account + --putio-auth-url string Auth server URL + --putio-client-id string OAuth Client Id + --putio-client-secret string OAuth Client Secret --putio-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) + --putio-token string OAuth Access Token as a JSON blob + --putio-token-url string Token server url --qingstor-access-key-id string QingStor Access Key ID --qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi) --qingstor-connection-retries int Number of connection retries (default 3) @@ -21215,6 +23372,13 @@ They control the backends and may be set in the config file. --qingstor-upload-concurrency int Concurrency for multipart uploads (default 1) --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --qingstor-zone string Zone to connect to + --quatrix-api-key string API key for accessing Quatrix account + --quatrix-effective-upload-time string Wanted upload time for one chunk (default \[dq]4s\[dq]) + --quatrix-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) + --quatrix-hard-delete Delete files permanently rather than putting them into the trash + --quatrix-host string Host name of Quatrix account + --quatrix-maximal-summary-chunk-size SizeSuffix The maximal summary for all chunks. It should not be less than \[aq]transfers\[aq]*\[aq]minimal_chunk_size\[aq] (default 95.367Mi) + --quatrix-minimal-chunk-size SizeSuffix The minimal size for one chunk (default 9.537Mi) --s3-access-key-id string AWS Access Key ID --s3-acl string Canned ACL used when creating buckets and storing or copying objects --s3-bucket-acl string Canned ACL used when creating buckets @@ -21235,8 +23399,6 @@ They control the backends and may be set in the config file. 
--s3-list-version int Version of ListObjects to use: 1,2 or 0 for auto --s3-location-constraint string Location constraint - must be set to match the Region --s3-max-upload-parts int Maximum number of parts in a multipart upload (default 10000) - --s3-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s) - --s3-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool --s3-might-gzip Tristate Set this if the backend might gzip objects (default unset) --s3-no-check-bucket If set, don\[aq]t attempt to check the bucket exists or create it --s3-no-head If set, don\[aq]t HEAD uploaded objects to check integrity @@ -21302,14 +23464,21 @@ They control the backends and may be set in the config file. --sftp-sha1sum-command string The command used to read sha1 hashes --sftp-shell-type string The type of SSH shell on remote server, if any --sftp-skip-links Set to skip any symlinks and any other non regular files + --sftp-socks-proxy string Socks 5 proxy host + --sftp-ssh SpaceSepList Path and arguments to external ssh binary --sftp-subsystem string Specifies the SSH2 subsystem on the remote host (default \[dq]sftp\[dq]) --sftp-use-fstat If set use fstat instead of stat --sftp-use-insecure-cipher Enable the use of insecure ciphers and key exchange methods --sftp-user string SSH username (default \[dq]$USER\[dq]) + --sharefile-auth-url string Auth server URL --sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi) + --sharefile-client-id string OAuth Client Id + --sharefile-client-secret string OAuth Client Secret --sharefile-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot) --sharefile-endpoint string Endpoint for API calls --sharefile-root-folder-id string ID of the root folder + --sharefile-token string OAuth Access Token as a JSON blob + --sharefile-token-url string Token server url --sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi) --sia-api-password string Sia Daemon API Password (obscured) --sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default \[dq]http://127.0.0.1:9980\[dq]) @@ -22220,10 +24389,16 @@ Optional Flags: If exceeded, the bisync run will abort. (default: 50%) --force Bypass \[ga]--max-delete\[ga] safety check and run the sync. Consider using with \[ga]--verbose\[ga] + --create-empty-src-dirs Sync creation and deletion of empty directories. + (Not compatible with --remove-empty-dirs) --remove-empty-dirs Remove empty directories at the final cleanup step. -1, --resync Performs the resync run. Warning: Path1 files may overwrite Path2 versions. Consider using \[ga]--verbose\[ga] or \[ga]--dry-run\[ga] first. + --ignore-listing-checksum Do not use checksums for listings + (add --ignore-checksum to additionally skip post-copy checksum checks) + --resilient Allow future runs to retry after certain less-serious errors, + instead of requiring --resync. Use at your own risk! --localtime Use local time in listings (default: UTC) --no-cleanup Retain working files (useful for troubleshooting and testing). --workdir PATH Use custom working directory (useful for testing). @@ -22252,8 +24427,8 @@ Cloud references are distinguished by having a \f[C]:\f[R] in the argument (see Windows support below). 
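+.PP
+For example, a typical first-time sequence might look like this (a
+sketch, using the same Path1/Path2 placeholders as the examples below):
+a one-off \f[C]--resync\f[R] run to establish the baseline, followed by
+plain runs thereafter.
+.IP
+.nf
+\f[C]
+# first run only: establish the baseline listings
+rclone bisync Path1 Path2 --resync
+# routine runs thereafter
+rclone bisync Path1 Path2
+\f[R]
+.fi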
.PP
Path1 and Path2 are treated equally, in that neither has priority for
-file changes, and access efficiency does not change whether a remote is
-on Path1 or Path2.
+file changes (except during \f[C]--resync\f[R]), and access efficiency
+does not change whether a remote is on Path1 or Path2.
.PP
The listings in the bisync working directory (default:
\f[C]\[ti]/.cache/rclone/bisync\f[R]) are named based on the Path1 and
Path2 arguments so that separate syncs to individual directories within
the tree may be set up, e.g.:
\f[C]path_to_local_tree..dropbox_subdir.lst\f[R].
.PP
Any empty directories after the sync on both the Path1 and Path2
-filesystems are not deleted by default.
+filesystems are not deleted by default, unless
+\f[C]--create-empty-src-dirs\f[R] is specified.
If the \f[C]--remove-empty-dirs\f[R] flag is specified, then both paths
-will have any empty directories purged as the last step in the process.
+will have ALL empty directories purged as the last step in the process.
.SS Command-line flags
.SS --resync
.PP
This will effectively make both Path1 and Path2 filesystems contain a
matching superset of all files.
Path2 files that do not exist in Path1 will be copied to Path1, and the
-process will then sync the Path1 tree to Path2.
+process will then copy the Path1 tree to Path2.
.PP
-The base directories on the both Path1 and Path2 filesystems must exist
-or bisync will fail.
+The \f[C]--resync\f[R] sequence is roughly equivalent to:
+.IP
+.nf
+\f[C]
+rclone copy Path2 Path1 --ignore-existing
+rclone copy Path1 Path2
+\f[R]
+.fi
+.PP
+Or, if using \f[C]--create-empty-src-dirs\f[R]:
+.IP
+.nf
+\f[C]
+rclone copy Path2 Path1 --ignore-existing
+rclone copy Path1 Path2 --create-empty-src-dirs
+rclone copy Path2 Path1 --create-empty-src-dirs
+\f[R]
+.fi
+.PP
+The base directories on both Path1 and Path2 filesystems must exist or
+bisync will fail.
This is required for safety - that bisync can verify that both paths
are valid.
.PP
-When using \f[C]--resync\f[R], a newer version of a file either on Path1
-or Path2 filesystem, will overwrite the file on the other path (only the
-last version will be kept).
+When using \f[C]--resync\f[R], a newer version of a file on the Path2
+filesystem will be overwritten by the Path1 filesystem version.
+(Note that this is NOT entirely
+symmetrical (https://github.com/rclone/rclone/issues/5681#issuecomment-938761815).)
Carefully evaluate deltas using
--dry-run (https://rclone.org/flags/#non-backend-flags).
.PP
Access check files are an additional safety measure against data loss.
bisync will ensure it can find matching \f[C]RCLONE_TEST\f[R] files in
the same places in the Path1 and Path2 filesystems.
\f[C]RCLONE_TEST\f[R] files are not generated automatically.
-For \f[C]--check-access\f[R]to succeed, you must first either:
-\f[B]A)\f[R] Place one or more \f[C]RCLONE_TEST\f[R] files in the Path1
-or Path2 filesystem and then do either a run without
-\f[C]--check-access\f[R] or a --resync to set matching files on both
-filesystems, or \f[B]B)\f[R] Set \f[C]--check-filename\f[R] to a
-filename already in use in various locations throughout your sync\[aq]d
-fileset.
-Time stamps and file contents are not important, just the names and
-locations.
+For \f[C]--check-access\f[R] to succeed, you must first either:
+\f[B]A)\f[R] Place one or more \f[C]RCLONE_TEST\f[R] files in both
+systems, or \f[B]B)\f[R] Set \f[C]--check-filename\f[R] to a filename
+already in use in various locations throughout your sync\[aq]d fileset.
+Recommended methods for \f[B]A)\f[R] include:
+.IP \[bu] 2
+\f[C]rclone touch Path1/RCLONE_TEST\f[R] (create a new file)
+.IP \[bu] 2
+\f[C]rclone copyto Path1/RCLONE_TEST Path2/RCLONE_TEST\f[R] (copy an
+existing file)
+.IP \[bu] 2
+\f[C]rclone copy Path1/RCLONE_TEST Path2/RCLONE_TEST --include \[dq]RCLONE_TEST\[dq]\f[R]
+(copy multiple files at once, recursively)
+.IP \[bu] 2
+create the files manually (outside of rclone)
+.IP \[bu] 2
+run \f[C]bisync\f[R] once \f[I]without\f[R] \f[C]--check-access\f[R] to
+set matching files on both filesystems (this will also work, but is not
+preferred, due to the potential for user error: you are temporarily
+disabling the safety feature).
+.PP
+Note that \f[C]--check-access\f[R] is still enforced on
+\f[C]--resync\f[R], so \f[C]bisync --resync --check-access\f[R] will not
+work as a method of initially setting the files (this is to ensure that
+bisync can\[aq]t inadvertently circumvent its own safety
+switch (https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=3.%20%2D%2Dcheck%2Daccess%20doesn%27t%20always%20fail%20when%20it%20should).)
+.PP
+Time stamps and file contents for \f[C]RCLONE_TEST\f[R] files are not
+important, just the names and locations.
If you have symbolic links in your sync tree it is recommended to place
\f[C]RCLONE_TEST\f[R] files in the linked-to directory tree to protect
against bisync assuming a bunch of deleted files if the linked-to tree
@@ -22335,7 +24545,7 @@ new files.
This safety check is intended to block bisync from deleting all of the
files on both filesystems due to a temporary network access issue, or if
the user had inadvertently deleted the files on one side or the other.
-To force the sync either set a different delete percentage limit, e.g.
+To force the sync, either set a different delete percentage limit, e.g.
\f[C]--max-delete 75\f[R] (allows up to 75% deletion), or use
\f[C]--force\f[R] to bypass the check.
.PP
@@ -22352,19 +24562,19 @@ synching with Dropbox.
.PP
If you make changes to your filters file then bisync requires a run with
\f[C]--resync\f[R].
-This is a safety feature, which avoids existing files on the Path1
+This is a safety feature, which prevents existing files on the Path1
and/or Path2 side from seeming to disappear from view (since they are
excluded in the new listings), which would fool bisync into seeing them
as deleted (as compared to the prior run listings), and then bisync
would proceed to delete them for real.
.PP
-To block this from happening bisync calculates an MD5 hash of the
+To block this from happening, bisync calculates an MD5 hash of the
filters file and stores the hash in a \f[C].md5\f[R] file in the same
place as your filters file.
-On the next runs with \f[C]--filters-file\f[R] set, bisync re-calculates
+On the next run with \f[C]--filters-file\f[R] set, bisync re-calculates
the MD5 hash of the current filters file and compares it to the hash
-stored in \f[C].md5\f[R] file.
-If they don\[aq]t match the run aborts with a critical error and thus
+stored in the \f[C].md5\f[R] file.
+If they don\[aq]t match, the run aborts with a critical error and thus
forces you to do a \f[C]--resync\f[R], likely avoiding a disaster.
.SS --check-sync
.PP
@@ -22386,6 +24596,80 @@ reduce the sync run times for very large numbers of files.
The check may be run manually with \f[C]--check-sync=only\f[R].
It runs only the integrity check and terminates without actually
synching.
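+.PP
+For example (a sketch, using the Path1/Path2 placeholders), to run just
+the integrity check against the existing listings without transferring
+anything:
+.IP
+.nf
+\f[C]
+rclone bisync Path1 Path2 --check-sync=only
+\f[R]
+.fi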
+.PP +See also: Concurrent modifications +.SS --ignore-listing-checksum +.PP +By default, bisync will retrieve (or generate) checksums (for backends +that support them) when creating the listings for both paths, and store +the checksums in the listing files. +\f[C]--ignore-listing-checksum\f[R] will disable this behavior, which +may speed things up considerably, especially on backends (such as +local (https://rclone.org/local/)) where hashes must be computed on the +fly instead of retrieved. +Please note the following: +.IP \[bu] 2 +While checksums are (by default) generated and stored in the listing +files, they are NOT currently used for determining diffs (deltas). +It is anticipated that full checksum support will be added in a future +version. +.IP \[bu] 2 +\f[C]--ignore-listing-checksum\f[R] is NOT the same as +\f[C]--ignore-checksum\f[R] (https://rclone.org/docs/#ignore-checksum), +and you may wish to use one or the other, or both. +In a nutshell: \f[C]--ignore-listing-checksum\f[R] controls whether +checksums are considered when scanning for diffs, while +\f[C]--ignore-checksum\f[R] controls whether checksums are considered +during the copy/sync operations that follow, if there ARE diffs. +.IP \[bu] 2 +Unless \f[C]--ignore-listing-checksum\f[R] is passed, bisync currently +computes hashes for one path \f[I]even when there\[aq]s no common hash +with the other path\f[R] (for example, a +crypt (https://rclone.org/crypt/#modified-time-and-hashes) remote.) +.IP \[bu] 2 +If both paths support checksums and have a common hash, AND +\f[C]--ignore-listing-checksum\f[R] was not specified when creating the +listings, \f[C]--check-sync=only\f[R] can be used to compare Path1 vs. +Path2 checksums (as of the time the previous listings were created.) +However, \f[C]--check-sync=only\f[R] will NOT include checksums if the +previous listings were generated on a run using +\f[C]--ignore-listing-checksum\f[R]. +For a more robust integrity check of the current state, consider using +\f[C]check\f[R] (or +\f[C]cryptcheck\f[R] (https://rclone.org/commands/rclone_cryptcheck/), +if at least one path is a \f[C]crypt\f[R] remote.) +.SS --resilient +.PP +\f[B]\f[BI]Caution: this is an experimental feature. Use at your own +risk!\f[B]\f[R] +.PP +By default, most errors or interruptions will cause bisync to abort and +require \f[C]--resync\f[R] to recover. +This is a safety feature, to prevent bisync from running again until a +user checks things out. +However, in some cases, bisync can go too far and enforce a lockout when +one isn\[aq]t actually necessary, like for certain less-serious errors +that might resolve themselves on the next run. +When \f[C]--resilient\f[R] is specified, bisync tries its best to +recover and self-correct, and only requires \f[C]--resync\f[R] as a last +resort when a human\[aq]s involvement is absolutely necessary. +The intended use case is for running bisync as a background process +(such as via scheduled cron). +.PP +When using \f[C]--resilient\f[R] mode, bisync will still report the +error and abort, however it will not lock out future runs -- allowing +the possibility of retrying at the next normally scheduled time, without +requiring a \f[C]--resync\f[R] first. +Examples of such retryable errors include access test failures, missing +listing files, and filter change detections. 
+These safety features will still prevent the \f[I]current\f[R] run from +proceeding -- the difference is that if conditions have improved by the +time of the \f[I]next\f[R] run, that next run will be allowed to +proceed. +Certain more serious errors will still enforce a \f[C]--resync\f[R] +lockout, even in \f[C]--resilient\f[R] mode, to prevent data loss. +.PP +Behavior of \f[C]--resilient\f[R] may change in a future version. .SS Operation .SS Runtime flow details .PP @@ -22521,9 +24805,20 @@ Implementation T} _ T{ +Path1 new/changed AND Path2 new/changed AND Path1 == Path2 +T}@T{ +File is new/changed on Path1 AND new/changed on Path2 AND Path1 version +is currently identical to Path2 +T}@T{ +No change +T}@T{ +None +T} +T{ Path1 new AND Path2 new T}@T{ -File is new on Path1 AND new on Path2 +File is new on Path1 AND new on Path2 (and Path1 version is NOT +identical to Path2) T}@T{ Files renamed to _Path1 and _Path2 T}@T{ @@ -22533,7 +24828,8 @@ T} T{ Path2 newer AND Path1 changed T}@T{ -File is newer on Path2 AND also changed (newer/older/size) on Path1 +File is newer on Path2 AND also changed (newer/older/size) on Path1 (and +Path1 version is NOT identical to Path2) T}@T{ Files renamed to _Path1 and _Path2 T}@T{ @@ -22568,9 +24864,24 @@ T}@T{ \f[C]rclone copy\f[R] Path2 to Path1 T} .TE +.PP +As of \f[C]rclone v1.64\f[R], bisync is now better at detecting +\f[I]false positive\f[R] sync conflicts, which would previously have +resulted in unnecessary renames and duplicates. +Now, when bisync comes to a file that it wants to rename (because it is +new/changed on both sides), it first checks whether the Path1 and Path2 +versions are currently \f[I]identical\f[R] (using the same underlying +function as \f[C]check\f[R].) If bisync concludes that the files are +identical, it will skip them and move on. +Otherwise, it will create renamed \f[C]..Path1\f[R] and +\f[C]..Path2\f[R] duplicates, as before. +This behavior also improves the experience of renaming +directories (https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=Renamed%20directories), +as a \f[C]--resync\f[R] is no longer required, so long as the same +change has been made on both sides. .SS All files changed check .PP -if \f[I]all\f[R] prior existing files on either of the filesystems have +If \f[I]all\f[R] prior existing files on either of the filesystems have changed (e.g. timestamps have changed due to changing the system\[aq]s timezone) then bisync will abort without making any changes. @@ -22611,7 +24922,7 @@ It is recommended to use \f[C]--resync --dry-run --verbose\f[R] initially and \f[I]carefully\f[R] review what changes will be made before running the \f[C]--resync\f[R] without \f[C]--dry-run\f[R]. .PP -Most of these events come up due to a error status from an internal +Most of these events come up due to an error status from an internal call. On such a critical error the \f[C]{...}.path1.lst\f[R] and \f[C]{...}.path2.lst\f[R] listing files are renamed to extension @@ -22624,6 +24935,8 @@ Linux. Some errors are considered temporary and re-running the bisync is not blocked. The \f[I]critical return\f[R] blocks further bisync runs. +.PP +See also: \f[C]--resilient\f[R] .SS Lock file .PP When bisync is running, a lock file is created in the bisync working @@ -22660,7 +24973,7 @@ list. Run the test suite to check for proper operation as described below. 
.PP
The first release of \f[C]rclone bisync\f[R] requires that the underlying
-backend supported the modification time feature and will refuse to run
+backend supports the modification time feature and will refuse to run
otherwise.
This limitation will be lifted in a future \f[C]rclone bisync\f[R]
release.
@@ -22677,41 +24990,127 @@ moment.
Files that \f[B]change during\f[R] a bisync run may result in data loss.
This has been seen in a highly dynamic environment, where the filesystem
is getting hammered by running processes during the sync.
-The solution is to sync at quiet times or filter out unnecessary
-directories and files.
-.SS Empty directories
+The currently recommended solution is to sync at quiet times or filter
+out unnecessary directories and files.
.PP
-New empty directories on one path are \f[I]not\f[R] propagated to the
-other side.
-This is because bisync (and rclone) natively works on files not
-directories.
-The following sequence is a workaround but will not propagate the delete
-of an empty directory to the other side:
+As an alternative
+approach (https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=scans%2C%20to%20avoid-,errors%20if%20files%20changed%20during%20sync,-Given%20the%20number),
+consider using \f[C]--check-sync=false\f[R] (and possibly
+\f[C]--resilient\f[R]) to make bisync more forgiving of filesystems that
+change during the sync.
+Be advised that this may cause bisync to miss events that occur during a
+bisync run, so it is a good idea to supplement this with a periodic
+independent integrity check, and corrective sync if diffs are found.
+For example, a possible sequence could look like this:
.IP "1." 3
+Normally scheduled bisync run:
.IP
.nf
\f[C]
-rclone bisync PATH1 PATH2
-rclone copy PATH1 PATH2 --filter \[dq]+ */\[dq] --filter \[dq]- **\[dq] --create-empty-src-dirs
-rclone copy PATH2 PATH2 --filter \[dq]+ */\[dq] --filter \[dq]- **\[dq] --create-empty-src-dirs
+rclone bisync Path1 Path2 -MPc --check-access --max-delete 10 --filters-file /path/to/filters.txt -v --check-sync=false --no-cleanup --ignore-listing-checksum --disable ListR --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient
\f[R]
.fi
+.IP "2." 3
+Periodic independent integrity check (perhaps scheduled nightly or
+weekly):
+.IP
+.nf
+\f[C]
+rclone check -MvPc Path1 Path2 --filter-from /path/to/filters.txt
+\f[R]
+.fi
+.IP "3." 3
+If diffs are found, you have some choices to correct them.
+If one side is more up-to-date and you want to make the other side match
+it, you could run:
+.IP
+.nf
+\f[C]
+rclone sync Path1 Path2 --filter-from /path/to/filters.txt --create-empty-src-dirs -MPc -v
+\f[R]
+.fi
+.PP
+(or switch Path1 and Path2 to make Path2 the source-of-truth)
+.PP
+Or, if neither side is totally up-to-date, you could run a
+\f[C]--resync\f[R] to bring them back into agreement (but remember that
+this could cause deleted files to re-appear.)
+.PP
+Note also that \f[C]rclone check\f[R] does not currently include empty
+directories, so if you want to know if any empty directories are out of
+sync, consider alternatively running the above \f[C]rclone sync\f[R]
+command with \f[C]--dry-run\f[R] added.
+.SS Empty directories
+.PP
+By default, new/deleted empty directories on one path are \f[I]not\f[R]
+propagated to the other side.
+This is because bisync (and rclone) natively works on files, not
+directories.
+However, this can be changed with the \f[C]--create-empty-src-dirs\f[R]
+flag, which works in much the same way as in
+\f[C]sync\f[R] (https://rclone.org/commands/rclone_sync/) and
+\f[C]copy\f[R] (https://rclone.org/commands/rclone_copy/).
+When used, empty directories created or deleted on one side will also be
+created or deleted on the other side.
+The following should be noted:
+.IP \[bu] 2
+\f[C]--create-empty-src-dirs\f[R] is not compatible with
+\f[C]--remove-empty-dirs\f[R].
+Use only one or the other (or neither).
+.IP \[bu] 2
+It is not recommended to switch back and forth between
+\f[C]--create-empty-src-dirs\f[R] and the default (no
+\f[C]--create-empty-src-dirs\f[R]) without running \f[C]--resync\f[R].
+This is because it may appear as though all directories (not just the
+empty ones) were created/deleted, when actually you\[aq]ve just toggled
+between making them visible/invisible to bisync.
+It looks scarier than it is, but it\[aq]s still probably best to stick
+to one or the other, and use \f[C]--resync\f[R] when you need to switch.
.SS Renamed directories
.PP
-Renaming a folder on the Path1 side results is deleting all files on the
+Renaming a folder on the Path1 side results in deleting all files on the
Path2 side and then copying all files again from Path1 to Path2.
Bisync sees all files in the old directory name as deleted and
all files in the new directory name as new.
-Similarly, renaming a directory on both sides to the same name will
-result in creating \f[C]..path1\f[R] and \f[C]..path2\f[R] files on both
-sides.
-Currently the most effective and efficient method of renaming a
-directory is to rename it on both sides, then do a \f[C]--resync\f[R].
+Currently, the most effective and efficient method of renaming a
+directory is to rename it to the same name on both sides.
+(As of \f[C]rclone v1.64\f[R], a \f[C]--resync\f[R] is no longer
+required after doing so, as bisync will automatically detect that Path1
+and Path2 are in agreement.)
+.SS \f[C]--fast-list\f[R] used by default
+.PP
+Unlike most other rclone commands, bisync uses
+\f[C]--fast-list\f[R] (https://rclone.org/docs/#fast-list) by default,
+for backends that support it.
+In many cases this is desirable; however, there are some scenarios in
+which bisync could be faster \f[I]without\f[R] \f[C]--fast-list\f[R],
+and there is also a known issue concerning Google Drive users with many
+empty
+directories (https://github.com/rclone/rclone/commit/cbf3d4356135814921382dd3285d859d15d0aa77).
+For now, the recommended way to avoid using \f[C]--fast-list\f[R] is to
+add \f[C]--disable ListR\f[R] to all bisync commands.
+The default behavior may change in a future version.
+.SS Overridden Configs
+.PP
+When rclone detects an overridden config, it adds a suffix like
+\f[C]{ABCDE}\f[R] on the fly to the internal name of the remote.
+Bisync follows suit by including this suffix in its listing filenames.
+However, this suffix does not necessarily persist from run to run,
+especially if different flags are provided.
+So if next time the suffix assigned is \f[C]{FGHIJ}\f[R], bisync will
+get confused, because it\[aq]s looking for a listing file with
+\f[C]{FGHIJ}\f[R], when the file it wants has \f[C]{ABCDE}\f[R].
+As a result, it throws
+\f[C]Bisync critical error: cannot find prior Path1 or Path2 listings, likely due to critical error on prior run\f[R]
+and refuses to run again until the user runs a \f[C]--resync\f[R]
+(unless using \f[C]--resilient\f[R]).
+The best workaround at the moment is to set any backend-specific flags
+in the config file (https://rclone.org/commands/rclone_config/) instead
+of specifying them with command flags.
+(You can still override them as needed for other rclone commands.)
.SS Case sensitivity
.PP
Synching with \f[B]case-insensitive\f[R] filesystems, such as Windows or
\f[C]Box\f[R], can result in file name conflicts.
This will be fixed in a future release.
-The near term workaround is to make sure that files on both sides
+The near-term workaround is to make sure that files on both sides
don\[aq]t have spelling case differences (\f[C]Smile.jpg\f[R] vs.
\f[C]smile.jpg\f[R]).
.SS Windows support
@@ -22787,7 +25186,7 @@ Specific files may also be excluded, as with the Dropbox exclusions
example below.
.RE
.IP "2." 3
-Decide if its easier (or cleaner) to:
+Decide if it\[aq]s easier (or cleaner) to:
.RS 4
.IP \[bu] 2
Include select directories and therefore \f[I]exclude everything
else\f[R] -- or --
.IP \[bu] 2
Exclude select directories and therefore \f[I]include everything
else\f[R].
.RE
@@ -22827,7 +25226,7 @@ For example: \f[C]- /Desktop/tempfiles/\f[R], or \f[C]- /testdir/\f[R].
Again, a \f[C]**\f[R] on the end is not necessary.
.IP \[bu] 2
Do \f[I]not\f[R] add a \f[C]- **\f[R] in the file.
-Without this line, everything will be included that has not be
+Without this line, everything that has not been explicitly excluded
+will be included.
.IP \[bu] 2
Disregard step 3.
@@ -23002,7 +25401,7 @@ delete some files.
For example, if a file is new on Path2 and does not exist on Path1 then
it would normally be copied to Path1, but with \f[C]--dry-run\f[R]
enabled those copies don\[aq]t happen, which leads to the attempted
-delete on the Path2, blocked again by --dry-run:
+delete on Path2, blocked again by --dry-run:
\f[C]... Not deleting as --dry-run\f[R].
.PP
This whole confusing situation is an artifact of the \f[C]--dry-run\f[R]
@@ -23012,16 +25411,16 @@
been copied to Path1 then the threatened deletes on Path2 may be
disregarded.
.SS Retries
.PP
-Rclone has built in retries.
+Rclone has built-in retries.
If you run with \f[C]--verbose\f[R] you\[aq]ll see error and retry
messages such as shown below.
This is usually not a bug.
-If at the end of the run you see \f[C]Bisync successful\f[R] and not
+If at the end of the run, you see \f[C]Bisync successful\f[R] and not
\f[C]Bisync critical error\f[R] or \f[C]Bisync aborted\f[R] then the run
was successful, and you can ignore the error messages.
.PP
The following run shows an intermittent fail.
Lines \f[I]5\f[R] and \f[I]6\f[R] are low-level messages.
Line \f[I]6\f[R] is a bubbled-up \f[I]warning\f[R] message, conveying
the error.
Rclone normally retries failing commands, so there may be numerous such
messages in the log.
@@ -23114,7 +25513,7 @@
and an OwnCloud server, with output logged to a runlog file:
.fi
.PP
See crontab
-syntax (https://www.man7.org/linux/man-pages/man1/crontab.1p.html#INPUT_FILES)).
+syntax (https://www.man7.org/linux/man-pages/man1/crontab.1p.html#INPUT_FILES)
for the details of crontab time interval expressions.
.PP
If you run \f[C]rclone bisync\f[R] as a cron job, redirect stdout/stderr
@@ -23340,7 +25739,7 @@
file mismatches in the test tree.
.IP \[bu] 2
Some Dropbox tests can fail, notably printing the following message:
\f[C]src and dst identical but can\[aq]t set mod time without deleting and re-uploading\f[R]
-This is expected and happens due a way Dropbox handles modification
+This is expected and happens due to the way Dropbox handles modification
times.
You should use the \f[C]-refresh-times\f[R] test flag to make up for this. @@ -23578,12 +25977,231 @@ DavideRossi/upback (https://github.com/DavideRossi/upback) .PP Bisync adopts the differential synchronization technique, which is based on keeping history of changes performed by both synchronizing sides. -See the \f[I]Dual Shadow Method\f[R] section in the Neil Fraser\[aq]s +See the \f[I]Dual Shadow Method\f[R] section in Neil Fraser\[aq]s article (https://neil.fraser.name/writing/sync/). .PP Also note a number of academic publications by Benjamin Pierce (http://www.cis.upenn.edu/%7Ebcpierce/papers/index.shtml#File%20Synchronization) about \f[I]Unison\f[R] and synchronization in general. +.SS Changelog +.SS \f[C]v1.64\f[R] +.IP \[bu] 2 +Fixed an +issue (https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=1.%20Dry%20runs%20are%20not%20completely%20dry) +causing dry runs to inadvertently commit filter changes +.IP \[bu] 2 +Fixed an +issue (https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=2.%20%2D%2Dresync%20deletes%20data%2C%20contrary%20to%20docs) +causing \f[C]--resync\f[R] to erroneously delete empty folders and +duplicate files unique to Path2 +.IP \[bu] 2 +\f[C]--check-access\f[R] is now enforced during \f[C]--resync\f[R], +preventing data loss in certain user error +scenarios (https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=%2D%2Dcheck%2Daccess%20doesn%27t%20always%20fail%20when%20it%20should) +.IP \[bu] 2 +Fixed an +issue (https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=5.%20Bisync%20reads%20files%20in%20excluded%20directories%20during%20delete%20operations) +causing bisync to consider more files than necessary due to overbroad +filters during delete operations +.IP \[bu] 2 +Improved detection of false positive change +conflicts (https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=1.%20Identical%20files%20should%20be%20left%20alone%2C%20even%20if%20new/newer/changed%20on%20both%20sides) +(identical files are now left alone instead of renamed) +.IP \[bu] 2 +Added support for +\f[C]--create-empty-src-dirs\f[R] (https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=3.%20Bisync%20should%20create/delete%20empty%20directories%20as%20sync%20does%2C%20when%20%2D%2Dcreate%2Dempty%2Dsrc%2Ddirs%20is%20passed) +.IP \[bu] 2 +Added experimental \f[C]--resilient\f[R] mode to allow recovery from +self-correctable +errors (https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=2.%20Bisync%20should%20be%20more%20resilient%20to%20self%2Dcorrectable%20errors) +.IP \[bu] 2 +Added new \f[C]--ignore-listing-checksum\f[R] +flag (https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=6.%20%2D%2Dignore%2Dchecksum%20should%20be%20split%20into%20two%20flags%20for%20separate%20purposes) +to distinguish from \f[C]--ignore-checksum\f[R] +.IP \[bu] 2 +Performance +improvements (https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=6.%20Deletes%20take%20several%20times%20longer%20than%20copies) +for large remotes +.IP \[bu] 2 +Documentation and testing improvements +.SH Release signing +.PP +The hashes of the binary artefacts of the rclone release are signed with +a public PGP/GPG key. +This can be verified manually as described below. 
+.PP
+The same mechanism is also used by rclone
+selfupdate (https://rclone.org/commands/rclone_selfupdate/) to verify
+that the release has not been tampered with before the new update is
+installed.
+This checks the SHA256 hash and the signature with a public key compiled
+into the rclone binary.
+.SS Release signing key
+.PP
+You may obtain the release signing key from:
+.IP \[bu] 2
+KEYS on this website - this file also contains all past signing keys.
+.IP \[bu] 2
+The git repository hosted on GitHub -
+https://github.com/rclone/rclone/blob/master/docs/content/KEYS
+.IP \[bu] 2
+\f[C]gpg --keyserver hkps://keys.openpgp.org --search nick\[at]craig-wood.com\f[R]
+.IP \[bu] 2
+\f[C]gpg --keyserver hkps://keyserver.ubuntu.com --search nick\[at]craig-wood.com\f[R]
+.IP \[bu] 2
+https://www.craig-wood.com/nick/pub/pgp-key.txt
+.PP
+After importing the key, verify that the fingerprint of one of the keys
+matches: \f[C]FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA\f[R] as this key
+is used for signing.
+.PP
+We recommend that you cross-check the fingerprint shown above through
+the domains listed below.
+By cross-checking the integrity of the fingerprint across multiple
+domains you can be confident that you obtained the correct key.
+.IP \[bu] 2
+The source for this page on
+GitHub (https://github.com/rclone/rclone/blob/master/docs/content/release_signing.md).
+.IP \[bu] 2
+Through DNS \f[C]dig key.rclone.org txt\f[R]
+.PP
+If you find anything that doesn\[aq]t match, please contact the
+developers at once.
+.SS How to verify the release
+.PP
+In the release directory you will see the release files and some files
+called \f[C]MD5SUMS\f[R], \f[C]SHA1SUMS\f[R] and \f[C]SHA256SUMS\f[R].
+.IP
+.nf
+\f[C]
+$ rclone lsf --http-url https://downloads.rclone.org/v1.63.1 :http:
+MD5SUMS
+SHA1SUMS
+SHA256SUMS
+rclone-v1.63.1-freebsd-386.zip
+rclone-v1.63.1-freebsd-amd64.zip
+\&...
+rclone-v1.63.1-windows-arm64.zip
+rclone-v1.63.1.tar.gz
+version.txt
+\f[R]
+.fi
+.PP
+The \f[C]MD5SUMS\f[R], \f[C]SHA1SUMS\f[R] and \f[C]SHA256SUMS\f[R] files
+contain hashes of the binary files in the release directory along with a
+signature.
+.PP
+For example:
+.IP
+.nf
+\f[C]
+$ rclone cat --http-url https://downloads.rclone.org/v1.63.1 :http:SHA256SUMS
+-----BEGIN PGP SIGNED MESSAGE-----
+Hash: SHA1
+
+f6d1b2d7477475ce681bdce8cb56f7870f174cb6b2a9ac5d7b3764296ea4a113 rclone-v1.63.1-freebsd-386.zip
+7266febec1f01a25d6575de51c44ddf749071a4950a6384e4164954dff7ac37e rclone-v1.63.1-freebsd-amd64.zip
+\&...
+66ca083757fb22198309b73879831ed2b42309892394bf193ff95c75dff69c73 rclone-v1.63.1-windows-amd64.zip
+bbb47c16882b6c5f2e8c1b04229378e28f68734c613321ef0ea2263760f74cd0 rclone-v1.63.1-windows-arm64.zip
+-----BEGIN PGP SIGNATURE-----
+
+iF0EARECAB0WIQT79zfs6firGGBL0qyTk14C/ztU+gUCZLVKJQAKCRCTk14C/ztU
++pZuAJ0XJ+QWLP/3jCtkmgcgc4KAwd/rrwCcCRZQ7E+oye1FPY46HOVzCFU3L7g=
+=8qrL
+-----END PGP SIGNATURE-----
+\f[R]
+.fi
+.SS Download the files
+.PP
+The first step is to download the binary and SUMs file and verify that
+the SUMs you have downloaded match.
+Here we download \f[C]rclone-v1.63.1-windows-amd64.zip\f[R] - choose the
+binary (or binaries) appropriate to your architecture.
+We\[aq]ve also chosen the \f[C]SHA256SUMS\f[R] as these are the most
+secure.
+You could also verify the other types of hash for extra security.
+\f[C]rclone selfupdate\f[R] verifies just the \f[C]SHA256SUMS\f[R].
+.IP
+.nf
+\f[C]
+$ mkdir /tmp/check
+$ cd /tmp/check
+$ rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:SHA256SUMS .
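+# now fetch the binary itself - substitute the zip name for your
+# platform, as noted above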
+$ rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:rclone-v1.63.1-windows-amd64.zip .
+\f[R]
+.fi
+.SS Verify the signatures
+.PP
+First verify the signatures on the SHA256 file.
+.PP
+Import the key.
+See above for ways to verify this key is correct.
+.IP
+.nf
+\f[C]
+$ gpg --keyserver keyserver.ubuntu.com --receive-keys FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA
+gpg: key 93935E02FF3B54FA: public key \[dq]Nick Craig-Wood \[dq] imported
+gpg: Total number processed: 1
+gpg: imported: 1
+\f[R]
+.fi
+.PP
+Then check the signature:
+.IP
+.nf
+\f[C]
+$ gpg --verify SHA256SUMS
+gpg: Signature made Mon 17 Jul 2023 15:03:17 BST
+gpg: using DSA key FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA
+gpg: Good signature from \[dq]Nick Craig-Wood \[dq] [ultimate]
+\f[R]
+.fi
+.PP
+Verify the signature was good and is using the fingerprint shown above.
+.PP
+Repeat for \f[C]MD5SUMS\f[R] and \f[C]SHA1SUMS\f[R] if desired.
+.SS Verify the hashes
+.PP
+Now that we know the signatures on the hashes are OK we can verify the
+binaries match the hashes, completing the verification.
+.IP
+.nf
+\f[C]
+$ sha256sum -c SHA256SUMS 2>&1 | grep OK
+rclone-v1.63.1-windows-amd64.zip: OK
+\f[R]
+.fi
+.PP
+Or do the check with rclone:
+.IP
+.nf
+\f[C]
+$ rclone hashsum sha256 -C SHA256SUMS rclone-v1.63.1-windows-amd64.zip
+2023/09/11 10:53:58 NOTICE: SHA256SUMS: improperly formatted checksum line 0
+2023/09/11 10:53:58 NOTICE: SHA256SUMS: improperly formatted checksum line 1
+2023/09/11 10:53:58 NOTICE: SHA256SUMS: improperly formatted checksum line 49
+2023/09/11 10:53:58 NOTICE: SHA256SUMS: 4 warning(s) suppressed...
+= rclone-v1.63.1-windows-amd64.zip
+2023/09/11 10:53:58 NOTICE: Local file system at /tmp/check: 0 differences found
+2023/09/11 10:53:58 NOTICE: Local file system at /tmp/check: 1 matching files
+\f[R]
+.fi
+.SS Verify signatures and hashes together
+.PP
+You can verify the signatures and hashes in one command line like this:
+.IP
+.nf
+\f[C]
+$ gpg --decrypt SHA256SUMS | sha256sum -c --ignore-missing
+gpg: Signature made Mon 17 Jul 2023 15:03:17 BST
+gpg: using DSA key FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA
+gpg: Good signature from \[dq]Nick Craig-Wood \[dq] [ultimate]
+gpg: aka \[dq]Nick Craig-Wood \[dq] [unknown]
+rclone-v1.63.1-windows-amd64.zip: OK
+\f[R]
+.fi
.SH 1Fichier
.PP
This is a backend for the 1fichier (https://1fichier.com) cloud storage
@@ -24449,6 +27067,8 @@ IDrive e2
.IP \[bu] 2
IONOS Cloud
.IP \[bu] 2
+Leviia Object Storage
+.IP \[bu] 2
Liara Object Storage
.IP \[bu] 2
Minio
@@ -24469,6 +27089,8 @@ StackPath
.IP \[bu] 2
Storj
.IP \[bu] 2
+Synology C2 Object Storage
+.IP \[bu] 2
Tencent Cloud Object Storage (COS)
.IP \[bu] 2
Wasabi
@@ -24970,6 +27592,23 @@ $ rclone -q --s3-versions ls s3:cleanup-test
 9 one.txt
\f[R]
.fi
+.SS Versions naming caveat
+.PP
+When using the \f[C]--s3-versions\f[R] flag, rclone relies on the file
+name to work out whether the objects are versions or not.
+Version names are created by inserting a timestamp between the file
+name and its extension.
+.IP
+.nf
+\f[C]
+ 9 file.txt
+ 8 file-v2023-07-17-161032-000.txt
+ 16 file-v2023-06-15-141003-000.txt
+\f[R]
+.fi
+.PP
+If there are real files present with the same names as versions, then
+the behaviour of \f[C]--s3-versions\f[R] can be unpredictable.
.SS Cleanup
.PP
If you run \f[C]rclone cleanup s3:bucket\f[R] then it will remove all
Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, -IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, Petabox, -RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and -Wasabi). +IDrive e2, IONOS Cloud, Leviia, Liara, Lyve Cloud, Minio, Netease, +Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, +Tencent COS, Qiniu and Wasabi). .SS --s3-provider .PP Choose your S3 provider. @@ -25356,6 +27995,12 @@ IONOS Cloud Seagate Lyve Cloud .RE .IP \[bu] 2 +\[dq]Leviia\[dq] +.RS 2 +.IP \[bu] 2 +Leviia Object Storage +.RE +.IP \[bu] 2 \[dq]Liara\[dq] .RS 2 .IP \[bu] 2 @@ -25410,6 +28055,12 @@ StackPath Object Storage Storj (S3 Compatible Gateway) .RE .IP \[bu] 2 +\[dq]Synology\[dq] +.RS 2 +.IP \[bu] 2 +Synology C2 Object Storage +.RE +.IP \[bu] 2 \[dq]TencentCOS\[dq] .RS 2 .IP \[bu] 2 @@ -26191,6 +28842,55 @@ South America (S\[~a]o Paulo) .RE .SS --s3-region .PP +Region where your data stored. +.PP +Properties: +.IP \[bu] 2 +Config: region +.IP \[bu] 2 +Env Var: RCLONE_S3_REGION +.IP \[bu] 2 +Provider: Synology +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]eu-001\[dq] +.RS 2 +.IP \[bu] 2 +Europe Region 1 +.RE +.IP \[bu] 2 +\[dq]eu-002\[dq] +.RS 2 +.IP \[bu] 2 +Europe Region 2 +.RE +.IP \[bu] 2 +\[dq]us-001\[dq] +.RS 2 +.IP \[bu] 2 +US Region 1 +.RE +.IP \[bu] 2 +\[dq]us-002\[dq] +.RS 2 +.IP \[bu] 2 +US Region 2 +.RE +.IP \[bu] 2 +\[dq]tw-001\[dq] +.RS 2 +.IP \[bu] 2 +Asia (Taiwan) +.RE +.RE +.SS --s3-region +.PP Region to connect to. .PP Leave blank if you are using an S3 clone and you don\[aq]t have a @@ -26203,7 +28903,7 @@ Config: region Env Var: RCLONE_S3_REGION .IP \[bu] 2 Provider: -!AWS,Alibaba,ArvanCloud,ChinaMobile,Cloudflare,IONOS,Petabox,Liara,Qiniu,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive +!AWS,Alibaba,ArvanCloud,ChinaMobile,Cloudflare,IONOS,Petabox,Liara,Qiniu,RackCorp,Scaleway,Storj,Synology,TencentCOS,HuaweiOBS,IDrive .IP \[bu] 2 Type: string .IP \[bu] 2 @@ -26973,6 +29673,33 @@ South America (S\[~a]o Paulo) .RE .SS --s3-endpoint .PP +Endpoint for Leviia Object Storage API. +.PP +Properties: +.IP \[bu] 2 +Config: endpoint +.IP \[bu] 2 +Env Var: RCLONE_S3_ENDPOINT +.IP \[bu] 2 +Provider: Leviia +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]s3.leviia.com\[dq] +.RS 2 +.IP \[bu] 2 +The default endpoint +.IP \[bu] 2 +Leviia +.RE +.RE +.SS --s3-endpoint +.PP Endpoint for Liara Object Storage API. .PP Properties: @@ -27402,6 +30129,55 @@ Global Hosted Gateway .RE .SS --s3-endpoint .PP +Endpoint for Synology C2 Object Storage API. +.PP +Properties: +.IP \[bu] 2 +Config: endpoint +.IP \[bu] 2 +Env Var: RCLONE_S3_ENDPOINT +.IP \[bu] 2 +Provider: Synology +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]eu-001.s3.synologyc2.net\[dq] +.RS 2 +.IP \[bu] 2 +EU Endpoint 1 +.RE +.IP \[bu] 2 +\[dq]eu-002.s3.synologyc2.net\[dq] +.RS 2 +.IP \[bu] 2 +EU Endpoint 2 +.RE +.IP \[bu] 2 +\[dq]us-001.s3.synologyc2.net\[dq] +.RS 2 +.IP \[bu] 2 +US Endpoint 1 +.RE +.IP \[bu] 2 +\[dq]us-002.s3.synologyc2.net\[dq] +.RS 2 +.IP \[bu] 2 +US Endpoint 2 +.RE +.IP \[bu] 2 +\[dq]tw-001.s3.synologyc2.net\[dq] +.RS 2 +.IP \[bu] 2 +TW Endpoint 1 +.RE +.RE +.SS --s3-endpoint +.PP Endpoint for Tencent COS API. 
.PP Properties: @@ -27740,7 +30516,7 @@ Config: endpoint Env Var: RCLONE_S3_ENDPOINT .IP \[bu] 2 Provider: -!AWS,ArvanCloud,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,GCS,Liara,Scaleway,StackPath,Storj,RackCorp,Qiniu,Petabox +!AWS,ArvanCloud,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,GCS,Liara,Scaleway,StackPath,Storj,Synology,RackCorp,Qiniu,Petabox .IP \[bu] 2 Type: string .IP \[bu] 2 @@ -28742,7 +31518,7 @@ Config: location_constraint Env Var: RCLONE_S3_LOCATION_CONSTRAINT .IP \[bu] 2 Provider: -!AWS,Alibaba,ArvanCloud,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,Liara,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS,Petabox +!AWS,Alibaba,ArvanCloud,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,Leviia,Liara,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS,Petabox .IP \[bu] 2 Type: string .IP \[bu] 2 @@ -28769,7 +31545,7 @@ Config: acl .IP \[bu] 2 Env Var: RCLONE_S3_ACL .IP \[bu] 2 -Provider: !Storj,Cloudflare +Provider: !Storj,Synology,Cloudflare .IP \[bu] 2 Type: string .IP \[bu] 2 @@ -29310,9 +32086,9 @@ Deep archive storage mode Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, -IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, Petabox, -RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and -Wasabi). +IDrive e2, IONOS Cloud, Leviia, Liara, Lyve Cloud, Minio, Netease, +Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, +Tencent COS, Qiniu and Wasabi). .SS --s3-bucket-acl .PP Canned ACL used when creating buckets. @@ -29942,11 +32718,7 @@ Default: Slash,InvalidUtf8,Dot .SS --s3-memory-pool-flush-time .PP How often internal memory buffer pools will be flushed. -.PP -Uploads which requires additional buffers (f.e multipart) will use -memory pool for allocations. -This option controls how often unused buffers will be removed from the -pool. +(no longer used) .PP Properties: .IP \[bu] 2 @@ -29960,6 +32732,7 @@ Default: 1m0s .SS --s3-memory-pool-use-mmap .PP Whether to use mmap buffers in internal memory pool. +(no longer used) .PP Properties: .IP \[bu] 2 @@ -30364,9 +33137,9 @@ Usage Examples: .IP .nf \f[C] -rclone backend restore s3:bucket/path/to/object [-o priority=PRIORITY] [-o lifetime=DAYS] -rclone backend restore s3:bucket/path/to/directory [-o priority=PRIORITY] [-o lifetime=DAYS] -rclone backend restore s3:bucket [-o priority=PRIORITY] [-o lifetime=DAYS] +rclone backend restore s3:bucket/path/to/object -o priority=PRIORITY -o lifetime=DAYS +rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS +rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS \f[R] .fi .PP @@ -30375,7 +33148,7 @@ Test first with --interactive/-i or --dry-run flags .IP .nf \f[C] -rclone --interactive backend restore --include \[dq]*.txt\[dq] s3:bucket/path -o priority=Standard +rclone --interactive backend restore --include \[dq]*.txt\[dq] s3:bucket/path -o priority=Standard -o lifetime=1 \f[R] .fi .PP @@ -30383,7 +33156,7 @@ All the objects shown will be marked for restore, then .IP .nf \f[C] -rclone backend restore --include \[dq]*.txt\[dq] s3:bucket/path -o priority=Standard +rclone backend restore --include \[dq]*.txt\[dq] s3:bucket/path -o priority=Standard -o lifetime=1 \f[R] .fi .PP @@ -30395,11 +33168,11 @@ The Status will be OK if it was successful or an error message if not. 
[ { \[dq]Status\[dq]: \[dq]OK\[dq], - \[dq]Path\[dq]: \[dq]test.txt\[dq] + \[dq]Remote\[dq]: \[dq]test.txt\[dq] }, { \[dq]Status\[dq]: \[dq]OK\[dq], - \[dq]Path\[dq]: \[dq]test/file4.txt\[dq] + \[dq]Remote\[dq]: \[dq]test/file4.txt\[dq] } ] \f[R] @@ -30412,6 +33185,63 @@ Options: \[dq]lifetime\[dq]: Lifetime of the active copy in days .IP \[bu] 2 \[dq]priority\[dq]: Priority of restore: Standard|Expedited|Bulk +.SS restore-status +.PP +Show the restore status for objects being restored from GLACIER to +normal storage +.IP +.nf +\f[C] +rclone backend restore-status remote: [options] [+] +\f[R] +.fi +.PP +This command can be used to show the status for objects being restored +from GLACIER to normal storage. +.PP +Usage Examples: +.IP +.nf +\f[C] +rclone backend restore-status s3:bucket/path/to/object +rclone backend restore-status s3:bucket/path/to/directory +rclone backend restore-status -o all s3:bucket/path/to/directory +\f[R] +.fi +.PP +This command does not obey the filters. +.PP +It returns a list of status dictionaries. +.IP +.nf +\f[C] +[ + { + \[dq]Remote\[dq]: \[dq]file.txt\[dq], + \[dq]VersionID\[dq]: null, + \[dq]RestoreStatus\[dq]: { + \[dq]IsRestoreInProgress\[dq]: true, + \[dq]RestoreExpiryDate\[dq]: \[dq]2023-09-06T12:29:19+01:00\[dq] + }, + \[dq]StorageClass\[dq]: \[dq]GLACIER\[dq] + }, + { + \[dq]Remote\[dq]: \[dq]test.pdf\[dq], + \[dq]VersionID\[dq]: null, + \[dq]RestoreStatus\[dq]: { + \[dq]IsRestoreInProgress\[dq]: false, + \[dq]RestoreExpiryDate\[dq]: \[dq]2023-09-06T12:29:19+01:00\[dq] + }, + \[dq]StorageClass\[dq]: \[dq]DEEP_ARCHIVE\[dq] + } +] +\f[R] +.fi +.PP +Options: +.IP \[bu] 2 +\[dq]all\[dq]: if set then show all objects, not just ones with restore +status .SS list-multipart-uploads .PP List the unfinished multipart uploads @@ -30534,6 +33364,37 @@ It may return \[dq]Enabled\[dq], \[dq]Suspended\[dq] or \[dq]Unversioned\[dq]. Note that once versioning has been enabled the status can\[aq]t be set back to \[dq]Unversioned\[dq]. +.SS set +.PP +Set command for updating the config parameters. +.IP +.nf +\f[C] +rclone backend set remote: [options] [+] +\f[R] +.fi +.PP +This set command can be used to update the config parameters for a +running s3 backend. +.PP +Usage Examples: +.IP +.nf +\f[C] +rclone backend set s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2] +rclone rc backend/command command=set fs=s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2] +rclone rc backend/command command=set fs=s3: -o session_token=X -o access_key_id=X -o secret_access_key=X +\f[R] +.fi +.PP +The option keys are named as they are in the config file. +.PP +This rebuilds the connection to the s3 backend when it is called with +the new parameters. +Only new parameters need be passed as the values will default to those +currently in use. +.PP +It doesn\[aq]t return anything. .SS Anonymous access to public buckets .PP If you want to use rclone to access a public bucket, configure with a @@ -30696,7 +33557,7 @@ Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. \&... 
-XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi +XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi \[rs] (s3) \&... Storage> s3 @@ -30917,7 +33778,7 @@ Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. [snip] - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi + 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi \[rs] (s3) [snip] Storage> 5 @@ -31260,7 +34121,7 @@ Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. [snip] -XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi +XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi \[rs] (s3) [snip] Storage> s3 @@ -31377,7 +34238,7 @@ Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. 
[snip] -XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi +XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi \[rs] (s3) [snip] Storage> s3 @@ -31694,7 +34555,7 @@ Choose a number from below, or type in your own value \[rs] (alias) 4 / Amazon Drive \[rs] (amazon cloud drive) - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi + 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi \[rs] (s3) [snip] Storage> s3 @@ -32656,6 +35517,147 @@ d) Delete this remote y/e/d> y \f[R] .fi +.SS Leviia Cloud Object Storage +.PP +Leviia Object Storage (https://www.leviia.com/object-storage/), backup +and secure your data in a 100% French cloud, independent of GAFAM.. +.PP +To configure access to Leviia, follow the steps below: +.IP "1." 3 +Run \f[C]rclone config\f[R] and select \f[C]n\f[R] for a new remote. +.IP +.nf +\f[C] +rclone config +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +\f[R] +.fi +.IP "2." 3 +Give the name of the configuration. +For example, name it \[aq]leviia\[aq]. +.IP +.nf +\f[C] +name> leviia +\f[R] +.fi +.IP "3." 3 +Select \f[C]s3\f[R] storage. +.IP +.nf +\f[C] +Choose a number from below, or type in your own value + 1 / 1Fichier + \[rs] (fichier) + 2 / Akamai NetStorage + \[rs] (netstorage) + 3 / Alias for an existing remote + \[rs] (alias) + 4 / Amazon Drive + \[rs] (amazon cloud drive) + 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi + \[rs] (s3) +[snip] +Storage> s3 +\f[R] +.fi +.IP "4." 3 +Select \f[C]Leviia\f[R] provider. +.IP +.nf +\f[C] +Choose a number from below, or type in your own value +1 / Amazon Web Services (AWS) S3 + \[rs] \[dq]AWS\[dq] +[snip] +15 / Leviia Object Storage + \[rs] (Leviia) +[snip] +provider> Leviia +\f[R] +.fi +.IP "5." 3 +Enter your SecretId and SecretKey of Leviia. +.IP +.nf +\f[C] +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Enter a boolean value (true or false). Press Enter for the default (\[dq]false\[dq]). +Choose a number from below, or type in your own value + 1 / Enter AWS credentials in the next step + \[rs] \[dq]false\[dq] + 2 / Get AWS credentials from the environment (env vars or IAM) + \[rs] \[dq]true\[dq] +env_auth> 1 +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. 
+Enter a string value. Press Enter for the default (\[dq]\[dq]). +access_key_id> ZnIx.xxxxxxxxxxxxxxx +AWS Secret Access Key (password) +Leave blank for anonymous access or runtime credentials. +Enter a string value. Press Enter for the default (\[dq]\[dq]). +secret_access_key> xxxxxxxxxxx +\f[R] +.fi +.IP "6." 3 +Select endpoint for Leviia. +.IP +.nf +\f[C] + / The default endpoint + 1 | Leviia. + \[rs] (s3.leviia.com) +[snip] +endpoint> 1 +\f[R] +.fi +.IP "7." 3 +Choose acl. +.IP +.nf +\f[C] +Note that this ACL is applied when server-side copying objects as S3 +doesn\[aq]t copy the ACL from the source but rather writes a fresh one. +Enter a string value. Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \[rs] (private) + / Owner gets FULL_CONTROL. + 2 | The AllUsers group gets READ access. + \[rs] (public-read) +[snip] +acl> 1 +Edit advanced config? (y/n) +y) Yes +n) No (default) +y/n> n +Remote config +-------------------- +[leviia] +- type: s3 +- provider: Leviia +- access_key_id: ZnIx.xxxxxxx +- secret_access_key: xxxxxxxx +- endpoint: s3.leviia.com +- acl: private +-------------------- +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +Current remotes: + +Name Type +==== ==== +leviia s3 +\f[R] +.fi .SS Liara .PP Here is an example of making a Liara Object @@ -33332,18 +36334,19 @@ of an rclone union remote. See List of backends that do not support rclone about (https://rclone.org/overview/#optional-features) and rclone about (https://rclone.org/commands/rclone_about/) -.SH Backblaze B2 +.SS Synology C2 Object Storage .PP -B2 is Backblaze\[aq]s cloud storage -system (https://www.backblaze.com/b2/). +Synology C2 Object +Storage (https://c2.synology.com/en-global/object-storage/overview) +provides a secure, S3-compatible, and cost-effective cloud storage +solution without API request, download fees, and deletion penalty. .PP -Paths are specified as \f[C]remote:bucket\f[R] (or \f[C]remote:\f[R] for -the \f[C]lsd\f[R] command.) You may put subdirectories in too, e.g. -\f[C]remote:bucket/path/to/dir\f[R]. -.SS Configuration +The S3 compatible gateway is configured using \f[C]rclone config\f[R] +with a type of \f[C]s3\f[R] and with a provider name of +\f[C]Synology\f[R]. +Here is an example run of the configurator. .PP -Here is an example of making a b2 configuration. -First run +First run: .IP .nf \f[C] @@ -33352,697 +36355,6 @@ rclone config .fi .PP This will guide you through an interactive setup process. -To authenticate you will either need your Account ID (a short hex -number) and Master Application Key (a long hex number) OR an Application -Key, which is the recommended method. -See below for further details on generating and using an Application -Key. -.IP -.nf -\f[C] -No remotes found, make a new one? -n) New remote -q) Quit config -n/q> n -name> remote -Type of storage to configure. -Choose a number from below, or type in your own value -[snip] -XX / Backblaze B2 - \[rs] \[dq]b2\[dq] -[snip] -Storage> b2 -Account ID or Application Key ID -account> 123456789abc -Application Key -key> 0123456789abcdef0123456789abcdef0123456789 -Endpoint for the service - leave blank normally. 
-endpoint> -Remote config --------------------- -[remote] -account = 123456789abc -key = 0123456789abcdef0123456789abcdef0123456789 -endpoint = --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -\f[R] -.fi -.PP -This remote is called \f[C]remote\f[R] and can now be used like this -.PP -See all buckets -.IP -.nf -\f[C] -rclone lsd remote: -\f[R] -.fi -.PP -Create a new bucket -.IP -.nf -\f[C] -rclone mkdir remote:bucket -\f[R] -.fi -.PP -List the contents of a bucket -.IP -.nf -\f[C] -rclone ls remote:bucket -\f[R] -.fi -.PP -Sync \f[C]/home/local/directory\f[R] to the remote bucket, deleting any -excess files in the bucket. -.IP -.nf -\f[C] -rclone sync --interactive /home/local/directory remote:bucket -\f[R] -.fi -.SS Application Keys -.PP -B2 supports multiple Application Keys for different access permission to -B2 Buckets (https://www.backblaze.com/b2/docs/application_keys.html). -.PP -You can use these with rclone too; you will need to use rclone version -1.43 or later. -.PP -Follow Backblaze\[aq]s docs to create an Application Key with the -required permission and add the \f[C]applicationKeyId\f[R] as the -\f[C]account\f[R] and the \f[C]Application Key\f[R] itself as the -\f[C]key\f[R]. -.PP -Note that you must put the \f[I]applicationKeyId\f[R] as the -\f[C]account\f[R] \[en] you can\[aq]t use the master Account ID. -If you try then B2 will return 401 errors. -.SS --fast-list -.PP -This remote supports \f[C]--fast-list\f[R] which allows you to use fewer -transactions in exchange for more memory. -See the rclone docs (https://rclone.org/docs/#fast-list) for more -details. -.SS Modified time -.PP -The modified time is stored as metadata on the object as -\f[C]X-Bz-Info-src_last_modified_millis\f[R] as milliseconds since -1970-01-01 in the Backblaze standard. -Other tools should be able to use this as a modified time. -.PP -Modified times are used in syncing and are fully supported. -Note that if a modification time needs to be updated on an object then -it will create a new version of the object. -.SS Restricted filename characters -.PP -In addition to the default restricted characters -set (https://rclone.org/overview/#restricted-characters) the following -characters are also replaced: -.PP -.TS -tab(@); -l c c. -T{ -Character -T}@T{ -Value -T}@T{ -Replacement -T} -_ -T{ -\[rs] -T}@T{ -0x5C -T}@T{ -\[uFF3C] -T} -.TE -.PP -Invalid UTF-8 bytes will also be -replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t -be used in JSON strings. -.PP -Note that in 2020-05 Backblaze started allowing \ characters in file -names. -Rclone hasn\[aq]t changed its encoding as this could cause syncs to -re-transfer files. -If you want rclone not to replace \ then see the \f[C]--b2-encoding\f[R] -flag below and remove the \f[C]BackSlash\f[R] from the string. -This can be set in the config. -.SS SHA1 checksums -.PP -The SHA1 checksums of the files are checked on upload and download and -will be used in the syncing process. -.PP -Large files (bigger than the limit in \f[C]--b2-upload-cutoff\f[R]) -which are uploaded in chunks will store their SHA1 on the object as -\f[C]X-Bz-Info-large_file_sha1\f[R] as recommended by Backblaze. -.PP -For a large file to be uploaded with an SHA1 checksum, the source needs -to support SHA1 checksums. -The local disk supports SHA1 checksums so large file transfers from -local disk will have an SHA1. -See the overview (https://rclone.org/overview/#features) for exactly -which remotes support SHA1. 
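-.PP
-For example, the SHA1 checksums B2 holds for a path can be listed with
-(the bucket and path names here are illustrative):
-.IP
-.nf
-\f[C]
-# bucket/path is a placeholder
-rclone sha1sum b2:bucket/path
-\f[R]
-.fi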
-.PP -Sources which don\[aq]t support SHA1, in particular \f[C]crypt\f[R] will -upload large files without SHA1 checksums. -This may be fixed in the future (see -#1767 (https://github.com/rclone/rclone/issues/1767)). -.PP -Files sizes below \f[C]--b2-upload-cutoff\f[R] will always have an SHA1 -regardless of the source. -.SS Transfers -.PP -Backblaze recommends that you do lots of transfers simultaneously for -maximum speed. -In tests from my SSD equipped laptop the optimum setting is about -\f[C]--transfers 32\f[R] though higher numbers may be used for a slight -speed improvement. -The optimum number for you may vary depending on your hardware, how big -the files are, how much you want to load your computer, etc. -The default of \f[C]--transfers 4\f[R] is definitely too low for -Backblaze B2 though. -.PP -Note that uploading big files (bigger than 200 MiB by default) will use -a 96 MiB RAM buffer by default. -There can be at most \f[C]--transfers\f[R] of these in use at any -moment, so this sets the upper limit on the memory used. -.SS Versions -.PP -When rclone uploads a new version of a file it creates a new version of -it (https://www.backblaze.com/b2/docs/file_versions.html). -Likewise when you delete a file, the old version will be marked hidden -and still be available. -Conversely, you may opt in to a \[dq]hard delete\[dq] of files with the -\f[C]--b2-hard-delete\f[R] flag which would permanently remove the file -instead of hiding it. -.PP -Old versions of files, where available, are visible using the -\f[C]--b2-versions\f[R] flag. -.PP -It is also possible to view a bucket as it was at a certain point in -time, using the \f[C]--b2-version-at\f[R] flag. -This will show the file versions as they were at that time, showing -files that have been deleted afterwards, and hiding files that were -created since. -.PP -If you wish to remove all the old versions then you can use the -\f[C]rclone cleanup remote:bucket\f[R] command which will delete all the -old versions of files, leaving the current ones intact. -You can also supply a path and only old versions under that path will be -deleted, e.g. -\f[C]rclone cleanup remote:bucket/path/to/stuff\f[R]. -.PP -Note that \f[C]cleanup\f[R] will remove partially uploaded files from -the bucket if they are more than a day old. -.PP -When you \f[C]purge\f[R] a bucket, the current and the old versions will -be deleted then the bucket will be deleted. -.PP -However \f[C]delete\f[R] will cause the current versions of the files to -become hidden old versions. -.PP -Here is a session showing the listing and retrieval of an old version -followed by a \f[C]cleanup\f[R] of the old versions. -.PP -Show current version and all the versions with \f[C]--b2-versions\f[R] -flag. -.IP -.nf -\f[C] -$ rclone -q ls b2:cleanup-test - 9 one.txt - -$ rclone -q --b2-versions ls b2:cleanup-test - 9 one.txt - 8 one-v2016-07-04-141032-000.txt - 16 one-v2016-07-04-141003-000.txt - 15 one-v2016-07-02-155621-000.txt -\f[R] -.fi -.PP -Retrieve an old version -.IP -.nf -\f[C] -$ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp - -$ ls -l /tmp/one-v2016-07-04-141003-000.txt --rw-rw-r-- 1 ncw ncw 16 Jul 2 17:46 /tmp/one-v2016-07-04-141003-000.txt -\f[R] -.fi -.PP -Clean up all the old versions and show that they\[aq]ve gone. 
-.IP -.nf -\f[C] -$ rclone -q cleanup b2:cleanup-test - -$ rclone -q ls b2:cleanup-test - 9 one.txt - -$ rclone -q --b2-versions ls b2:cleanup-test - 9 one.txt -\f[R] -.fi -.SS Data usage -.PP -It is useful to know how many requests are sent to the server in -different scenarios. -.PP -All copy commands send the following 4 requests: -.IP -.nf -\f[C] -/b2api/v1/b2_authorize_account -/b2api/v1/b2_create_bucket -/b2api/v1/b2_list_buckets -/b2api/v1/b2_list_file_names -\f[R] -.fi -.PP -The \f[C]b2_list_file_names\f[R] request will be sent once for every 1k -files in the remote path, providing the checksum and modification time -of the listed files. -As of version 1.33 issue -#818 (https://github.com/rclone/rclone/issues/818) causes extra requests -to be sent when using B2 with Crypt. -When a copy operation does not require any files to be uploaded, no more -requests will be sent. -.PP -Uploading files that do not require chunking, will send 2 requests per -file upload: -.IP -.nf -\f[C] -/b2api/v1/b2_get_upload_url -/b2api/v1/b2_upload_file/ -\f[R] -.fi -.PP -Uploading files requiring chunking, will send 2 requests (one each to -start and finish the upload) and another 2 requests for each chunk: -.IP -.nf -\f[C] -/b2api/v1/b2_start_large_file -/b2api/v1/b2_get_upload_part_url -/b2api/v1/b2_upload_part/ -/b2api/v1/b2_finish_large_file -\f[R] -.fi -.SS Versions -.PP -Versions can be viewed with the \f[C]--b2-versions\f[R] flag. -When it is set rclone will show and act on older versions of files. -For example -.PP -Listing without \f[C]--b2-versions\f[R] -.IP -.nf -\f[C] -$ rclone -q ls b2:cleanup-test - 9 one.txt -\f[R] -.fi -.PP -And with -.IP -.nf -\f[C] -$ rclone -q --b2-versions ls b2:cleanup-test - 9 one.txt - 8 one-v2016-07-04-141032-000.txt - 16 one-v2016-07-04-141003-000.txt - 15 one-v2016-07-02-155621-000.txt -\f[R] -.fi -.PP -Showing that the current version is unchanged but older versions can be -seen. -These have the UTC date that they were uploaded to the server to the -nearest millisecond appended to them. -.PP -Note that when using \f[C]--b2-versions\f[R] no file write operations -are permitted, so you can\[aq]t upload files or delete them. -.SS B2 and rclone link -.PP -Rclone supports generating file share links for private B2 buckets. -They can either be for a file for example: -.IP -.nf -\f[C] -\&./rclone link B2:bucket/path/to/file.txt -https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx -\f[R] -.fi -.PP -or if run on a directory you will get: -.IP -.nf -\f[C] -\&./rclone link B2:bucket/path -https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx -\f[R] -.fi -.PP -you can then use the authorization token (the part of the url from the -\f[C]?Authorization=\f[R] on) on any file path under that directory. -For example: -.IP -.nf -\f[C] -https://f002.backblazeb2.com/file/bucket/path/to/file1?Authorization=xxxxxxxx -https://f002.backblazeb2.com/file/bucket/path/file2?Authorization=xxxxxxxx -https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxxx -\f[R] -.fi -.SS Standard options -.PP -Here are the Standard options specific to b2 (Backblaze B2). -.SS --b2-account -.PP -Account ID or Application Key ID. -.PP -Properties: -.IP \[bu] 2 -Config: account -.IP \[bu] 2 -Env Var: RCLONE_B2_ACCOUNT -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.SS --b2-key -.PP -Application Key. 
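-.PP
-Both the key and the account can also be supplied through the
-environment rather than the config file, for example (the credential
-values are placeholders):
-.IP
-.nf
-\f[C]
-# placeholder credentials
-export RCLONE_B2_ACCOUNT=123456789abc
-export RCLONE_B2_KEY=0123456789abcdef0123456789abcdef0123456789
-rclone lsd :b2:
-\f[R]
-.fi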
-.PP -Properties: -.IP \[bu] 2 -Config: key -.IP \[bu] 2 -Env Var: RCLONE_B2_KEY -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.SS --b2-hard-delete -.PP -Permanently delete files on remote removal, otherwise hide files. -.PP -Properties: -.IP \[bu] 2 -Config: hard_delete -.IP \[bu] 2 -Env Var: RCLONE_B2_HARD_DELETE -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS Advanced options -.PP -Here are the Advanced options specific to b2 (Backblaze B2). -.SS --b2-endpoint -.PP -Endpoint for the service. -.PP -Leave blank normally. -.PP -Properties: -.IP \[bu] 2 -Config: endpoint -.IP \[bu] 2 -Env Var: RCLONE_B2_ENDPOINT -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --b2-test-mode -.PP -A flag string for X-Bz-Test-Mode header for debugging. -.PP -This is for debugging purposes only. -Setting it to one of the strings below will cause b2 to return specific -errors: -.IP \[bu] 2 -\[dq]fail_some_uploads\[dq] -.IP \[bu] 2 -\[dq]expire_some_account_authorization_tokens\[dq] -.IP \[bu] 2 -\[dq]force_cap_exceeded\[dq] -.PP -These will be set in the \[dq]X-Bz-Test-Mode\[dq] header which is -documented in the b2 integrations -checklist (https://www.backblaze.com/b2/docs/integration_checklist.html). -.PP -Properties: -.IP \[bu] 2 -Config: test_mode -.IP \[bu] 2 -Env Var: RCLONE_B2_TEST_MODE -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --b2-versions -.PP -Include old versions in directory listings. -.PP -Note that when using this no file write operations are permitted, so you -can\[aq]t upload files or delete them. -.PP -Properties: -.IP \[bu] 2 -Config: versions -.IP \[bu] 2 -Env Var: RCLONE_B2_VERSIONS -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --b2-version-at -.PP -Show file versions as they were at the specified time. -.PP -Note that when using this no file write operations are permitted, so you -can\[aq]t upload files or delete them. -.PP -Properties: -.IP \[bu] 2 -Config: version_at -.IP \[bu] 2 -Env Var: RCLONE_B2_VERSION_AT -.IP \[bu] 2 -Type: Time -.IP \[bu] 2 -Default: off -.SS --b2-upload-cutoff -.PP -Cutoff for switching to chunked upload. -.PP -Files above this size will be uploaded in chunks of -\[dq]--b2-chunk-size\[dq]. -.PP -This value should be set no larger than 4.657 GiB (== 5 GB). -.PP -Properties: -.IP \[bu] 2 -Config: upload_cutoff -.IP \[bu] 2 -Env Var: RCLONE_B2_UPLOAD_CUTOFF -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 200Mi -.SS --b2-copy-cutoff -.PP -Cutoff for switching to multipart copy. -.PP -Any files larger than this that need to be server-side copied will be -copied in chunks of this size. -.PP -The minimum is 0 and the maximum is 4.6 GiB. -.PP -Properties: -.IP \[bu] 2 -Config: copy_cutoff -.IP \[bu] 2 -Env Var: RCLONE_B2_COPY_CUTOFF -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 4Gi -.SS --b2-chunk-size -.PP -Upload chunk size. -.PP -When uploading large files, chunk the file into this size. -.PP -Must fit in memory. -These chunks are buffered in memory and there might a maximum of -\[dq]--transfers\[dq] chunks in progress at once. -.PP -5,000,000 Bytes is the minimum size. -.PP -Properties: -.IP \[bu] 2 -Config: chunk_size -.IP \[bu] 2 -Env Var: RCLONE_B2_CHUNK_SIZE -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 96Mi -.SS --b2-disable-checksum -.PP -Disable checksums for large (> upload cutoff) files. -.PP -Normally rclone will calculate the SHA1 checksum of the input before -uploading it so it can add it to metadata on the object. 
-This is great for data integrity checking but can cause long delays for -large files to start uploading. -.PP -Properties: -.IP \[bu] 2 -Config: disable_checksum -.IP \[bu] 2 -Env Var: RCLONE_B2_DISABLE_CHECKSUM -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --b2-download-url -.PP -Custom endpoint for downloads. -.PP -This is usually set to a Cloudflare CDN URL as Backblaze offers free -egress for data downloaded through the Cloudflare network. -Rclone works with private buckets by sending an \[dq]Authorization\[dq] -header. -If the custom endpoint rewrites the requests for authentication, e.g., -in Cloudflare Workers, this header needs to be handled properly. -Leave blank if you want to use the endpoint provided by Backblaze. -.PP -The URL provided here SHOULD have the protocol and SHOULD NOT have a -trailing slash or specify the /file/bucket subpath as rclone will -request files with \[dq]{download_url}/file/{bucket_name}/{path}\[dq]. -.PP -Example: > https://mysubdomain.mydomain.tld (No trailing \[dq]/\[dq], -\[dq]file\[dq] or \[dq]bucket\[dq]) -.PP -Properties: -.IP \[bu] 2 -Config: download_url -.IP \[bu] 2 -Env Var: RCLONE_B2_DOWNLOAD_URL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --b2-download-auth-duration -.PP -Time before the authorization token will expire in s or suffix -ms|s|m|h|d. -.PP -The duration before the download authorization token will expire. -The minimum value is 1 second. -The maximum value is one week. -.PP -Properties: -.IP \[bu] 2 -Config: download_auth_duration -.IP \[bu] 2 -Env Var: RCLONE_B2_DOWNLOAD_AUTH_DURATION -.IP \[bu] 2 -Type: Duration -.IP \[bu] 2 -Default: 1w -.SS --b2-memory-pool-flush-time -.PP -How often internal memory buffer pools will be flushed. -Uploads which requires additional buffers (f.e multipart) will use -memory pool for allocations. -This option controls how often unused buffers will be removed from the -pool. -.PP -Properties: -.IP \[bu] 2 -Config: memory_pool_flush_time -.IP \[bu] 2 -Env Var: RCLONE_B2_MEMORY_POOL_FLUSH_TIME -.IP \[bu] 2 -Type: Duration -.IP \[bu] 2 -Default: 1m0s -.SS --b2-memory-pool-use-mmap -.PP -Whether to use mmap buffers in internal memory pool. -.PP -Properties: -.IP \[bu] 2 -Config: memory_pool_use_mmap -.IP \[bu] 2 -Env Var: RCLONE_B2_MEMORY_POOL_USE_MMAP -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --b2-encoding -.PP -The encoding for the backend. -.PP -See the encoding section in the -overview (https://rclone.org/overview/#encoding) for more info. -.PP -Properties: -.IP \[bu] 2 -Config: encoding -.IP \[bu] 2 -Env Var: RCLONE_B2_ENCODING -.IP \[bu] 2 -Type: MultiEncoder -.IP \[bu] 2 -Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot -.SS Limitations -.PP -\f[C]rclone about\f[R] is not supported by the B2 backend. -Backends without this capability cannot determine free space for an -rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member -of an rclone union remote. -.PP -See List of backends that do not support rclone -about (https://rclone.org/overview/#optional-features) and rclone -about (https://rclone.org/commands/rclone_about/) -.SH Box -.PP -Paths are specified as \f[C]remote:path\f[R] -.PP -Paths may be as deep as required, e.g. -\f[C]remote:directory/subdirectory\f[R]. -.PP -The initial setup for Box involves getting a token from Box which you -can do either in your browser, or with a config.json downloaded from Box -to use JWT authentication. -\f[C]rclone config\f[R] walks you through it. 
-.SS Configuration -.PP -Here is an example of how to make a remote called \f[C]remote\f[R]. -First run: -.IP -.nf -\f[C] - rclone config -\f[R] -.fi -.PP -This will guide you through an interactive setup process: .IP .nf \f[C] @@ -34050,4538 +36362,4217 @@ No remotes found, make a new one? n) New remote s) Set configuration password q) Quit config + n/s/q> n -name> remote -Type of storage to configure. -Choose a number from below, or type in your own value -[snip] -XX / Box - \[rs] \[dq]box\[dq] -[snip] -Storage> box -Box App Client Id - leave blank normally. -client_id> -Box App Client Secret - leave blank normally. -client_secret> -Box App config.json location -Leave blank normally. -Enter a string value. Press Enter for the default (\[dq]\[dq]). -box_config_file> -Box App Primary Access Token -Leave blank normally. -Enter a string value. Press Enter for the default (\[dq]\[dq]). -access_token> -Enter a string value. Press Enter for the default (\[dq]user\[dq]). -Choose a number from below, or type in your own value - 1 / Rclone should act on behalf of a user - \[rs] \[dq]user\[dq] - 2 / Rclone should act on behalf of a service account - \[rs] \[dq]enterprise\[dq] -box_sub_type> -Remote config -Use web browser to automatically authenticate rclone with remote? - * Say Y if the machine running rclone has a web browser you can use - * Say N if running rclone on a (remote) machine without web browser access -If not sure try Y. If Y failed, try N. -y) Yes -n) No -y/n> y -If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth -Log in and authorize rclone for access -Waiting for code... -Got code --------------------- -[remote] -client_id = -client_secret = -token = {\[dq]access_token\[dq]:\[dq]XXX\[dq],\[dq]token_type\[dq]:\[dq]bearer\[dq],\[dq]refresh_token\[dq]:\[dq]XXX\[dq],\[dq]expiry\[dq]:\[dq]XXX\[dq]} --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -\f[R] -.fi -.PP -See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. -.PP -Note that rclone runs a webserver on your local machine to collect the -token as returned from Box. -This only runs from the moment it opens your browser to the moment you -get back the verification code. -This is on \f[C]http://127.0.0.1:53682/\f[R] and this it may require you -to unblock it temporarily if you are running a host firewall. -.PP -Once configured you can then use \f[C]rclone\f[R] like this, -.PP -List directories in top level of your Box -.IP -.nf -\f[C] -rclone lsd remote: -\f[R] -.fi -.PP -List all the files in your Box -.IP -.nf -\f[C] -rclone ls remote: -\f[R] -.fi -.PP -To copy a local directory to an Box directory called backup -.IP -.nf -\f[C] -rclone copy /home/source remote:backup -\f[R] -.fi -.SS Using rclone with an Enterprise account with SSO -.PP -If you have an \[dq]Enterprise\[dq] account type with Box with single -sign on (SSO), you need to create a password to use Box with rclone. -This can be done at your Enterprise Box account by going to Settings, -\[dq]Account\[dq] Tab, and then set the password in the -\[dq]Authentication\[dq] field. -.PP -Once you have done this, you can setup your Enterprise Box account using -the same procedure detailed above in the, using the password you have -just set. 
-.SS Invalid refresh token -.PP -According to the box -docs (https://developer.box.com/v2.0/docs/oauth-20#section-6-using-the-access-and-refresh-tokens): -.RS -.PP -Each refresh_token is valid for one use in 60 days. -.RE -.PP -This means that if you -.IP \[bu] 2 -Don\[aq]t use the box remote for 60 days -.IP \[bu] 2 -Copy the config file with a box refresh token in and use it in two -places -.IP \[bu] 2 -Get an error on a token refresh -.PP -then rclone will return an error which includes the text -\f[C]Invalid refresh token\f[R]. -.PP -To fix this you will need to use oauth2 again to update the refresh -token. -You can use the methods in the remote setup -docs (https://rclone.org/remote_setup/), bearing in mind that if you use -the copy the config file method, you should not use that remote on the -computer you did the authentication on. -.PP -Here is how to do it. -.IP -.nf -\f[C] -$ rclone config -Current remotes: +Enter name for new remote.1 +name> syno -Name Type -==== ==== -remote box - -e) Edit existing remote -n) New remote -d) Delete remote -r) Rename remote -c) Copy remote -s) Set configuration password -q) Quit config -e/n/d/r/c/s/q> e -Choose a number from below, or type in an existing value - 1 > remote -remote> remote --------------------- -[remote] -type = box -token = {\[dq]access_token\[dq]:\[dq]XXX\[dq],\[dq]token_type\[dq]:\[dq]bearer\[dq],\[dq]refresh_token\[dq]:\[dq]XXX\[dq],\[dq]expiry\[dq]:\[dq]2017-07-08T23:40:08.059167677+01:00\[dq]} --------------------- -Edit remote -Value \[dq]client_id\[dq] = \[dq]\[dq] -Edit? (y/n)> -y) Yes -n) No -y/n> n -Value \[dq]client_secret\[dq] = \[dq]\[dq] -Edit? (y/n)> -y) Yes -n) No -y/n> n -Remote config -Already have a token - refresh? -y) Yes -n) No -y/n> y -Use web browser to automatically authenticate rclone with remote? - * Say Y if the machine running rclone has a web browser you can use - * Say N if running rclone on a (remote) machine without web browser access -If not sure try Y. If Y failed, try N. -y) Yes -n) No -y/n> y -If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth -Log in and authorize rclone for access -Waiting for code... -Got code --------------------- -[remote] -type = box -token = {\[dq]access_token\[dq]:\[dq]YYY\[dq],\[dq]token_type\[dq]:\[dq]bearer\[dq],\[dq]refresh_token\[dq]:\[dq]YYY\[dq],\[dq]expiry\[dq]:\[dq]2017-07-23T12:22:29.259137901+01:00\[dq]} --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -\f[R] -.fi -.SS Modified time and hashes -.PP -Box allows modification times to be set on objects accurate to 1 second. -These will be used to detect whether objects need syncing or not. -.PP -Box supports SHA1 type hashes, so you can use the \f[C]--checksum\f[R] -flag. -.SS Restricted filename characters -.PP -In addition to the default restricted characters -set (https://rclone.org/overview/#restricted-characters) the following -characters are also replaced: -.PP -.TS -tab(@); -l c c. -T{ -Character -T}@T{ -Value -T}@T{ -Replacement -T} -_ -T{ -\[rs] -T}@T{ -0x5C -T}@T{ -\[uFF3C] -T} -.TE -.PP -File names can also not end with the following characters. -These only get replaced if they are the last character in the name: -.PP -.TS -tab(@); -l c c. -T{ -Character -T}@T{ -Value -T}@T{ -Replacement -T} -_ -T{ -SP -T}@T{ -0x20 -T}@T{ -\[u2420] -T} -.TE -.PP -Invalid UTF-8 bytes will also be -replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t -be used in JSON strings. 
-.SS Transfers -.PP -For files above 50 MiB rclone will use a chunked transfer. -Rclone will upload up to \f[C]--transfers\f[R] chunks at the same time -(shared among all the multipart uploads). -Chunks are buffered in memory and are normally 8 MiB so increasing -\f[C]--transfers\f[R] will increase memory use. -.SS Deleting files -.PP -Depending on the enterprise settings for your user, the item will either -be actually deleted from Box or moved to the trash. -.PP -Emptying the trash is supported via the rclone however cleanup command -however this deletes every trashed file and folder individually so it -may take a very long time. -Emptying the trash via the WebUI does not have this limitation so it is -advised to empty the trash via the WebUI. -.SS Root folder ID -.PP -You can set the \f[C]root_folder_id\f[R] for rclone. -This is the directory (identified by its \f[C]Folder ID\f[R]) that -rclone considers to be the root of your Box drive. -.PP -Normally you will leave this blank and rclone will determine the correct -root to use itself. -.PP -However you can set this to restrict rclone to a specific folder -hierarchy. -.PP -In order to do this you will have to find the \f[C]Folder ID\f[R] of the -directory you wish rclone to display. -This will be the last segment of the URL when you open the relevant -folder in the Box web interface. -.PP -So if the folder you want rclone to use has a URL which looks like -\f[C]https://app.box.com/folder/11xxxxxxxxx8\f[R] in the browser, then -you use \f[C]11xxxxxxxxx8\f[R] as the \f[C]root_folder_id\f[R] in the -config. -.SS Standard options -.PP -Here are the Standard options specific to box (Box). -.SS --box-client-id -.PP -OAuth Client Id. -.PP -Leave blank normally. -.PP -Properties: -.IP \[bu] 2 -Config: client_id -.IP \[bu] 2 -Env Var: RCLONE_BOX_CLIENT_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --box-client-secret -.PP -OAuth Client Secret. -.PP -Leave blank normally. -.PP -Properties: -.IP \[bu] 2 -Config: client_secret -.IP \[bu] 2 -Env Var: RCLONE_BOX_CLIENT_SECRET -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --box-box-config-file -.PP -Box App config.json location -.PP -Leave blank normally. -.PP -Leading \f[C]\[ti]\f[R] will be expanded in the file name as will -environment variables such as \f[C]${RCLONE_CONFIG_DIR}\f[R]. -.PP -Properties: -.IP \[bu] 2 -Config: box_config_file -.IP \[bu] 2 -Env Var: RCLONE_BOX_BOX_CONFIG_FILE -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --box-access-token -.PP -Box App Primary Access Token -.PP -Leave blank normally. -.PP -Properties: -.IP \[bu] 2 -Config: access_token -.IP \[bu] 2 -Env Var: RCLONE_BOX_ACCESS_TOKEN -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --box-box-sub-type -.PP -Properties: -.IP \[bu] 2 -Config: box_sub_type -.IP \[bu] 2 -Env Var: RCLONE_BOX_BOX_SUB_TYPE -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]user\[dq] -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]user\[dq] -.RS 2 -.IP \[bu] 2 -Rclone should act on behalf of a user. -.RE -.IP \[bu] 2 -\[dq]enterprise\[dq] -.RS 2 -.IP \[bu] 2 -Rclone should act on behalf of a service account. -.RE -.RE -.SS Advanced options -.PP -Here are the Advanced options specific to box (Box). -.SS --box-token -.PP -OAuth Access Token as a JSON blob. -.PP -Properties: -.IP \[bu] 2 -Config: token -.IP \[bu] 2 -Env Var: RCLONE_BOX_TOKEN -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --box-auth-url -.PP -Auth server URL. 
-.PP -Leave blank to use the provider defaults. -.PP -Properties: -.IP \[bu] 2 -Config: auth_url -.IP \[bu] 2 -Env Var: RCLONE_BOX_AUTH_URL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --box-token-url -.PP -Token server url. -.PP -Leave blank to use the provider defaults. -.PP -Properties: -.IP \[bu] 2 -Config: token_url -.IP \[bu] 2 -Env Var: RCLONE_BOX_TOKEN_URL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --box-root-folder-id -.PP -Fill in for rclone to use a non root folder as its starting point. -.PP -Properties: -.IP \[bu] 2 -Config: root_folder_id -.IP \[bu] 2 -Env Var: RCLONE_BOX_ROOT_FOLDER_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]0\[dq] -.SS --box-upload-cutoff -.PP -Cutoff for switching to multipart upload (>= 50 MiB). -.PP -Properties: -.IP \[bu] 2 -Config: upload_cutoff -.IP \[bu] 2 -Env Var: RCLONE_BOX_UPLOAD_CUTOFF -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 50Mi -.SS --box-commit-retries -.PP -Max number of times to try committing a multipart file. -.PP -Properties: -.IP \[bu] 2 -Config: commit_retries -.IP \[bu] 2 -Env Var: RCLONE_BOX_COMMIT_RETRIES -.IP \[bu] 2 -Type: int -.IP \[bu] 2 -Default: 100 -.SS --box-list-chunk -.PP -Size of listing chunk 1-1000. -.PP -Properties: -.IP \[bu] 2 -Config: list_chunk -.IP \[bu] 2 -Env Var: RCLONE_BOX_LIST_CHUNK -.IP \[bu] 2 -Type: int -.IP \[bu] 2 -Default: 1000 -.SS --box-owned-by -.PP -Only show items owned by the login (email address) passed in. -.PP -Properties: -.IP \[bu] 2 -Config: owned_by -.IP \[bu] 2 -Env Var: RCLONE_BOX_OWNED_BY -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --box-encoding -.PP -The encoding for the backend. -.PP -See the encoding section in the -overview (https://rclone.org/overview/#encoding) for more info. -.PP -Properties: -.IP \[bu] 2 -Config: encoding -.IP \[bu] 2 -Env Var: RCLONE_BOX_ENCODING -.IP \[bu] 2 -Type: MultiEncoder -.IP \[bu] 2 -Default: Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot -.SS Limitations -.PP -Note that Box is case insensitive so you can\[aq]t have a file called -\[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. -.PP -Box file names can\[aq]t have the \f[C]\[rs]\f[R] character in. -rclone maps this to and from an identical looking unicode equivalent -\f[C]\[uFF3C]\f[R] (U+FF3C Fullwidth Reverse Solidus). -.PP -Box only supports filenames up to 255 characters in length. -.PP -Box has API rate -limits (https://developer.box.com/guides/api-calls/permissions-and-errors/rate-limits/) -that sometimes reduce the speed of rclone. -.PP -\f[C]rclone about\f[R] is not supported by the Box backend. -Backends without this capability cannot determine free space for an -rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member -of an rclone union remote. -.PP -See List of backends that do not support rclone -about (https://rclone.org/overview/#optional-features) and rclone -about (https://rclone.org/commands/rclone_about/) -.SH Cache -.PP -The \f[C]cache\f[R] remote wraps another existing remote and stores file -structure and its data for long running tasks like -\f[C]rclone mount\f[R]. -.SS Status -.PP -The cache backend code is working but it currently doesn\[aq]t have a -maintainer so there are outstanding -bugs (https://github.com/rclone/rclone/issues?q=is%3Aopen+is%3Aissue+label%3Abug+label%3A%22Remote%3A+Cache%22) -which aren\[aq]t getting fixed. -.PP -The cache backend is due to be phased out in favour of the VFS caching -layer eventually which is more tightly integrated into rclone. 
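-.PP
-For mounts this alternative usually means enabling the VFS file cache
-directly, for example (the remote and mount point are illustrative):
-.IP
-.nf
-\f[C]
-# remote: and /mnt/remote are placeholders
-rclone mount --vfs-cache-mode full remote: /mnt/remote
-\f[R]
-.fi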
-.PP -Until this happens we recommend only using the cache backend if you find -you can\[aq]t work without it. -There are many docs online describing the use of the cache backend to -minimize API hits and by-and-large these are out of date and the cache -backend isn\[aq]t needed in those scenarios any more. -.SS Configuration -.PP -To get started you just need to have an existing remote which can be -configured with \f[C]cache\f[R]. -.PP -Here is an example of how to make a remote called \f[C]test-cache\f[R]. -First run: -.IP -.nf -\f[C] - rclone config -\f[R] -.fi -.PP -This will guide you through an interactive setup process: -.IP -.nf -\f[C] -No remotes found, make a new one? -n) New remote -r) Rename remote -c) Copy remote -s) Set configuration password -q) Quit config -n/r/c/s/q> n -name> test-cache Type of storage to configure. -Choose a number from below, or type in your own value -[snip] -XX / Cache a remote - \[rs] \[dq]cache\[dq] -[snip] -Storage> cache -Remote to cache. -Normally should contain a \[aq]:\[aq] and a path, e.g. \[dq]myremote:path/to/dir\[dq], -\[dq]myremote:bucket\[dq] or maybe \[dq]myremote:\[dq] (not recommended). -remote> local:/test -Optional: The URL of the Plex server -plex_url> http://127.0.0.1:32400 -Optional: The username of the Plex user -plex_username> dummyusername -Optional: The password of the Plex user -y) Yes type in my own password -g) Generate random password -n) No leave this optional password blank -y/g/n> y -Enter the password: -password: -Confirm the password: -password: -The size of a chunk. Lower value good for slow connections but can affect seamless reading. -Default: 5M -Choose a number from below, or type in your own value - 1 / 1 MiB - \[rs] \[dq]1M\[dq] - 2 / 5 MiB - \[rs] \[dq]5M\[dq] - 3 / 10 MiB - \[rs] \[dq]10M\[dq] -chunk_size> 2 -How much time should object info (file size, file hashes, etc.) be stored in cache. Use a very high value if you don\[aq]t plan on changing the source FS from outside the cache. -Accepted units are: \[dq]s\[dq], \[dq]m\[dq], \[dq]h\[dq]. -Default: 5m -Choose a number from below, or type in your own value - 1 / 1 hour - \[rs] \[dq]1h\[dq] - 2 / 24 hours - \[rs] \[dq]24h\[dq] - 3 / 24 hours - \[rs] \[dq]48h\[dq] -info_age> 2 -The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. -Default: 10G -Choose a number from below, or type in your own value - 1 / 500 MiB - \[rs] \[dq]500M\[dq] - 2 / 1 GiB - \[rs] \[dq]1G\[dq] - 3 / 10 GiB - \[rs] \[dq]10G\[dq] -chunk_total_size> 3 -Remote config --------------------- -[test-cache] -remote = local:/test -plex_url = http://127.0.0.1:32400 -plex_username = dummyusername -plex_password = *** ENCRYPTED *** -chunk_size = 5M -info_age = 48h -chunk_total_size = 10G -\f[R] -.fi -.PP -You can then use it like this, -.PP -List directories in top level of your drive -.IP -.nf -\f[C] -rclone lsd test-cache: -\f[R] -.fi -.PP -List all the files in your drive -.IP -.nf -\f[C] -rclone ls test-cache: -\f[R] -.fi -.PP -To start a cached mount -.IP -.nf -\f[C] -rclone mount --allow-other test-cache: /var/tmp/test-cache -\f[R] -.fi -.SS Write Features -.SS Offline uploading -.PP -In an effort to make writing through cache more reliable, the backend -now supports this feature which can be activated by specifying a -\f[C]cache-tmp-upload-path\f[R]. -.PP -A files goes through these states when using this feature: -.IP "1." 3 -An upload is started (usually by copying a file on the cache remote) -.IP "2." 
3 -When the copy to the temporary location is complete the file is part of -the cached remote and looks and behaves like any other file (reading -included) -.IP "3." 3 -After \f[C]cache-tmp-wait-time\f[R] passes and the file is next in line, -\f[C]rclone move\f[R] is used to move the file to the cloud provider -.IP "4." 3 -Reading the file still works during the upload but most modifications on -it will be prohibited -.IP "5." 3 -Once the move is complete the file is unlocked for modifications as it -becomes as any other regular file -.IP "6." 3 -If the file is being read through \f[C]cache\f[R] when it\[aq]s actually -deleted from the temporary path then \f[C]cache\f[R] will simply swap -the source to the cloud provider without interrupting the reading (small -blip can happen though) -.PP -Files are uploaded in sequence and only one file is uploaded at a time. -Uploads will be stored in a queue and be processed based on the order -they were added. -The queue and the temporary storage is persistent across restarts but -can be cleared on startup with the \f[C]--cache-db-purge\f[R] flag. -.SS Write Support -.PP -Writes are supported through \f[C]cache\f[R]. -One caveat is that a mounted cache remote does not add any retry or -fallback mechanism to the upload operation. -This will depend on the implementation of the wrapped remote. -Consider using \f[C]Offline uploading\f[R] for reliable writes. -.PP -One special case is covered with \f[C]cache-writes\f[R] which will cache -the file data at the same time as the upload when it is enabled making -it available from the cache store immediately once the upload is -finished. -.SS Read Features -.SS Multiple connections -.PP -To counter the high latency between a local PC where rclone is running -and cloud providers, the cache remote can split multiple requests to the -cloud provider for smaller file chunks and combines them together -locally where they can be available almost immediately before the reader -usually needs them. -.PP -This is similar to buffering when media files are played online. -Rclone will stay around the current marker but always try its best to -stay ahead and prepare the data before. -.SS Plex Integration -.PP -There is a direct integration with Plex which allows cache to detect -during reading if the file is in playback or not. -This helps cache to adapt how it queries the cloud provider depending on -what is needed for. -.PP -Scans will have a minimum amount of workers (1) while in a confirmed -playback cache will deploy the configured number of workers. -.PP -This integration opens the doorway to additional performance -improvements which will be explored in the near future. -.PP -\f[B]Note:\f[R] If Plex options are not configured, \f[C]cache\f[R] will -function with its configured options without adapting any of its -settings. -.PP -How to enable? -Run \f[C]rclone config\f[R] and add all the Plex options (endpoint, -username and password) in your remote and it will be automatically -enabled. -.PP -Affected settings: - \f[C]cache-workers\f[R]: \f[I]Configured value\f[R] -during confirmed playback or \f[I]1\f[R] all the other times -.SS Certificate Validation -.PP -When the Plex server is configured to only accept secure connections, it -is possible to use \f[C].plex.direct\f[R] URLs to ensure certificate -validation succeeds. -These URLs are used by Plex internally to connect to the Plex server -securely. 
-.PP -The format for these URLs is the following: -.PP -\f[C]https://ip-with-dots-replaced.server-hash.plex.direct:32400/\f[R] -.PP -The \f[C]ip-with-dots-replaced\f[R] part can be any IPv4 address, where -the dots have been replaced with dashes, e.g. -\f[C]127.0.0.1\f[R] becomes \f[C]127-0-0-1\f[R]. -.PP -To get the \f[C]server-hash\f[R] part, the easiest way is to visit -.PP -https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token -.PP -This page will list all the available Plex servers for your account with -at least one \f[C].plex.direct\f[R] link for each. -Copy one URL and replace the IP address with the desired address. -This can be used as the \f[C]plex_url\f[R] value. -.SS Known issues -.SS Mount and --dir-cache-time -.PP ---dir-cache-time controls the first layer of directory caching which -works at the mount layer. -Being an independent caching mechanism from the \f[C]cache\f[R] backend, -it will manage its own entries based on the configured time. -.PP -To avoid getting in a scenario where dir cache has obsolete data and -cache would have the correct one, try to set \f[C]--dir-cache-time\f[R] -to a lower time than \f[C]--cache-info-age\f[R]. -Default values are already configured in this way. -.SS Windows support - Experimental -.PP -There are a couple of issues with Windows \f[C]mount\f[R] functionality -that still require some investigations. -It should be considered as experimental thus far as fixes come in for -this OS. -.PP -Most of the issues seem to be related to the difference between -filesystems on Linux flavors and Windows as cache is heavily dependent -on them. -.PP -Any reports or feedback on how cache behaves on this OS is greatly -appreciated. -.IP \[bu] 2 -https://github.com/rclone/rclone/issues/1935 -.IP \[bu] 2 -https://github.com/rclone/rclone/issues/1907 -.IP \[bu] 2 -https://github.com/rclone/rclone/issues/1834 -.SS Risk of throttling -.PP -Future iterations of the cache backend will make use of the pooling -functionality of the cloud provider to synchronize and at the same time -make writing through it more tolerant to failures. -.PP -There are a couple of enhancements in track to add these but in the -meantime there is a valid concern that the expiring cache listings can -lead to cloud provider throttles or bans due to repeated queries on it -for very large mounts. -.PP -Some recommendations: - don\[aq]t use a very small interval for entry -information (\f[C]--cache-info-age\f[R]) - while writes aren\[aq]t yet -optimised, you can still write through \f[C]cache\f[R] which gives you -the advantage of adding the file in the cache at the same time if -configured to do so. -.PP -Future enhancements: -.IP \[bu] 2 -https://github.com/rclone/rclone/issues/1937 -.IP \[bu] 2 -https://github.com/rclone/rclone/issues/1936 -.SS cache and crypt -.PP -One common scenario is to keep your data encrypted in the cloud provider -using the \f[C]crypt\f[R] remote. -\f[C]crypt\f[R] uses a similar technique to wrap around an existing -remote and handles this translation in a seamless way. -.PP -There is an issue with wrapping the remotes in this order: \f[B]cloud -remote\f[R] -> \f[B]crypt\f[R] -> \f[B]cache\f[R] -.PP -During testing, I experienced a lot of bans with the remotes in this -order. -I suspect it might be related to how crypt opens files on the cloud -provider which makes it think we\[aq]re downloading the full file -instead of small chunks. 
-Organizing the remotes in this order yields better results: \f[B]cloud -remote\f[R] -> \f[B]cache\f[R] -> \f[B]crypt\f[R] -.SS absolute remote paths -.PP -\f[C]cache\f[R] can not differentiate between relative and absolute -paths for the wrapped remote. -Any path given in the \f[C]remote\f[R] config setting and on the command -line will be passed to the wrapped remote as is, but for storing the -chunks on disk the path will be made relative by removing any leading -\f[C]/\f[R] character. -.PP -This behavior is irrelevant for most backend types, but there are -backends where a leading \f[C]/\f[R] changes the effective directory, -e.g. -in the \f[C]sftp\f[R] backend paths starting with a \f[C]/\f[R] are -relative to the root of the SSH server and paths without are relative to -the user home directory. -As a result \f[C]sftp:bin\f[R] and \f[C]sftp:/bin\f[R] will share the -same cache folder, even if they represent a different directory on the -SSH server. -.SS Cache and Remote Control (--rc) -.PP -Cache supports the new \f[C]--rc\f[R] mode in rclone and can be remote -controlled through the following end points: By default, the listener is -disabled if you do not add the flag. -.SS rc cache/expire -.PP -Purge a remote from the cache backend. -Supports either a directory or a file. -It supports both encrypted and unencrypted file names if cache is -wrapped by crypt. -.PP -Params: - \f[B]remote\f[R] = path to remote \f[B](required)\f[R] - -\f[B]withData\f[R] = true/false to delete cached data (chunks) as well -\f[I](optional, false by default)\f[R] -.SS Standard options -.PP -Here are the Standard options specific to cache (Cache a remote). -.SS --cache-remote -.PP -Remote to cache. -.PP -Normally should contain a \[aq]:\[aq] and a path, e.g. -\[dq]myremote:path/to/dir\[dq], \[dq]myremote:bucket\[dq] or maybe -\[dq]myremote:\[dq] (not recommended). -.PP -Properties: -.IP \[bu] 2 -Config: remote -.IP \[bu] 2 -Env Var: RCLONE_CACHE_REMOTE -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.SS --cache-plex-url -.PP -The URL of the Plex server. -.PP -Properties: -.IP \[bu] 2 -Config: plex_url -.IP \[bu] 2 -Env Var: RCLONE_CACHE_PLEX_URL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --cache-plex-username -.PP -The username of the Plex user. -.PP -Properties: -.IP \[bu] 2 -Config: plex_username -.IP \[bu] 2 -Env Var: RCLONE_CACHE_PLEX_USERNAME -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --cache-plex-password -.PP -The password of the Plex user. -.PP -\f[B]NB\f[R] Input to this must be obscured - see rclone -obscure (https://rclone.org/commands/rclone_obscure/). -.PP -Properties: -.IP \[bu] 2 -Config: plex_password -.IP \[bu] 2 -Env Var: RCLONE_CACHE_PLEX_PASSWORD -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --cache-chunk-size -.PP -The size of a chunk (partial file data). -.PP -Use lower numbers for slower connections. -If the chunk size is changed, any downloaded chunks will be invalid and -cache-chunk-path will need to be cleared or unexpected EOF errors will -occur. 
-.PP -Properties: -.IP \[bu] 2 -Config: chunk_size -.IP \[bu] 2 -Env Var: RCLONE_CACHE_CHUNK_SIZE -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 5Mi -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]1M\[dq] -.RS 2 -.IP \[bu] 2 -1 MiB -.RE -.IP \[bu] 2 -\[dq]5M\[dq] -.RS 2 -.IP \[bu] 2 -5 MiB -.RE -.IP \[bu] 2 -\[dq]10M\[dq] -.RS 2 -.IP \[bu] 2 -10 MiB -.RE -.RE -.SS --cache-info-age -.PP -How long to cache file structure information (directory listings, file -size, times, etc.). -If all write operations are done through the cache then you can safely -make this value very large as the cache store will also be updated in -real time. -.PP -Properties: -.IP \[bu] 2 -Config: info_age -.IP \[bu] 2 -Env Var: RCLONE_CACHE_INFO_AGE -.IP \[bu] 2 -Type: Duration -.IP \[bu] 2 -Default: 6h0m0s -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]1h\[dq] -.RS 2 -.IP \[bu] 2 -1 hour -.RE -.IP \[bu] 2 -\[dq]24h\[dq] -.RS 2 -.IP \[bu] 2 -24 hours -.RE -.IP \[bu] 2 -\[dq]48h\[dq] -.RS 2 -.IP \[bu] 2 -48 hours -.RE -.RE -.SS --cache-chunk-total-size -.PP -The total size that the chunks can take up on the local disk. -.PP -If the cache exceeds this value then it will start to delete the oldest -chunks until it goes under this value. -.PP -Properties: -.IP \[bu] 2 -Config: chunk_total_size -.IP \[bu] 2 -Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 10Gi -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]500M\[dq] -.RS 2 -.IP \[bu] 2 -500 MiB -.RE -.IP \[bu] 2 -\[dq]1G\[dq] -.RS 2 -.IP \[bu] 2 -1 GiB -.RE -.IP \[bu] 2 -\[dq]10G\[dq] -.RS 2 -.IP \[bu] 2 -10 GiB -.RE -.RE -.SS Advanced options -.PP -Here are the Advanced options specific to cache (Cache a remote). -.SS --cache-plex-token -.PP -The plex token for authentication - auto set normally. -.PP -Properties: -.IP \[bu] 2 -Config: plex_token -.IP \[bu] 2 -Env Var: RCLONE_CACHE_PLEX_TOKEN -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --cache-plex-insecure -.PP -Skip all certificate verification when connecting to the Plex server. -.PP -Properties: -.IP \[bu] 2 -Config: plex_insecure -.IP \[bu] 2 -Env Var: RCLONE_CACHE_PLEX_INSECURE -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --cache-db-path -.PP -Directory to store file structure metadata DB. -.PP -The remote name is used as the DB file name. -.PP -Properties: -.IP \[bu] 2 -Config: db_path -.IP \[bu] 2 -Env Var: RCLONE_CACHE_DB_PATH -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]$HOME/.cache/rclone/cache-backend\[dq] -.SS --cache-chunk-path -.PP -Directory to cache chunk files. -.PP -Path to where partial file data (chunks) are stored locally. -The remote name is appended to the final path. -.PP -This config follows the \[dq]--cache-db-path\[dq]. -If you specify a custom location for \[dq]--cache-db-path\[dq] and -don\[aq]t specify one for \[dq]--cache-chunk-path\[dq] then -\[dq]--cache-chunk-path\[dq] will use the same path as -\[dq]--cache-db-path\[dq]. -.PP -Properties: -.IP \[bu] 2 -Config: chunk_path -.IP \[bu] 2 -Env Var: RCLONE_CACHE_CHUNK_PATH -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]$HOME/.cache/rclone/cache-backend\[dq] -.SS --cache-db-purge -.PP -Clear all the cached data for this remote on start. -.PP -Properties: -.IP \[bu] 2 -Config: db_purge -.IP \[bu] 2 -Env Var: RCLONE_CACHE_DB_PURGE -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --cache-chunk-clean-interval -.PP -How often should the cache perform cleanups of the chunk storage. 
-.PP -The default value should be ok for most people. -If you find that the cache goes over \[dq]cache-chunk-total-size\[dq] -too often then try to lower this value to force it to perform cleanups -more often. -.PP -Properties: -.IP \[bu] 2 -Config: chunk_clean_interval -.IP \[bu] 2 -Env Var: RCLONE_CACHE_CHUNK_CLEAN_INTERVAL -.IP \[bu] 2 -Type: Duration -.IP \[bu] 2 -Default: 1m0s -.SS --cache-read-retries -.PP -How many times to retry a read from a cache storage. -.PP -Since reading from a cache stream is independent from downloading file -data, readers can get to a point where there\[aq]s no more data in the -cache. -Most of the times this can indicate a connectivity issue if cache -isn\[aq]t able to provide file data anymore. -.PP -For really slow connections, increase this to a point where the stream -is able to provide data but your experience will be very stuttering. -.PP -Properties: -.IP \[bu] 2 -Config: read_retries -.IP \[bu] 2 -Env Var: RCLONE_CACHE_READ_RETRIES -.IP \[bu] 2 -Type: int -.IP \[bu] 2 -Default: 10 -.SS --cache-workers -.PP -How many workers should run in parallel to download chunks. -.PP -Higher values will mean more parallel processing (better CPU needed) and -more concurrent requests on the cloud provider. -This impacts several aspects like the cloud provider API limits, more -stress on the hardware that rclone runs on but it also means that -streams will be more fluid and data will be available much more faster -to readers. -.PP -\f[B]Note\f[R]: If the optional Plex integration is enabled then this -setting will adapt to the type of reading performed and the value -specified here will be used as a maximum number of workers to use. -.PP -Properties: -.IP \[bu] 2 -Config: workers -.IP \[bu] 2 -Env Var: RCLONE_CACHE_WORKERS -.IP \[bu] 2 -Type: int -.IP \[bu] 2 -Default: 4 -.SS --cache-chunk-no-memory -.PP -Disable the in-memory cache for storing chunks during streaming. -.PP -By default, cache will keep file data during streaming in RAM as well to -provide it to readers as fast as possible. -.PP -This transient data is evicted as soon as it is read and the number of -chunks stored doesn\[aq]t exceed the number of workers. -However, depending on other settings like \[dq]cache-chunk-size\[dq] and -\[dq]cache-workers\[dq] this footprint can increase if there are -parallel streams too (multiple files being read at the same time). -.PP -If the hardware permits it, use this feature to provide an overall -better performance during streaming but it can also be disabled if RAM -is not available on the local machine. -.PP -Properties: -.IP \[bu] 2 -Config: chunk_no_memory -.IP \[bu] 2 -Env Var: RCLONE_CACHE_CHUNK_NO_MEMORY -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --cache-rps -.PP -Limits the number of requests per second to the source FS (-1 to -disable). -.PP -This setting places a hard limit on the number of requests per second -that cache will be doing to the cloud provider remote and try to respect -that value by setting waits between reads. -.PP -If you find that you\[aq]re getting banned or limited on the cloud -provider through cache and know that a smaller number of requests per -second will allow you to work with it then you can use this setting for -that. -.PP -A good balance of all the other settings should make this setting -useless but it is available to set for more special cases. -.PP -\f[B]NOTE\f[R]: This will limit the number of requests during streams -but other API calls to the cloud provider like directory listings will -still pass. 
-.PP -Properties: -.IP \[bu] 2 -Config: rps -.IP \[bu] 2 -Env Var: RCLONE_CACHE_RPS -.IP \[bu] 2 -Type: int -.IP \[bu] 2 -Default: -1 -.SS --cache-writes -.PP -Cache file data on writes through the FS. -.PP -If you need to read files immediately after you upload them through -cache you can enable this flag to have their data stored in the cache -store at the same time during upload. -.PP -Properties: -.IP \[bu] 2 -Config: writes -.IP \[bu] 2 -Env Var: RCLONE_CACHE_WRITES -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --cache-tmp-upload-path -.PP -Directory to keep temporary files until they are uploaded. -.PP -This is the path where cache will use as a temporary storage for new -files that need to be uploaded to the cloud provider. -.PP -Specifying a value will enable this feature. -Without it, it is completely disabled and files will be uploaded -directly to the cloud provider -.PP -Properties: -.IP \[bu] 2 -Config: tmp_upload_path -.IP \[bu] 2 -Env Var: RCLONE_CACHE_TMP_UPLOAD_PATH -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --cache-tmp-wait-time -.PP -How long should files be stored in local cache before being uploaded. -.PP -This is the duration that a file must wait in the temporary location -\f[I]cache-tmp-upload-path\f[R] before it is selected for upload. -.PP -Note that only one file is uploaded at a time and it can take longer to -start the upload if a queue formed for this purpose. -.PP -Properties: -.IP \[bu] 2 -Config: tmp_wait_time -.IP \[bu] 2 -Env Var: RCLONE_CACHE_TMP_WAIT_TIME -.IP \[bu] 2 -Type: Duration -.IP \[bu] 2 -Default: 15s -.SS --cache-db-wait-time -.PP -How long to wait for the DB to be available - 0 is unlimited. -.PP -Only one process can have the DB open at any one time, so rclone waits -for this duration for the DB to become available before it gives an -error. -.PP -If you set it to 0 then it will wait forever. -.PP -Properties: -.IP \[bu] 2 -Config: db_wait_time -.IP \[bu] 2 -Env Var: RCLONE_CACHE_DB_WAIT_TIME -.IP \[bu] 2 -Type: Duration -.IP \[bu] 2 -Default: 1s -.SS Backend commands -.PP -Here are the commands specific to the cache backend. -.PP -Run them with -.IP -.nf -\f[C] -rclone backend COMMAND remote: -\f[R] -.fi -.PP -The help below will explain what arguments each command takes. -.PP -See the backend (https://rclone.org/commands/rclone_backend/) command -for more info on how to pass options and arguments. -.PP -These can be run on a running backend using the rc command -backend/command (https://rclone.org/rc/#backend-command). -.SS stats -.PP -Print stats on the cache backend in JSON format. -.IP -.nf -\f[C] -rclone backend stats remote: [options] [+] -\f[R] -.fi -.SH Chunker -.PP -The \f[C]chunker\f[R] overlay transparently splits large files into -smaller chunks during upload to wrapped remote and transparently -assembles them back when the file is downloaded. -This allows to effectively overcome size limits imposed by storage -providers. -.SS Configuration -.PP -To use it, first set up the underlying remote following the -configuration instructions for that remote. -You can also use a local pathname instead of a remote. -.PP -First check your chosen remote is working - we\[aq]ll call it -\f[C]remote:path\f[R] here. -Note that anything inside \f[C]remote:path\f[R] will be chunked and -anything outside won\[aq]t. -This means that if you are using a bucket-based remote (e.g. -S3, B2, swift) then you should probably put the bucket in the remote -\f[C]s3:bucket\f[R]. 
-.PP -Now configure \f[C]chunker\f[R] using \f[C]rclone config\f[R]. -We will call this one \f[C]overlay\f[R] to separate it from the -\f[C]remote\f[R] itself. -.IP -.nf -\f[C] -No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -name> overlay -Type of storage to configure. -Choose a number from below, or type in your own value -[snip] -XX / Transparently chunk/split large files - \[rs] \[dq]chunker\[dq] -[snip] -Storage> chunker -Remote to chunk/unchunk. -Normally should contain a \[aq]:\[aq] and a path, e.g. \[dq]myremote:path/to/dir\[dq], -\[dq]myremote:bucket\[dq] or maybe \[dq]myremote:\[dq] (not recommended). Enter a string value. Press Enter for the default (\[dq]\[dq]). -remote> remote:path -Files larger than chunk size will be split in chunks. -Enter a size with suffix K,M,G,T. Press Enter for the default (\[dq]2G\[dq]). -chunk_size> 100M -Choose how chunker handles hash sums. All modes but \[dq]none\[dq] require metadata. -Enter a string value. Press Enter for the default (\[dq]md5\[dq]). Choose a number from below, or type in your own value - 1 / Pass any hash supported by wrapped remote for non-chunked files, return nothing otherwise - \[rs] \[dq]none\[dq] - 2 / MD5 for composite files - \[rs] \[dq]md5\[dq] - 3 / SHA1 for composite files - \[rs] \[dq]sha1\[dq] - 4 / MD5 for all files - \[rs] \[dq]md5all\[dq] - 5 / SHA1 for all files - \[rs] \[dq]sha1all\[dq] - 6 / Copying a file to chunker will request MD5 from the source falling back to SHA1 if unsupported - \[rs] \[dq]md5quick\[dq] - 7 / Similar to \[dq]md5quick\[dq] but prefers SHA1 over MD5 - \[rs] \[dq]sha1quick\[dq] -hash_type> md5 + + 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, GCS, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi + \[rs] \[dq]s3\[dq] + +Storage> s3 + +Choose your S3 provider. +Enter a string value. Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value + 24 / Synology C2 Object Storage + \[rs] (Synology) + +provider> Synology + +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Enter a boolean value (true or false). Press Enter for the default (\[dq]false\[dq]). +Choose a number from below, or type in your own value + 1 / Enter AWS credentials in the next step + \[rs] \[dq]false\[dq] + 2 / Get AWS credentials from the environment (env vars or IAM) + \[rs] \[dq]true\[dq] + +env_auth> 1 + +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a string value. Press Enter for the default (\[dq]\[dq]). + +access_key_id> accesskeyid + +AWS Secret Access Key (password) +Leave blank for anonymous access or runtime credentials. +Enter a string value. Press Enter for the default (\[dq]\[dq]). + +secret_access_key> secretaccesskey + +Region where your data stored. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / Europe Region 1 + \[rs] (eu-001) + 2 / Europe Region 2 + \[rs] (eu-002) + 3 / US Region 1 + \[rs] (us-001) + 4 / US Region 2 + \[rs] (us-002) + 5 / Asia (Taiwan) + \[rs] (tw-001) + +region > 1 + +Option endpoint. +Endpoint for Synology C2 Object Storage API. +Choose a number from below, or type in your own value. 
+Press Enter to leave empty. + 1 / EU Endpoint 1 + \[rs] (eu-001.s3.synologyc2.net) + 2 / US Endpoint 1 + \[rs] (us-001.s3.synologyc2.net) + 3 / TW Endpoint 1 + \[rs] (tw-001.s3.synologyc2.net) + +endpoint> 1 + +Option location_constraint. +Location constraint - must be set to match the Region. +Leave blank if not sure. Used when creating buckets only. +Enter a value. Press Enter to leave empty. +location_constraint> + Edit advanced config? (y/n) y) Yes n) No -y/n> n -Remote config --------------------- -[overlay] -type = chunker -remote = remote:bucket -chunk_size = 100M -hash_type = md5 --------------------- -y) Yes this is OK +y/n> y + +Option no_check_bucket. +If set, don\[aq]t attempt to check the bucket exists or create it. +This can be useful when trying to minimise the number of transactions +rclone does if you know the bucket exists already. +It can also be needed if the user you are using does not have bucket +creation permissions. Before v1.52.0 this would have passed silently +due to a bug. +Enter a boolean value (true or false). Press Enter for the default (true). + +no_check_bucket> true + +Configuration complete. +Options: +- type: s3 +- provider: Synology +- region: eu-001 +- endpoint: eu-001.s3.synologyc2.net +- no_check_bucket: true +Keep this \[dq]syno\[dq] remote? +y) Yes this is OK (default) e) Edit this remote d) Delete this remote + y/e/d> y + +# Backblaze B2 + +B2 is [Backblaze\[aq]s cloud storage system](https://www.backblaze.com/b2/). + +Paths are specified as \[ga]remote:bucket\[ga] (or \[ga]remote:\[ga] for the \[ga]lsd\[ga] +command.) You may put subdirectories in too, e.g. \[ga]remote:bucket/path/to/dir\[ga]. + +## Configuration + +Here is an example of making a b2 configuration. First run + + rclone config + +This will guide you through an interactive setup process. To authenticate +you will either need your Account ID (a short hex number) and Master +Application Key (a long hex number) OR an Application Key, which is the +recommended method. See below for further details on generating and using +an Application Key. \f[R] .fi -.SS Specifying the remote .PP -In normal use, make sure the remote has a \f[C]:\f[R] in. -If you specify the remote without a \f[C]:\f[R] then rclone will use a -local directory of that name. -So if you use a remote of \f[C]/path/to/secret/files\f[R] then rclone -will chunk stuff in that directory. -If you use a remote of \f[C]name\f[R] then rclone will put files in a -directory called \f[C]name\f[R] in the current directory. -.SS Chunking +No remotes found, make a new one? +n) New remote q) Quit config n/q> n name> remote Type of storage to +configure. +Choose a number from below, or type in your own value [snip] XX / +Backblaze B2 \ \[dq]b2\[dq] [snip] Storage> b2 Account ID or Application +Key ID account> 123456789abc Application Key key> +0123456789abcdef0123456789abcdef0123456789 Endpoint for the service - +leave blank normally. +endpoint> Remote config -------------------- [remote] account = +123456789abc key = 0123456789abcdef0123456789abcdef0123456789 endpoint = +-------------------- y) Yes this is OK e) Edit this remote d) Delete +this remote y/e/d> y +.IP +.nf +\f[C] +This remote is called \[ga]remote\[ga] and can now be used like this + +See all buckets + + rclone lsd remote: + +Create a new bucket + + rclone mkdir remote:bucket + +List the contents of a bucket + + rclone ls remote:bucket + +Sync \[ga]/home/local/directory\[ga] to the remote bucket, deleting any +excess files in the bucket. 
+ + rclone sync --interactive /home/local/directory remote:bucket + +### Application Keys + +B2 supports multiple [Application Keys for different access permission +to B2 Buckets](https://www.backblaze.com/b2/docs/application_keys.html). + +You can use these with rclone too; you will need to use rclone version 1.43 +or later. + +Follow Backblaze\[aq]s docs to create an Application Key with the required +permission and add the \[ga]applicationKeyId\[ga] as the \[ga]account\[ga] and the +\[ga]Application Key\[ga] itself as the \[ga]key\[ga]. + +Note that you must put the _applicationKeyId_ as the \[ga]account\[ga] \[en] you +can\[aq]t use the master Account ID. If you try then B2 will return 401 +errors. + +### --fast-list + +This remote supports \[ga]--fast-list\[ga] which allows you to use fewer +transactions in exchange for more memory. See the [rclone +docs](https://rclone.org/docs/#fast-list) for more details. + +### Modified time + +The modified time is stored as metadata on the object as +\[ga]X-Bz-Info-src_last_modified_millis\[ga] as milliseconds since 1970-01-01 +in the Backblaze standard. Other tools should be able to use this as +a modified time. + +Modified times are used in syncing and are fully supported. Note that +if a modification time needs to be updated on an object then it will +create a new version of the object. + +### Restricted filename characters + +In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) +the following characters are also replaced: + +| Character | Value | Replacement | +| --------- |:-----:|:-----------:| +| \[rs] | 0x5C | \[uFF3C] | + +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), +as they can\[aq]t be used in JSON strings. + +Note that in 2020-05 Backblaze started allowing \[rs] characters in file +names. Rclone hasn\[aq]t changed its encoding as this could cause syncs to +re-transfer files. If you want rclone not to replace \[rs] then see the +\[ga]--b2-encoding\[ga] flag below and remove the \[ga]BackSlash\[ga] from the +string. This can be set in the config. + +### SHA1 checksums + +The SHA1 checksums of the files are checked on upload and download and +will be used in the syncing process. + +Large files (bigger than the limit in \[ga]--b2-upload-cutoff\[ga]) which are +uploaded in chunks will store their SHA1 on the object as +\[ga]X-Bz-Info-large_file_sha1\[ga] as recommended by Backblaze. + +For a large file to be uploaded with an SHA1 checksum, the source +needs to support SHA1 checksums. The local disk supports SHA1 +checksums so large file transfers from local disk will have an SHA1. +See [the overview](https://rclone.org/overview/#features) for exactly which remotes +support SHA1. + +Sources which don\[aq]t support SHA1, in particular \[ga]crypt\[ga] will upload +large files without SHA1 checksums. This may be fixed in the future +(see [#1767](https://github.com/rclone/rclone/issues/1767)). + +Files sizes below \[ga]--b2-upload-cutoff\[ga] will always have an SHA1 +regardless of the source. + +### Transfers + +Backblaze recommends that you do lots of transfers simultaneously for +maximum speed. In tests from my SSD equipped laptop the optimum +setting is about \[ga]--transfers 32\[ga] though higher numbers may be used +for a slight speed improvement. The optimum number for you may vary +depending on your hardware, how big the files are, how much you want +to load your computer, etc. 
The default of \[ga]--transfers 4\[ga] is +definitely too low for Backblaze B2 though. + +Note that uploading big files (bigger than 200 MiB by default) will use +a 96 MiB RAM buffer by default. There can be at most \[ga]--transfers\[ga] of +these in use at any moment, so this sets the upper limit on the memory +used. + +### Versions + +When rclone uploads a new version of a file it creates a [new version +of it](https://www.backblaze.com/b2/docs/file_versions.html). +Likewise when you delete a file, the old version will be marked hidden +and still be available. Conversely, you may opt in to a \[dq]hard delete\[dq] +of files with the \[ga]--b2-hard-delete\[ga] flag which would permanently remove +the file instead of hiding it. + +Old versions of files, where available, are visible using the +\[ga]--b2-versions\[ga] flag. + +It is also possible to view a bucket as it was at a certain point in time, +using the \[ga]--b2-version-at\[ga] flag. This will show the file versions as they +were at that time, showing files that have been deleted afterwards, and +hiding files that were created since. + +If you wish to remove all the old versions then you can use the +\[ga]rclone cleanup remote:bucket\[ga] command which will delete all the old +versions of files, leaving the current ones intact. You can also +supply a path and only old versions under that path will be deleted, +e.g. \[ga]rclone cleanup remote:bucket/path/to/stuff\[ga]. + +Note that \[ga]cleanup\[ga] will remove partially uploaded files from the bucket +if they are more than a day old. + +When you \[ga]purge\[ga] a bucket, the current and the old versions will be +deleted then the bucket will be deleted. + +However \[ga]delete\[ga] will cause the current versions of the files to +become hidden old versions. + +Here is a session showing the listing and retrieval of an old +version followed by a \[ga]cleanup\[ga] of the old versions. + +Show current version and all the versions with \[ga]--b2-versions\[ga] flag. +\f[R] +.fi .PP -When rclone starts a file upload, chunker checks the file size. -If it doesn\[aq]t exceed the configured chunk size, chunker will just -pass the file to the wrapped remote. -If a file is large, chunker will transparently cut data in pieces with -temporary names and stream them one by one, on the fly. -Each data chunk will contain the specified number of bytes, except for -the last one which may have less data. -If file size is unknown in advance (this is called a streaming upload), -chunker will internally create a temporary copy, record its size and -repeat the above process. +$ rclone -q ls b2:cleanup-test 9 one.txt .PP +$ rclone -q --b2-versions ls b2:cleanup-test 9 one.txt 8 +one-v2016-07-04-141032-000.txt 16 one-v2016-07-04-141003-000.txt 15 +one-v2016-07-02-155621-000.txt +.IP +.nf +\f[C] +Retrieve an old version +\f[R] +.fi +.PP +$ rclone -q --b2-versions copy +b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp +.PP +$ ls -l /tmp/one-v2016-07-04-141003-000.txt -rw-rw-r-- 1 ncw ncw 16 Jul +2 17:46 /tmp/one-v2016-07-04-141003-000.txt +.IP +.nf +\f[C] +Clean up all the old versions and show that they\[aq]ve gone. +\f[R] +.fi +.PP +$ rclone -q cleanup b2:cleanup-test +.PP +$ rclone -q ls b2:cleanup-test 9 one.txt +.PP +$ rclone -q --b2-versions ls b2:cleanup-test 9 one.txt +.IP +.nf +\f[C] +#### Versions naming caveat + +When using \[ga]--b2-versions\[ga] flag rclone is relying on the file name +to work out whether the objects are versions or not. 
Versions\[aq] names +are created by inserting timestamp between file name and its extension. +\f[R] +.fi +.IP +.nf +\f[C] + 9 file.txt + 8 file-v2023-07-17-161032-000.txt + 16 file-v2023-06-15-141003-000.txt +\f[R] +.fi +.IP +.nf +\f[C] +If there are real files present with the same names as versions, then +behaviour of \[ga]--b2-versions\[ga] can be unpredictable. + +### Data usage + +It is useful to know how many requests are sent to the server in different scenarios. + +All copy commands send the following 4 requests: +\f[R] +.fi +.PP +/b2api/v1/b2_authorize_account /b2api/v1/b2_create_bucket +/b2api/v1/b2_list_buckets /b2api/v1/b2_list_file_names +.IP +.nf +\f[C] +The \[ga]b2_list_file_names\[ga] request will be sent once for every 1k files +in the remote path, providing the checksum and modification time of +the listed files. As of version 1.33 issue +[#818](https://github.com/rclone/rclone/issues/818) causes extra requests +to be sent when using B2 with Crypt. When a copy operation does not +require any files to be uploaded, no more requests will be sent. + +Uploading files that do not require chunking, will send 2 requests per +file upload: +\f[R] +.fi +.PP +/b2api/v1/b2_get_upload_url /b2api/v1/b2_upload_file/ +.IP +.nf +\f[C] +Uploading files requiring chunking, will send 2 requests (one each to +start and finish the upload) and another 2 requests for each chunk: +\f[R] +.fi +.PP +/b2api/v1/b2_start_large_file /b2api/v1/b2_get_upload_part_url +/b2api/v1/b2_upload_part/ /b2api/v1/b2_finish_large_file +.IP +.nf +\f[C] +#### Versions + +Versions can be viewed with the \[ga]--b2-versions\[ga] flag. When it is set +rclone will show and act on older versions of files. For example + +Listing without \[ga]--b2-versions\[ga] +\f[R] +.fi +.PP +$ rclone -q ls b2:cleanup-test 9 one.txt +.IP +.nf +\f[C] +And with +\f[R] +.fi +.PP +$ rclone -q --b2-versions ls b2:cleanup-test 9 one.txt 8 +one-v2016-07-04-141032-000.txt 16 one-v2016-07-04-141003-000.txt 15 +one-v2016-07-02-155621-000.txt +.IP +.nf +\f[C] +Showing that the current version is unchanged but older versions can +be seen. These have the UTC date that they were uploaded to the +server to the nearest millisecond appended to them. + +Note that when using \[ga]--b2-versions\[ga] no file write operations are +permitted, so you can\[aq]t upload files or delete them. + +### B2 and rclone link + +Rclone supports generating file share links for private B2 buckets. +They can either be for a file for example: +\f[R] +.fi +.PP +\&./rclone link B2:bucket/path/to/file.txt +https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx +.IP +.nf +\f[C] +or if run on a directory you will get: +\f[R] +.fi +.PP +\&./rclone link B2:bucket/path +https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx +.IP +.nf +\f[C] +you can then use the authorization token (the part of the url from the + \[ga]?Authorization=\[ga] on) on any file path under that directory. For example: +\f[R] +.fi +.PP +https://f002.backblazeb2.com/file/bucket/path/to/file1?Authorization=xxxxxxxx +https://f002.backblazeb2.com/file/bucket/path/file2?Authorization=xxxxxxxx +https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxxx +.IP +.nf +\f[C] + +### Standard options + +Here are the Standard options specific to b2 (Backblaze B2). + +#### --b2-account + +Account ID or Application Key ID. + +Properties: + +- Config: account +- Env Var: RCLONE_B2_ACCOUNT +- Type: string +- Required: true + +#### --b2-key + +Application Key. 
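+
+As an illustrative sketch (reusing the example account ID and key from
+the configuration session above), the credentials can also be supplied
+through the environment for a one-off listing of your buckets, using
+the on-the-fly \[ga]:b2:\[ga] remote syntax:
+
+    RCLONE_B2_ACCOUNT=123456789abc RCLONE_B2_KEY=0123456789abcdef0123456789abcdef0123456789 rclone lsd :b2: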
+
+Properties:
+
+- Config: key
+- Env Var: RCLONE_B2_KEY
+- Type: string
+- Required: true
+
+#### --b2-hard-delete
+
+Permanently delete files on remote removal, otherwise hide files.
+
+Properties:
+
+- Config: hard_delete
+- Env Var: RCLONE_B2_HARD_DELETE
+- Type: bool
+- Default: false
+
+### Advanced options
+
+Here are the Advanced options specific to b2 (Backblaze B2).
+
+#### --b2-endpoint
+
+Endpoint for the service.
+
+Leave blank normally.
+
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_B2_ENDPOINT
+- Type: string
+- Required: false
+
+#### --b2-test-mode
+
+A flag string for X-Bz-Test-Mode header for debugging.
+
+This is for debugging purposes only. Setting it to one of the strings
+below will cause b2 to return specific errors:
+
+  * \[dq]fail_some_uploads\[dq]
+  * \[dq]expire_some_account_authorization_tokens\[dq]
+  * \[dq]force_cap_exceeded\[dq]
+
+These will be set in the \[dq]X-Bz-Test-Mode\[dq] header which is documented
+in the [b2 integrations checklist](https://www.backblaze.com/b2/docs/integration_checklist.html).
+
+Properties:
+
+- Config: test_mode
+- Env Var: RCLONE_B2_TEST_MODE
+- Type: string
+- Required: false
+
+#### --b2-versions
+
+Include old versions in directory listings.
+
+Note that when using this no file write operations are permitted,
+so you can\[aq]t upload files or delete them.
+
+Properties:
+
+- Config: versions
+- Env Var: RCLONE_B2_VERSIONS
+- Type: bool
+- Default: false
+
+#### --b2-version-at
+
+Show file versions as they were at the specified time.
+
+Note that when using this no file write operations are permitted,
+so you can\[aq]t upload files or delete them.
+
+Properties:
+
+- Config: version_at
+- Env Var: RCLONE_B2_VERSION_AT
+- Type: Time
+- Default: off
+
+#### --b2-upload-cutoff
+
+Cutoff for switching to chunked upload.
+
+Files above this size will be uploaded in chunks of \[dq]--b2-chunk-size\[dq].
+
+This value should be set no larger than 4.657 GiB (== 5 GB).
+
+Properties:
+
+- Config: upload_cutoff
+- Env Var: RCLONE_B2_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 200Mi
+
+#### --b2-copy-cutoff
+
+Cutoff for switching to multipart copy.
+
+Any files larger than this that need to be server-side copied will be
+copied in chunks of this size.
+
+The minimum is 0 and the maximum is 4.6 GiB.
+
+Properties:
+
+- Config: copy_cutoff
+- Env Var: RCLONE_B2_COPY_CUTOFF
+- Type: SizeSuffix
+- Default: 4Gi
+
+#### --b2-chunk-size
+
+Upload chunk size.
+
+When uploading large files, chunk the file into this size.
+
+Must fit in memory. These chunks are buffered in memory and there
+might be a maximum of \[dq]--transfers\[dq] chunks in progress at once.
+
+5,000,000 Bytes is the minimum size.
+
+Properties:
+
+- Config: chunk_size
+- Env Var: RCLONE_B2_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 96Mi
+
+#### --b2-upload-concurrency
+
+Concurrency for multipart uploads.
+
+This is the number of chunks of the same file that are uploaded
+concurrently.
+
+Note that chunks are stored in memory and there may be up to
+\[dq]--transfers\[dq] * \[dq]--b2-upload-concurrency\[dq] chunks stored at once
+in memory.
+
+Properties:
+
+- Config: upload_concurrency
+- Env Var: RCLONE_B2_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 16
+
+#### --b2-disable-checksum
+
+Disable checksums for large (> upload cutoff) files.
+
+Normally rclone will calculate the SHA1 checksum of the input before
+uploading it so it can add it to metadata on the object.
This is great +for data integrity checking but can cause long delays for large files +to start uploading. + +Properties: + +- Config: disable_checksum +- Env Var: RCLONE_B2_DISABLE_CHECKSUM +- Type: bool +- Default: false + +#### --b2-download-url + +Custom endpoint for downloads. + +This is usually set to a Cloudflare CDN URL as Backblaze offers +free egress for data downloaded through the Cloudflare network. +Rclone works with private buckets by sending an \[dq]Authorization\[dq] header. +If the custom endpoint rewrites the requests for authentication, +e.g., in Cloudflare Workers, this header needs to be handled properly. +Leave blank if you want to use the endpoint provided by Backblaze. + +The URL provided here SHOULD have the protocol and SHOULD NOT have +a trailing slash or specify the /file/bucket subpath as rclone will +request files with \[dq]{download_url}/file/{bucket_name}/{path}\[dq]. + +Example: +> https://mysubdomain.mydomain.tld +(No trailing \[dq]/\[dq], \[dq]file\[dq] or \[dq]bucket\[dq]) + +Properties: + +- Config: download_url +- Env Var: RCLONE_B2_DOWNLOAD_URL +- Type: string +- Required: false + +#### --b2-download-auth-duration + +Time before the authorization token will expire in s or suffix ms|s|m|h|d. + +The duration before the download authorization token will expire. +The minimum value is 1 second. The maximum value is one week. + +Properties: + +- Config: download_auth_duration +- Env Var: RCLONE_B2_DOWNLOAD_AUTH_DURATION +- Type: Duration +- Default: 1w + +#### --b2-memory-pool-flush-time + +How often internal memory buffer pools will be flushed. (no longer used) + +Properties: + +- Config: memory_pool_flush_time +- Env Var: RCLONE_B2_MEMORY_POOL_FLUSH_TIME +- Type: Duration +- Default: 1m0s + +#### --b2-memory-pool-use-mmap + +Whether to use mmap buffers in internal memory pool. (no longer used) + +Properties: + +- Config: memory_pool_use_mmap +- Env Var: RCLONE_B2_MEMORY_POOL_USE_MMAP +- Type: bool +- Default: false + +#### --b2-encoding + +The encoding for the backend. + +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + +Properties: + +- Config: encoding +- Env Var: RCLONE_B2_ENCODING +- Type: MultiEncoder +- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot + + + +## Limitations + +\[ga]rclone about\[ga] is not supported by the B2 backend. Backends without +this capability cannot determine free space for an rclone mount or +use policy \[ga]mfs\[ga] (most free space) as a member of an rclone union +remote. + +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) + +# Box + +Paths are specified as \[ga]remote:path\[ga] + +Paths may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga]. + +The initial setup for Box involves getting a token from Box which you +can do either in your browser, or with a config.json downloaded from Box +to use JWT authentication. \[ga]rclone config\[ga] walks you through it. + +## Configuration + +Here is an example of how to make a remote called \[ga]remote\[ga]. First run: + + rclone config + +This will guide you through an interactive setup process: +\f[R] +.fi +.PP +No remotes found, make a new one? +n) New remote s) Set configuration password q) Quit config n/s/q> n +name> remote Type of storage to configure. 
+Choose a number from below, or type in your own value [snip] XX / Box
+\ \[dq]box\[dq] [snip] Storage> box Box App Client Id - leave blank
+normally.
+client_id> Box App Client Secret - leave blank normally.
+client_secret> Box App config.json location Leave blank normally.
+Enter a string value.
+Press Enter for the default (\[dq]\[dq]).
+box_config_file> Box App Primary Access Token Leave blank normally.
+Enter a string value.
+Press Enter for the default (\[dq]\[dq]).
+access_token>
+.PP
+Enter a string value.
+Press Enter for the default (\[dq]user\[dq]).
+Choose a number from below, or type in your own value 1 / Rclone should
+act on behalf of a user \ \[dq]user\[dq] 2 / Rclone should act on behalf
+of a service account \ \[dq]enterprise\[dq] box_sub_type> Remote config
+Use web browser to automatically authenticate rclone with remote?
+* Say Y if the machine running rclone has a web browser you can use *
+Say N if running rclone on a (remote) machine without web browser access
+If not sure try Y.
+If Y failed, try N.
+y) Yes n) No y/n> y If your browser doesn\[aq]t open automatically go to
+the following link: http://127.0.0.1:53682/auth Log in and authorize
+rclone for access Waiting for code...
+Got code -------------------- [remote] client_id = client_secret = token
+=
+{\[dq]access_token\[dq]:\[dq]XXX\[dq],\[dq]token_type\[dq]:\[dq]bearer\[dq],\[dq]refresh_token\[dq]:\[dq]XXX\[dq],\[dq]expiry\[dq]:\[dq]XXX\[dq]}
+-------------------- y) Yes this is OK e) Edit this remote d) Delete
+this remote y/e/d> y
+.IP
+.nf
+\f[C]
+See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
+machine with no Internet browser available.
+
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from Box. This only runs from the moment it opens
+your browser to the moment you get back the verification code. This
+is on \[ga]http://127.0.0.1:53682/\[ga] and it may require you to unblock
+it temporarily if you are running a host firewall.
+
+Once configured you can then use \[ga]rclone\[ga] like this,
+
+List directories in top level of your Box
+
+    rclone lsd remote:
+
+List all the files in your Box
+
+    rclone ls remote:
+
+To copy a local directory to a Box directory called backup
+
+    rclone copy /home/source remote:backup
+
+### Using rclone with an Enterprise account with SSO
+
+If you have an \[dq]Enterprise\[dq] account type with Box with single sign on
+(SSO), you need to create a password to use Box with rclone. This can
+be done at your Enterprise Box account by going to Settings, \[dq]Account\[dq]
+Tab, and then set the password in the \[dq]Authentication\[dq] field.
+
+Once you have done this, you can set up your Enterprise Box account
+using the same procedure detailed above, using the password you
+have just set.
+
+### Invalid refresh token
+
+According to the [box docs](https://developer.box.com/v2.0/docs/oauth-20#section-6-using-the-access-and-refresh-tokens):
+
+> Each refresh_token is valid for one use in 60 days.
+
+This means that if you
+
+  * Don\[aq]t use the box remote for 60 days
+  * Copy the config file with a box refresh token in and use it in two places
+  * Get an error on a token refresh
+
+then rclone will return an error which includes the text \[ga]Invalid
+refresh token\[ga].
+
+To fix this you will need to use oauth2 again to update the refresh
+token.
You can use the methods in [the remote setup +docs](https://rclone.org/remote_setup/), bearing in mind that if you use the copy the +config file method, you should not use that remote on the computer you +did the authentication on. + +Here is how to do it. +\f[R] +.fi +.PP +$ rclone config Current remotes: +.PP +Name Type ==== ==== remote box +.IP "e)" 3 +Edit existing remote +.IP "f)" 3 +New remote +.IP "g)" 3 +Delete remote +.IP "h)" 3 +Rename remote +.IP "i)" 3 +Copy remote +.IP "j)" 3 +Set configuration password +.IP "k)" 3 +Quit config e/n/d/r/c/s/q> e Choose a number from below, or type in an +existing value 1 > remote remote> remote -------------------- [remote] +type = box token = +{\[dq]access_token\[dq]:\[dq]XXX\[dq],\[dq]token_type\[dq]:\[dq]bearer\[dq],\[dq]refresh_token\[dq]:\[dq]XXX\[dq],\[dq]expiry\[dq]:\[dq]2017-07-08T23:40:08.059167677+01:00\[dq]} +-------------------- Edit remote Value \[dq]client_id\[dq] = \[dq]\[dq] +Edit? +(y/n)> +.IP "l)" 3 +Yes +.IP "m)" 3 +No y/n> n Value \[dq]client_secret\[dq] = \[dq]\[dq] Edit? +(y/n)> +.IP "n)" 3 +Yes +.IP "o)" 3 +No y/n> n Remote config Already have a token - refresh? +.IP "p)" 3 +Yes +.IP "q)" 3 +No y/n> y Use web browser to automatically authenticate rclone with +remote? +.IP \[bu] 2 +Say Y if the machine running rclone has a web browser you can use +.IP \[bu] 2 +Say N if running rclone on a (remote) machine without web browser access +If not sure try Y. +If Y failed, try N. +.IP "y)" 3 +Yes +.IP "z)" 3 +No y/n> y If your browser doesn\[aq]t open automatically go to the +following link: http://127.0.0.1:53682/auth Log in and authorize rclone +for access Waiting for code... +Got code -------------------- [remote] type = box token = +{\[dq]access_token\[dq]:\[dq]YYY\[dq],\[dq]token_type\[dq]:\[dq]bearer\[dq],\[dq]refresh_token\[dq]:\[dq]YYY\[dq],\[dq]expiry\[dq]:\[dq]2017-07-23T12:22:29.259137901+01:00\[dq]} +-------------------- +.IP "a)" 3 +Yes this is OK +.IP "b)" 3 +Edit this remote +.IP "c)" 3 +Delete this remote y/e/d> y +.IP +.nf +\f[C] +### Modified time and hashes + +Box allows modification times to be set on objects accurate to 1 +second. These will be used to detect whether objects need syncing or +not. + +Box supports SHA1 type hashes, so you can use the \[ga]--checksum\[ga] +flag. + +### Restricted filename characters + +In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) +the following characters are also replaced: + +| Character | Value | Replacement | +| --------- |:-----:|:-----------:| +| \[rs] | 0x5C | \[uFF3C] | + +File names can also not end with the following characters. +These only get replaced if they are the last character in the name: + +| Character | Value | Replacement | +| --------- |:-----:|:-----------:| +| SP | 0x20 | \[u2420] | + +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), +as they can\[aq]t be used in JSON strings. + +### Transfers + +For files above 50 MiB rclone will use a chunked transfer. Rclone will +upload up to \[ga]--transfers\[ga] chunks at the same time (shared among all +the multipart uploads). Chunks are buffered in memory and are +normally 8 MiB so increasing \[ga]--transfers\[ga] will increase memory use. + +### Deleting files + +Depending on the enterprise settings for your user, the item will +either be actually deleted from Box or moved to the trash. 
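+
+For example (a sketch, with a hypothetical path), removing a single
+file with
+
+    rclone deletefile remote:backup/old-report.docx
+
+will either delete it permanently or move it to the trash, depending
+on those enterprise settings.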
+
+Emptying the trash is supported via the rclone cleanup command, however
+this deletes every trashed file and folder individually, so it
+may take a very long time.
+Emptying the trash via the WebUI does not have this limitation,
+so it is advised to empty the trash via the WebUI.
+
+### Root folder ID
+
+You can set the \[ga]root_folder_id\[ga] for rclone. This is the directory
+(identified by its \[ga]Folder ID\[ga]) that rclone considers to be the root
+of your Box drive.
+
+Normally you will leave this blank and rclone will determine the
+correct root to use itself.
+
+However you can set this to restrict rclone to a specific folder
+hierarchy.
+
+In order to do this you will have to find the \[ga]Folder ID\[ga] of the
+directory you wish rclone to display. This will be the last segment
+of the URL when you open the relevant folder in the Box web
+interface.
+
+So if the folder you want rclone to use has a URL which looks like
+\[ga]https://app.box.com/folder/11xxxxxxxxx8\[ga]
+in the browser, then you use \[ga]11xxxxxxxxx8\[ga] as
+the \[ga]root_folder_id\[ga] in the config.
+
+
+### Standard options
+
+Here are the Standard options specific to box (Box).
+
+#### --box-client-id
+
+OAuth Client Id.
+
+Leave blank normally.
+
+Properties:
+
+- Config: client_id
+- Env Var: RCLONE_BOX_CLIENT_ID
+- Type: string
+- Required: false
+
+#### --box-client-secret
+
+OAuth Client Secret.
+
+Leave blank normally.
+
+Properties:
+
+- Config: client_secret
+- Env Var: RCLONE_BOX_CLIENT_SECRET
+- Type: string
+- Required: false
+
+#### --box-box-config-file
+
+Box App config.json location
+
+Leave blank normally.
+
+Leading \[ga]\[ti]\[ga] will be expanded in the file name as will environment variables such as \[ga]${RCLONE_CONFIG_DIR}\[ga].
+
+Properties:
+
+- Config: box_config_file
+- Env Var: RCLONE_BOX_BOX_CONFIG_FILE
+- Type: string
+- Required: false
+
+#### --box-access-token
+
+Box App Primary Access Token
+
+Leave blank normally.
+
+Properties:
+
+- Config: access_token
+- Env Var: RCLONE_BOX_ACCESS_TOKEN
+- Type: string
+- Required: false
+
+#### --box-box-sub-type
+
+
+
+Properties:
+
+- Config: box_sub_type
+- Env Var: RCLONE_BOX_BOX_SUB_TYPE
+- Type: string
+- Default: \[dq]user\[dq]
+- Examples:
+    - \[dq]user\[dq]
+        - Rclone should act on behalf of a user.
+    - \[dq]enterprise\[dq]
+        - Rclone should act on behalf of a service account.
+
+### Advanced options
+
+Here are the Advanced options specific to box (Box).
+
+#### --box-token
+
+OAuth Access Token as a JSON blob.
+
+Properties:
+
+- Config: token
+- Env Var: RCLONE_BOX_TOKEN
+- Type: string
+- Required: false
+
+#### --box-auth-url
+
+Auth server URL.
+
+Leave blank to use the provider defaults.
+
+Properties:
+
+- Config: auth_url
+- Env Var: RCLONE_BOX_AUTH_URL
+- Type: string
+- Required: false
+
+#### --box-token-url
+
+Token server url.
+
+Leave blank to use the provider defaults.
+
+Properties:
+
+- Config: token_url
+- Env Var: RCLONE_BOX_TOKEN_URL
+- Type: string
+- Required: false
+
+#### --box-root-folder-id
+
+Fill in for rclone to use a non root folder as its starting point.
+
+Properties:
+
+- Config: root_folder_id
+- Env Var: RCLONE_BOX_ROOT_FOLDER_ID
+- Type: string
+- Default: \[dq]0\[dq]
+
+#### --box-upload-cutoff
+
+Cutoff for switching to multipart upload (>= 50 MiB).
+
+Properties:
+
+- Config: upload_cutoff
+- Env Var: RCLONE_BOX_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 50Mi
+
+#### --box-commit-retries
+
+Max number of times to try committing a multipart file.
+
+Properties:
+
+- Config: commit_retries
+- Env Var: RCLONE_BOX_COMMIT_RETRIES
+- Type: int
+- Default: 100
+
+#### --box-list-chunk
+
+Size of listing chunk 1-1000.
+
+Properties:
+
+- Config: list_chunk
+- Env Var: RCLONE_BOX_LIST_CHUNK
+- Type: int
+- Default: 1000
+
+#### --box-owned-by
+
+Only show items owned by the login (email address) passed in.
+
+Properties:
+
+- Config: owned_by
+- Env Var: RCLONE_BOX_OWNED_BY
+- Type: string
+- Required: false
+
+#### --box-impersonate
+
+Impersonate this user ID when using a service account.
+
+Setting this flag allows rclone, when using a JWT service account, to
+act on behalf of another user by setting the as-user header.
+
+The user ID is the Box identifier for a user. User IDs can be found for
+any user via the GET /users endpoint, which is only available to
+admins, or by calling the GET /users/me endpoint with an authenticated
+user session.
+
+See: https://developer.box.com/guides/authentication/jwt/as-user/
+
+
+Properties:
+
+- Config: impersonate
+- Env Var: RCLONE_BOX_IMPERSONATE
+- Type: string
+- Required: false
+
+#### --box-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_BOX_ENCODING
+- Type: MultiEncoder
+- Default: Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot
+
+
+
+## Limitations
+
+Note that Box is case insensitive so you can\[aq]t have a file called
+\[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq].
+
+Box file names can\[aq]t have the \[ga]\[rs]\[ga] character in. rclone maps this to
+and from an identical looking unicode equivalent \[ga]\[uFF3C]\[ga] (U+FF3C Fullwidth
+Reverse Solidus).
+
+Box only supports filenames up to 255 characters in length.
+
+Box has [API rate limits](https://developer.box.com/guides/api-calls/permissions-and-errors/rate-limits/) that sometimes reduce the speed of rclone.
+
+\[ga]rclone about\[ga] is not supported by the Box backend. Backends without
+this capability cannot determine free space for an rclone mount or
+use policy \[ga]mfs\[ga] (most free space) as a member of an rclone union
+remote.
+
+See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
+
+## Get your own Box App ID
+
+Here is how to create your own Box App ID for rclone:
+
+1. Go to the [Box Developer Console](https://app.box.com/developers/console)
+and login, then click \[ga]My Apps\[ga] on the sidebar. Click \[ga]Create New App\[ga]
+and select \[ga]Custom App\[ga].
+
+2. In the first screen on the box that pops up, you can pretty much enter
+whatever you want. The \[ga]App Name\[ga] can be whatever. For \[ga]Purpose\[ga] choose
+automation to avoid having to fill out anything else. Click \[ga]Next\[ga].
+
+3. In the second screen of the creation screen, select
+\[ga]User Authentication (OAuth 2.0)\[ga]. Then click \[ga]Create App\[ga].
+
+4. You should now be on the \[ga]Configuration\[ga] tab of your new app. If not,
+click on it at the top of the webpage. Copy down \[ga]Client ID\[ga]
+and \[ga]Client Secret\[ga], you\[aq]ll need those for rclone.
+
+5. Under \[dq]OAuth 2.0 Redirect URI\[dq], add \[ga]http://127.0.0.1:53682/\[ga]
+
+6. For \[ga]Application Scopes\[ga], select \[ga]Read all files and folders stored in Box\[ga]
+and \[ga]Write all files and folders stored in box\[ga] (assuming you want to do both).
+Leave others unchecked.
Click \[ga]Save Changes\[ga] at the top right. + +# Cache + +The \[ga]cache\[ga] remote wraps another existing remote and stores file structure +and its data for long running tasks like \[ga]rclone mount\[ga]. + +## Status + +The cache backend code is working but it currently doesn\[aq]t +have a maintainer so there are [outstanding bugs](https://github.com/rclone/rclone/issues?q=is%3Aopen+is%3Aissue+label%3Abug+label%3A%22Remote%3A+Cache%22) which aren\[aq]t getting fixed. + +The cache backend is due to be phased out in favour of the VFS caching +layer eventually which is more tightly integrated into rclone. + +Until this happens we recommend only using the cache backend if you +find you can\[aq]t work without it. There are many docs online describing +the use of the cache backend to minimize API hits and by-and-large +these are out of date and the cache backend isn\[aq]t needed in those +scenarios any more. + +## Configuration + +To get started you just need to have an existing remote which can be configured +with \[ga]cache\[ga]. + +Here is an example of how to make a remote called \[ga]test-cache\[ga]. First run: + + rclone config + +This will guide you through an interactive setup process: +\f[R] +.fi +.PP +No remotes found, make a new one? +n) New remote r) Rename remote c) Copy remote s) Set configuration +password q) Quit config n/r/c/s/q> n name> test-cache Type of storage to +configure. +Choose a number from below, or type in your own value [snip] XX / Cache +a remote \ \[dq]cache\[dq] [snip] Storage> cache Remote to cache. +Normally should contain a \[aq]:\[aq] and a path, e.g. +\[dq]myremote:path/to/dir\[dq], \[dq]myremote:bucket\[dq] or maybe +\[dq]myremote:\[dq] (not recommended). +remote> local:/test Optional: The URL of the Plex server plex_url> +http://127.0.0.1:32400 Optional: The username of the Plex user +plex_username> dummyusername Optional: The password of the Plex user y) +Yes type in my own password g) Generate random password n) No leave this +optional password blank y/g/n> y Enter the password: password: Confirm +the password: password: The size of a chunk. +Lower value good for slow connections but can affect seamless reading. +Default: 5M Choose a number from below, or type in your own value 1 / 1 +MiB \ \[dq]1M\[dq] 2 / 5 MiB \ \[dq]5M\[dq] 3 / 10 MiB \ \[dq]10M\[dq] +chunk_size> 2 How much time should object info (file size, file hashes, +etc.) be stored in cache. +Use a very high value if you don\[aq]t plan on changing the source FS +from outside the cache. +Accepted units are: \[dq]s\[dq], \[dq]m\[dq], \[dq]h\[dq]. +Default: 5m Choose a number from below, or type in your own value 1 / 1 +hour \ \[dq]1h\[dq] 2 / 24 hours \ \[dq]24h\[dq] 3 / 24 hours +\ \[dq]48h\[dq] info_age> 2 The maximum size of stored chunks. +When the storage grows beyond this size, the oldest chunks will be +deleted. 
+Default: 10G Choose a number from below, or type in your own value 1 /
+500 MiB \ \[dq]500M\[dq] 2 / 1 GiB \ \[dq]1G\[dq] 3 / 10 GiB
+\ \[dq]10G\[dq] chunk_total_size> 3 Remote config --------------------
+[test-cache] remote = local:/test plex_url = http://127.0.0.1:32400
+plex_username = dummyusername plex_password = *** ENCRYPTED ***
+chunk_size = 5M info_age = 48h chunk_total_size = 10G
+.IP
+.nf
+\f[C]
+You can then use it like this,
+
+List directories in top level of your drive
+
+    rclone lsd test-cache:
+
+List all the files in your drive
+
+    rclone ls test-cache:
+
+To start a cached mount
+
+    rclone mount --allow-other test-cache: /var/tmp/test-cache
+
+### Write Features ###
+
+### Offline uploading ###
+
+In an effort to make writing through cache more reliable, the backend
+now supports this feature which can be activated by specifying a
+\[ga]cache-tmp-upload-path\[ga].
+
+A file goes through these states when using this feature:
+
+1. An upload is started (usually by copying a file on the cache remote)
+2. When the copy to the temporary location is complete the file is part
+of the cached remote and looks and behaves like any other file (reading included)
+3. After \[ga]cache-tmp-wait-time\[ga] passes and the file is next in line, \[ga]rclone move\[ga]
+is used to move the file to the cloud provider
+4. Reading the file still works during the upload but most modifications on it will be prohibited
+5. Once the move is complete the file is unlocked for modifications as it
+becomes like any other regular file
+6. If the file is being read through \[ga]cache\[ga] when it\[aq]s actually
+deleted from the temporary path then \[ga]cache\[ga] will simply swap the source
+to the cloud provider without interrupting the reading (a small blip can happen though)
+
+Files are uploaded in sequence and only one file is uploaded at a time.
+Uploads will be stored in a queue and be processed based on the order they were added.
+The queue and the temporary storage are persistent across restarts but
+can be cleared on startup with the \[ga]--cache-db-purge\[ga] flag.
+
+### Write Support ###
+
+Writes are supported through \[ga]cache\[ga].
+One caveat is that a mounted cache remote does not add any retry or fallback
+mechanism to the upload operation. This will depend on the implementation
+of the wrapped remote. Consider using \[ga]Offline uploading\[ga] for reliable writes.
+
+One special case is covered with \[ga]cache-writes\[ga] which, when enabled, will
+cache the file data at the same time as the upload, making it available
+from the cache store immediately once the upload is finished.
+
+### Read Features ###
+
+#### Multiple connections ####
+
+To counter the high latency between a local PC where rclone is running
+and cloud providers, the cache remote can split a read into multiple
+requests to the cloud provider for smaller file chunks and combine them
+locally, making them available almost immediately, before the reader
+needs them.
+
+This is similar to buffering when media files are played online. Rclone
+will stay around the current playback marker but always try its best to
+stay ahead and prepare the data in advance.
+
+#### Plex Integration ####
+
+There is a direct integration with Plex which allows cache to detect during reading
+if the file is in playback or not. This helps cache to adapt how it queries
+the cloud provider depending on what is needed.
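+
+For illustration, a cache remote with the Plex integration enabled ends
+up with a config section like this one, built from the example
+configuration session above (the password is stored obscured; it is
+shown here the way \[ga]rclone config\[ga] displays it):
+
+    [test-cache]
+    type = cache
+    remote = local:/test
+    plex_url = http://127.0.0.1:32400
+    plex_username = dummyusername
+    plex_password = *** ENCRYPTED ***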
+
+Scans will use a minimum number of workers (1), while during a confirmed
+playback cache will deploy the configured number of workers.
+
+This integration opens the doorway to additional performance improvements
+which will be explored in the near future.
+
+**Note:** If Plex options are not configured, \[ga]cache\[ga] will function with its
+configured options without adapting any of its settings.
+
+How to enable? Run \[ga]rclone config\[ga] and add all the Plex options (endpoint, username
+and password) in your remote and it will be automatically enabled.
+
+Affected settings:
+- \[ga]cache-workers\[ga]: _Configured value_ during confirmed playback or _1_ at all other times
+
+##### Certificate Validation #####
+
+When the Plex server is configured to only accept secure connections, it is
+possible to use \[ga].plex.direct\[ga] URLs to ensure certificate validation succeeds.
+These URLs are used by Plex internally to connect to the Plex server securely.
+
+The format for these URLs is the following:
+
+\[ga]https://ip-with-dots-replaced.server-hash.plex.direct:32400/\[ga]
+
+The \[ga]ip-with-dots-replaced\[ga] part can be any IPv4 address, where the dots
+have been replaced with dashes, e.g. \[ga]127.0.0.1\[ga] becomes \[ga]127-0-0-1\[ga].
+
+To get the \[ga]server-hash\[ga] part, the easiest way is to visit
+
+https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token
+
+This page will list all the available Plex servers for your account
+with at least one \[ga].plex.direct\[ga] link for each. Copy one URL and replace
+the IP address with the desired address. This can be used as the
+\[ga]plex_url\[ga] value.
+
+### Known issues ###
+
+#### Mount and --dir-cache-time ####
+
+--dir-cache-time controls the first layer of directory caching which works at the mount layer.
+Being an independent caching mechanism from the \[ga]cache\[ga] backend, it will manage its own entries
+based on the configured time.
+
+To avoid getting in a scenario where dir cache has obsolete data and cache would have the correct
+one, try to set \[ga]--dir-cache-time\[ga] to a lower time than \[ga]--cache-info-age\[ga]. Default values are
+already configured in this way.
+
+#### Windows support - Experimental ####
+
+There are a couple of issues with Windows \[ga]mount\[ga] functionality that still require some investigation.
+It should be considered experimental for now, as fixes come in for this OS.
+
+Most of the issues seem to be related to the difference between filesystems
+on Linux flavors and Windows as cache is heavily dependent on them.
+
+Any reports or feedback on how cache behaves on this OS is greatly appreciated.
+
+- https://github.com/rclone/rclone/issues/1935
+- https://github.com/rclone/rclone/issues/1907
+- https://github.com/rclone/rclone/issues/1834
+
+#### Risk of throttling ####
+
+Future iterations of the cache backend will make use of the pooling functionality
+of the cloud provider to synchronize and at the same time make writing through it
+more tolerant to failures.
+
+There are a couple of enhancements in the works to add these but in the meantime
+there is a valid concern that the expiring cache listings can lead to cloud provider
+throttles or bans due to repeated queries on it for very large mounts.
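+
+As a sketch of one such mitigation (see the recommendations below),
+keeping directory entries cached for longer than the default 6 hours
+reduces how often listings are re-queried:
+
+    rclone mount --allow-other --cache-info-age 48h test-cache: /var/tmp/test-cache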
+ +Some recommendations: +- don\[aq]t use a very small interval for entry information (\[ga]--cache-info-age\[ga]) +- while writes aren\[aq]t yet optimised, you can still write through \[ga]cache\[ga] which gives you the advantage +of adding the file in the cache at the same time if configured to do so. + +Future enhancements: + +- https://github.com/rclone/rclone/issues/1937 +- https://github.com/rclone/rclone/issues/1936 + +#### cache and crypt #### + +One common scenario is to keep your data encrypted in the cloud provider +using the \[ga]crypt\[ga] remote. \[ga]crypt\[ga] uses a similar technique to wrap around +an existing remote and handles this translation in a seamless way. + +There is an issue with wrapping the remotes in this order: +**cloud remote** -> **crypt** -> **cache** + +During testing, I experienced a lot of bans with the remotes in this order. +I suspect it might be related to how crypt opens files on the cloud provider +which makes it think we\[aq]re downloading the full file instead of small chunks. +Organizing the remotes in this order yields better results: +**cloud remote** -> **cache** -> **crypt** + +#### absolute remote paths #### + +\[ga]cache\[ga] can not differentiate between relative and absolute paths for the wrapped remote. +Any path given in the \[ga]remote\[ga] config setting and on the command line will be passed to +the wrapped remote as is, but for storing the chunks on disk the path will be made +relative by removing any leading \[ga]/\[ga] character. + +This behavior is irrelevant for most backend types, but there are backends where a leading \[ga]/\[ga] +changes the effective directory, e.g. in the \[ga]sftp\[ga] backend paths starting with a \[ga]/\[ga] are +relative to the root of the SSH server and paths without are relative to the user home directory. +As a result \[ga]sftp:bin\[ga] and \[ga]sftp:/bin\[ga] will share the same cache folder, even if they represent +a different directory on the SSH server. + +### Cache and Remote Control (--rc) ### +Cache supports the new \[ga]--rc\[ga] mode in rclone and can be remote controlled through the following end points: +By default, the listener is disabled if you do not add the flag. + +### rc cache/expire +Purge a remote from the cache backend. Supports either a directory or a file. +It supports both encrypted and unencrypted file names if cache is wrapped by crypt. + +Params: + - **remote** = path to remote **(required)** + - **withData** = true/false to delete cached data (chunks) as well _(optional, false by default)_ + + +### Standard options + +Here are the Standard options specific to cache (Cache a remote). + +#### --cache-remote + +Remote to cache. + +Normally should contain a \[aq]:\[aq] and a path, e.g. \[dq]myremote:path/to/dir\[dq], +\[dq]myremote:bucket\[dq] or maybe \[dq]myremote:\[dq] (not recommended). + +Properties: + +- Config: remote +- Env Var: RCLONE_CACHE_REMOTE +- Type: string +- Required: true + +#### --cache-plex-url + +The URL of the Plex server. + +Properties: + +- Config: plex_url +- Env Var: RCLONE_CACHE_PLEX_URL +- Type: string +- Required: false + +#### --cache-plex-username + +The username of the Plex user. + +Properties: + +- Config: plex_username +- Env Var: RCLONE_CACHE_PLEX_USERNAME +- Type: string +- Required: false + +#### --cache-plex-password + +The password of the Plex user. + +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). 
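+
+For example, a suitable obscured value can be generated with (the
+password shown is a placeholder):
+
+    rclone obscure \[aq]your-plex-password\[aq]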
+ +Properties: + +- Config: plex_password +- Env Var: RCLONE_CACHE_PLEX_PASSWORD +- Type: string +- Required: false + +#### --cache-chunk-size + +The size of a chunk (partial file data). + +Use lower numbers for slower connections. If the chunk size is +changed, any downloaded chunks will be invalid and cache-chunk-path +will need to be cleared or unexpected EOF errors will occur. + +Properties: + +- Config: chunk_size +- Env Var: RCLONE_CACHE_CHUNK_SIZE +- Type: SizeSuffix +- Default: 5Mi +- Examples: + - \[dq]1M\[dq] + - 1 MiB + - \[dq]5M\[dq] + - 5 MiB + - \[dq]10M\[dq] + - 10 MiB + +#### --cache-info-age + +How long to cache file structure information (directory listings, file size, times, etc.). +If all write operations are done through the cache then you can safely make +this value very large as the cache store will also be updated in real time. + +Properties: + +- Config: info_age +- Env Var: RCLONE_CACHE_INFO_AGE +- Type: Duration +- Default: 6h0m0s +- Examples: + - \[dq]1h\[dq] + - 1 hour + - \[dq]24h\[dq] + - 24 hours + - \[dq]48h\[dq] + - 48 hours + +#### --cache-chunk-total-size + +The total size that the chunks can take up on the local disk. + +If the cache exceeds this value then it will start to delete the +oldest chunks until it goes under this value. + +Properties: + +- Config: chunk_total_size +- Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE +- Type: SizeSuffix +- Default: 10Gi +- Examples: + - \[dq]500M\[dq] + - 500 MiB + - \[dq]1G\[dq] + - 1 GiB + - \[dq]10G\[dq] + - 10 GiB + +### Advanced options + +Here are the Advanced options specific to cache (Cache a remote). + +#### --cache-plex-token + +The plex token for authentication - auto set normally. + +Properties: + +- Config: plex_token +- Env Var: RCLONE_CACHE_PLEX_TOKEN +- Type: string +- Required: false + +#### --cache-plex-insecure + +Skip all certificate verification when connecting to the Plex server. + +Properties: + +- Config: plex_insecure +- Env Var: RCLONE_CACHE_PLEX_INSECURE +- Type: string +- Required: false + +#### --cache-db-path + +Directory to store file structure metadata DB. + +The remote name is used as the DB file name. + +Properties: + +- Config: db_path +- Env Var: RCLONE_CACHE_DB_PATH +- Type: string +- Default: \[dq]$HOME/.cache/rclone/cache-backend\[dq] + +#### --cache-chunk-path + +Directory to cache chunk files. + +Path to where partial file data (chunks) are stored locally. The remote +name is appended to the final path. + +This config follows the \[dq]--cache-db-path\[dq]. If you specify a custom +location for \[dq]--cache-db-path\[dq] and don\[aq]t specify one for \[dq]--cache-chunk-path\[dq] +then \[dq]--cache-chunk-path\[dq] will use the same path as \[dq]--cache-db-path\[dq]. + +Properties: + +- Config: chunk_path +- Env Var: RCLONE_CACHE_CHUNK_PATH +- Type: string +- Default: \[dq]$HOME/.cache/rclone/cache-backend\[dq] + +#### --cache-db-purge + +Clear all the cached data for this remote on start. + +Properties: + +- Config: db_purge +- Env Var: RCLONE_CACHE_DB_PURGE +- Type: bool +- Default: false + +#### --cache-chunk-clean-interval + +How often should the cache perform cleanups of the chunk storage. + +The default value should be ok for most people. If you find that the +cache goes over \[dq]cache-chunk-total-size\[dq] too often then try to lower +this value to force it to perform cleanups more often. 
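+
+For instance (the remote name and values are purely illustrative), a small
+cache disk could be kept in check with a tighter total size and more
+frequent cleanups:
+
+    rclone mount --cache-chunk-total-size 5G --cache-chunk-clean-interval 30s mycache: /mnt/media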
+
+Properties:
+
+- Config: chunk_clean_interval
+- Env Var: RCLONE_CACHE_CHUNK_CLEAN_INTERVAL
+- Type: Duration
+- Default: 1m0s
+
+#### --cache-read-retries
+
+How many times to retry a read from cache storage.
+
+Since reading from a cache stream is independent from downloading file
+data, readers can get to a point where there\[aq]s no more data in the
+cache. Most of the time this indicates a connectivity issue if
+cache isn\[aq]t able to provide file data anymore.
+
+For really slow connections, increase this to a point where the stream is
+able to provide data, but expect heavy stuttering.
+
+Properties:
+
+- Config: read_retries
+- Env Var: RCLONE_CACHE_READ_RETRIES
+- Type: int
+- Default: 10
+
+#### --cache-workers
+
+How many workers should run in parallel to download chunks.
+
+Higher values will mean more parallel processing (more CPU needed)
+and more concurrent requests on the cloud provider. This impacts
+several aspects, like the cloud provider API limits and the stress on the
+hardware that rclone runs on, but it also means that streams will be
+more fluid and data will be available to readers much faster.
+
+**Note**: If the optional Plex integration is enabled then this
+setting will adapt to the type of reading performed and the value
+specified here will be used as a maximum number of workers to use.
+
+Properties:
+
+- Config: workers
+- Env Var: RCLONE_CACHE_WORKERS
+- Type: int
+- Default: 4
+
+#### --cache-chunk-no-memory
+
+Disable the in-memory cache for storing chunks during streaming.
+
+By default, cache will keep file data during streaming in RAM as well
+to provide it to readers as fast as possible.
+
+This transient data is evicted as soon as it is read and the number of
+chunks stored doesn\[aq]t exceed the number of workers. However, depending
+on other settings like \[dq]cache-chunk-size\[dq] and \[dq]cache-workers\[dq] this footprint
+can increase if there are parallel streams too (multiple files being read
+at the same time).
+
+If the hardware permits it, keep this feature enabled for overall better
+performance during streaming; disable it if RAM is scarce on the local
+machine.
+
+Properties:
+
+- Config: chunk_no_memory
+- Env Var: RCLONE_CACHE_CHUNK_NO_MEMORY
+- Type: bool
+- Default: false
+
+#### --cache-rps
+
+Limits the number of requests per second to the source FS (-1 to disable).
+
+This setting places a hard limit on the number of requests per second
+that cache will make to the cloud provider remote and tries to
+respect that value by inserting waits between reads.
+
+If you find that you\[aq]re getting banned or limited on the cloud
+provider through cache and know that a smaller number of requests per
+second will allow you to work with it, then you can use this setting
+for that.
+
+A good balance of all the other settings should make this setting
+unnecessary, but it is available for more special cases.
+
+**NOTE**: This will limit the number of requests during streams but
+other API calls to the cloud provider like directory listings will
+still pass.
+
+Properties:
+
+- Config: rps
+- Env Var: RCLONE_CACHE_RPS
+- Type: int
+- Default: -1
+
+#### --cache-writes
+
+Cache file data on writes through the FS.
+
+If you need to read files immediately after you upload them through
+cache, you can enable this flag to have their data stored in the
+cache store at the same time during upload.
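+
+For example (the remote name is illustrative), an upload that also
+populates the local chunk store so the files are immediately readable:
+
+    rclone copy --cache-writes /home/source mycache:backup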
+ +Properties: + +- Config: writes +- Env Var: RCLONE_CACHE_WRITES +- Type: bool +- Default: false + +#### --cache-tmp-upload-path + +Directory to keep temporary files until they are uploaded. + +This is the path where cache will use as a temporary storage for new +files that need to be uploaded to the cloud provider. + +Specifying a value will enable this feature. Without it, it is +completely disabled and files will be uploaded directly to the cloud +provider + +Properties: + +- Config: tmp_upload_path +- Env Var: RCLONE_CACHE_TMP_UPLOAD_PATH +- Type: string +- Required: false + +#### --cache-tmp-wait-time + +How long should files be stored in local cache before being uploaded. + +This is the duration that a file must wait in the temporary location +_cache-tmp-upload-path_ before it is selected for upload. + +Note that only one file is uploaded at a time and it can take longer +to start the upload if a queue formed for this purpose. + +Properties: + +- Config: tmp_wait_time +- Env Var: RCLONE_CACHE_TMP_WAIT_TIME +- Type: Duration +- Default: 15s + +#### --cache-db-wait-time + +How long to wait for the DB to be available - 0 is unlimited. + +Only one process can have the DB open at any one time, so rclone waits +for this duration for the DB to become available before it gives an +error. + +If you set it to 0 then it will wait forever. + +Properties: + +- Config: db_wait_time +- Env Var: RCLONE_CACHE_DB_WAIT_TIME +- Type: Duration +- Default: 1s + +## Backend commands + +Here are the commands specific to the cache backend. + +Run them with + + rclone backend COMMAND remote: + +The help below will explain what arguments each command takes. + +See the [backend](https://rclone.org/commands/rclone_backend/) command for more +info on how to pass options and arguments. + +These can be run on a running backend using the rc command +[backend/command](https://rclone.org/rc/#backend-command). + +### stats + +Print stats on the cache backend in JSON format. + + rclone backend stats remote: [options] [+] + + + +# Chunker + +The \[ga]chunker\[ga] overlay transparently splits large files into smaller chunks +during upload to wrapped remote and transparently assembles them back +when the file is downloaded. This allows to effectively overcome size limits +imposed by storage providers. + +## Configuration + +To use it, first set up the underlying remote following the configuration +instructions for that remote. You can also use a local pathname instead of +a remote. + +First check your chosen remote is working - we\[aq]ll call it \[ga]remote:path\[ga] here. +Note that anything inside \[ga]remote:path\[ga] will be chunked and anything outside +won\[aq]t. This means that if you are using a bucket-based remote (e.g. S3, B2, swift) +then you should probably put the bucket in the remote \[ga]s3:bucket\[ga]. + +Now configure \[ga]chunker\[ga] using \[ga]rclone config\[ga]. We will call this one \[ga]overlay\[ga] +to separate it from the \[ga]remote\[ga] itself. +\f[R] +.fi +.PP +No remotes found, make a new one? +n) New remote s) Set configuration password q) Quit config n/s/q> n +name> overlay Type of storage to configure. +Choose a number from below, or type in your own value [snip] XX / +Transparently chunk/split large files \ \[dq]chunker\[dq] [snip] +Storage> chunker Remote to chunk/unchunk. +Normally should contain a \[aq]:\[aq] and a path, e.g. +\[dq]myremote:path/to/dir\[dq], \[dq]myremote:bucket\[dq] or maybe +\[dq]myremote:\[dq] (not recommended). +Enter a string value. 
+Press Enter for the default (\[dq]\[dq]). +remote> remote:path Files larger than chunk size will be split in +chunks. +Enter a size with suffix K,M,G,T. +Press Enter for the default (\[dq]2G\[dq]). +chunk_size> 100M Choose how chunker handles hash sums. +All modes but \[dq]none\[dq] require metadata. +Enter a string value. +Press Enter for the default (\[dq]md5\[dq]). +Choose a number from below, or type in your own value 1 / Pass any hash +supported by wrapped remote for non-chunked files, return nothing +otherwise \ \[dq]none\[dq] 2 / MD5 for composite files \ \[dq]md5\[dq] 3 +/ SHA1 for composite files \ \[dq]sha1\[dq] 4 / MD5 for all files +\ \[dq]md5all\[dq] 5 / SHA1 for all files \ \[dq]sha1all\[dq] 6 / +Copying a file to chunker will request MD5 from the source falling back +to SHA1 if unsupported \ \[dq]md5quick\[dq] 7 / Similar to +\[dq]md5quick\[dq] but prefers SHA1 over MD5 \ \[dq]sha1quick\[dq] +hash_type> md5 Edit advanced config? +(y/n) y) Yes n) No y/n> n Remote config -------------------- [overlay] +type = chunker remote = remote:bucket chunk_size = 100M hash_type = md5 +-------------------- y) Yes this is OK e) Edit this remote d) Delete +this remote y/e/d> y +.IP +.nf +\f[C] +### Specifying the remote + +In normal use, make sure the remote has a \[ga]:\[ga] in. If you specify the remote +without a \[ga]:\[ga] then rclone will use a local directory of that name. +So if you use a remote of \[ga]/path/to/secret/files\[ga] then rclone will +chunk stuff in that directory. If you use a remote of \[ga]name\[ga] then rclone +will put files in a directory called \[ga]name\[ga] in the current directory. + + +### Chunking + +When rclone starts a file upload, chunker checks the file size. If it +doesn\[aq]t exceed the configured chunk size, chunker will just pass the file +to the wrapped remote (however, see caveat below). If a file is large, chunker will transparently cut +data in pieces with temporary names and stream them one by one, on the fly. +Each data chunk will contain the specified number of bytes, except for the +last one which may have less data. If file size is unknown in advance +(this is called a streaming upload), chunker will internally create +a temporary copy, record its size and repeat the above process. + When upload completes, temporary chunk files are finally renamed. This scheme guarantees that operations can be run in parallel and look from outside as atomic. -A similar method with hidden temporary chunks is used for other -operations (copy/move/rename, etc.). -If an operation fails, hidden chunks are normally destroyed, and the -target composite file stays intact. -.PP +A similar method with hidden temporary chunks is used for other operations +(copy/move/rename, etc.). If an operation fails, hidden chunks are normally +destroyed, and the target composite file stays intact. + When a composite file download is requested, chunker transparently -assembles it by concatenating data chunks in order. -As the split is trivial one could even manually concatenate data chunks -together to obtain the original content. -.PP -When the \f[C]list\f[R] rclone command scans a directory on wrapped -remote, the potential chunk files are accounted for, grouped and -assembled into composite directory entries. -Any temporary chunks are hidden. -.PP +assembles it by concatenating data chunks in order. As the split is trivial +one could even manually concatenate data chunks together to obtain the +original content. 
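+
+For example, with the default chunk naming, the content of a hypothetical
+two-chunk file \[ga]big.bin\[ga] could even be reassembled by hand (a sketch; the
+separate metadata object, if any, is not needed for the data itself):
+
+    cat big.bin.rclone_chunk.001 big.bin.rclone_chunk.002 > big.bin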
+ +When the \[ga]list\[ga] rclone command scans a directory on wrapped remote, +the potential chunk files are accounted for, grouped and assembled into +composite directory entries. Any temporary chunks are hidden. + List and other commands can sometimes come across composite files with -missing or invalid chunks, e.g. -shadowed by like-named directory or another file. -This usually means that wrapped file system has been directly tampered -with or damaged. -If chunker detects a missing chunk it will by default print warning, -skip the whole incomplete group of chunks but proceed with current -command. -You can set the \f[C]--chunker-fail-hard\f[R] flag to have commands -abort with error message in such cases. -.SS Chunk names -.PP -The default chunk name format is \f[C]*.rclone_chunk.###\f[R], hence by -default chunk names are \f[C]BIG_FILE_NAME.rclone_chunk.001\f[R], -\f[C]BIG_FILE_NAME.rclone_chunk.002\f[R] etc. -You can configure another name format using the \f[C]name_format\f[R] -configuration file option. -The format uses asterisk \f[C]*\f[R] as a placeholder for the base file -name and one or more consecutive hash characters \f[C]#\f[R] as a -placeholder for sequential chunk number. -There must be one and only one asterisk. -The number of consecutive hash characters defines the minimum length of -a string representing a chunk number. +missing or invalid chunks, e.g. shadowed by like-named directory or +another file. This usually means that wrapped file system has been directly +tampered with or damaged. If chunker detects a missing chunk it will +by default print warning, skip the whole incomplete group of chunks but +proceed with current command. +You can set the \[ga]--chunker-fail-hard\[ga] flag to have commands abort with +error message in such cases. + +**Caveat**: As it is now, chunker will always create a temporary file in the +backend and then rename it, even if the file is below the chunk threshold. +This will result in unnecessary API calls and can severely restrict throughput +when handling transfers primarily composed of small files on some backends (e.g. Box). +A workaround to this issue is to use chunker only for files above the chunk threshold +via \[ga]--min-size\[ga] and then perform a separate call without chunker on the remaining +files. + + +#### Chunk names + +The default chunk name format is \[ga]*.rclone_chunk.###\[ga], hence by default +chunk names are \[ga]BIG_FILE_NAME.rclone_chunk.001\[ga], +\[ga]BIG_FILE_NAME.rclone_chunk.002\[ga] etc. You can configure another name format +using the \[ga]name_format\[ga] configuration file option. The format uses asterisk +\[ga]*\[ga] as a placeholder for the base file name and one or more consecutive +hash characters \[ga]#\[ga] as a placeholder for sequential chunk number. +There must be one and only one asterisk. The number of consecutive hash +characters defines the minimum length of a string representing a chunk number. If decimal chunk number has less digits than the number of hashes, it is -left-padded by zeros. -If the decimal string is longer, it is left intact. -By default numbering starts from 1 but there is another option that -allows user to start from 0, e.g. -for compatibility with legacy software. 
-.PP -For example, if name format is \f[C]big_*-##.part\f[R] and original file -name is \f[C]data.txt\f[R] and numbering starts from 0, then the first -chunk will be named \f[C]big_data.txt-00.part\f[R], the 99th chunk will -be \f[C]big_data.txt-98.part\f[R] and the 302nd chunk will become -\f[C]big_data.txt-301.part\f[R]. -.PP -Note that \f[C]list\f[R] assembles composite directory entries only when -chunk names match the configured format and treats non-conforming file -names as normal non-chunked files. -.PP -When using \f[C]norename\f[R] transactions, chunk names will -additionally have a unique file version suffix. -For example, \f[C]BIG_FILE_NAME.rclone_chunk.001_bp562k\f[R]. -.SS Metadata -.PP -Besides data chunks chunker will by default create metadata object for a -composite file. -The object is named after the original file. -Chunker allows user to disable metadata completely (the \f[C]none\f[R] -format). +left-padded by zeros. If the decimal string is longer, it is left intact. +By default numbering starts from 1 but there is another option that allows +user to start from 0, e.g. for compatibility with legacy software. + +For example, if name format is \[ga]big_*-##.part\[ga] and original file name is +\[ga]data.txt\[ga] and numbering starts from 0, then the first chunk will be named +\[ga]big_data.txt-00.part\[ga], the 99th chunk will be \[ga]big_data.txt-98.part\[ga] +and the 302nd chunk will become \[ga]big_data.txt-301.part\[ga]. + +Note that \[ga]list\[ga] assembles composite directory entries only when chunk names +match the configured format and treats non-conforming file names as normal +non-chunked files. + +When using \[ga]norename\[ga] transactions, chunk names will additionally have a unique +file version suffix. For example, \[ga]BIG_FILE_NAME.rclone_chunk.001_bp562k\[ga]. + + +### Metadata + +Besides data chunks chunker will by default create metadata object for +a composite file. The object is named after the original file. +Chunker allows user to disable metadata completely (the \[ga]none\[ga] format). Note that metadata is normally not created for files smaller than the -configured chunk size. -This may change in future rclone releases. -.SS Simple JSON metadata format -.PP -This is the default format. -It supports hash sums and chunk validation for composite files. -Meta objects carry the following fields: -.IP \[bu] 2 -\f[C]ver\f[R] - version of format, currently \f[C]1\f[R] -.IP \[bu] 2 -\f[C]size\f[R] - total size of composite file -.IP \[bu] 2 -\f[C]nchunks\f[R] - number of data chunks in file -.IP \[bu] 2 -\f[C]md5\f[R] - MD5 hashsum of composite file (if present) -.IP \[bu] 2 -\f[C]sha1\f[R] - SHA1 hashsum (if present) -.IP \[bu] 2 -\f[C]txn\f[R] - identifies current version of the file -.PP -There is no field for composite file name as it\[aq]s simply equal to -the name of meta object on the wrapped remote. -Please refer to respective sections for details on hashsums and modified -time handling. -.SS No metadata -.PP -You can disable meta objects by setting the meta format option to -\f[C]none\f[R]. +configured chunk size. This may change in future rclone releases. + +#### Simple JSON metadata format + +This is the default format. It supports hash sums and chunk validation +for composite files. 
Meta objects carry the following fields: + +- \[ga]ver\[ga] - version of format, currently \[ga]1\[ga] +- \[ga]size\[ga] - total size of composite file +- \[ga]nchunks\[ga] - number of data chunks in file +- \[ga]md5\[ga] - MD5 hashsum of composite file (if present) +- \[ga]sha1\[ga] - SHA1 hashsum (if present) +- \[ga]txn\[ga] - identifies current version of the file + +There is no field for composite file name as it\[aq]s simply equal to the name +of meta object on the wrapped remote. Please refer to respective sections +for details on hashsums and modified time handling. + +#### No metadata + +You can disable meta objects by setting the meta format option to \[ga]none\[ga]. In this mode chunker will scan directory for all files that follow -configured chunk name format, group them by detecting chunks with the -same base name and show group names as virtual composite files. +configured chunk name format, group them by detecting chunks with the same +base name and show group names as virtual composite files. This method is more prone to missing chunk errors (especially missing last chunk) than format with metadata enabled. -.SS Hashsums -.PP + + +### Hashsums + Chunker supports hashsums only when a compatible metadata is present. -Hence, if you choose metadata format of \f[C]none\f[R], chunker will -report hashsum as \f[C]UNSUPPORTED\f[R]. -.PP +Hence, if you choose metadata format of \[ga]none\[ga], chunker will report hashsum +as \[ga]UNSUPPORTED\[ga]. + Please note that by default metadata is stored only for composite files. -If a file is smaller than configured chunk size, chunker will -transparently redirect hash requests to wrapped remote, so support -depends on that. +If a file is smaller than configured chunk size, chunker will transparently +redirect hash requests to wrapped remote, so support depends on that. You will see the empty string as a hashsum of requested type for small files if the wrapped remote doesn\[aq]t support it. -.PP + Many storage backends support MD5 and SHA1 hash types, so does chunker. With chunker you can choose one or another but not both. MD5 is set by default as the most supported type. Since chunker keeps hashes for composite files and falls back to the -wrapped remote hash for non-chunked ones, we advise you to choose the -same hash type as supported by wrapped remote so that your file listings +wrapped remote hash for non-chunked ones, we advise you to choose the same +hash type as supported by wrapped remote so that your file listings look coherent. -.PP -If your storage backend does not support MD5 or SHA1 but you need -consistent file hashing, configure chunker with \f[C]md5all\f[R] or -\f[C]sha1all\f[R]. -These two modes guarantee given hash for all files. -If wrapped remote doesn\[aq]t support it, chunker will then add metadata -to all files, even small. -However, this can double the amount of small files in storage and incur -additional service charges. + +If your storage backend does not support MD5 or SHA1 but you need consistent +file hashing, configure chunker with \[ga]md5all\[ga] or \[ga]sha1all\[ga]. These two modes +guarantee given hash for all files. If wrapped remote doesn\[aq]t support it, +chunker will then add metadata to all files, even small. However, this can +double the amount of small files in storage and incur additional service charges. You can even use chunker to force md5/sha1 support in any other remote -at expense of sidecar meta objects by setting e.g. 
-\f[C]hash_type=sha1all\f[R] to force hashsums and -\f[C]chunk_size=1P\f[R] to effectively disable chunking. -.PP +at expense of sidecar meta objects by setting e.g. \[ga]hash_type=sha1all\[ga] +to force hashsums and \[ga]chunk_size=1P\[ga] to effectively disable chunking. + Normally, when a file is copied to chunker controlled remote, chunker -will ask the file source for compatible file hash and revert to -on-the-fly calculation if none is found. -This involves some CPU overhead but provides a guarantee that given -hashsum is available. -Also, chunker will reject a server-side copy or move operation if source -and destination hashsum types are different resulting in the extra -network bandwidth, too. -In some rare cases this may be undesired, so chunker provides two -optional choices: \f[C]sha1quick\f[R] and \f[C]md5quick\f[R]. -If the source does not support primary hash type and the quick mode is -enabled, chunker will try to fall back to the secondary type. -This will save CPU and bandwidth but can result in empty hashsums at -destination. -Beware of consequences: the \f[C]sync\f[R] command will revert -(sometimes silently) to time/size comparison if compatible hashsums +will ask the file source for compatible file hash and revert to on-the-fly +calculation if none is found. This involves some CPU overhead but provides +a guarantee that given hashsum is available. Also, chunker will reject +a server-side copy or move operation if source and destination hashsum +types are different resulting in the extra network bandwidth, too. +In some rare cases this may be undesired, so chunker provides two optional +choices: \[ga]sha1quick\[ga] and \[ga]md5quick\[ga]. If the source does not support primary +hash type and the quick mode is enabled, chunker will try to fall back to +the secondary type. This will save CPU and bandwidth but can result in empty +hashsums at destination. Beware of consequences: the \[ga]sync\[ga] command will +revert (sometimes silently) to time/size comparison if compatible hashsums between source and target are not found. -.SS Modified time -.PP + + +### Modified time + Chunker stores modification times using the wrapped remote so support -depends on that. -For a small non-chunked file the chunker overlay simply manipulates -modification time of the wrapped remote file. -For a composite file with metadata chunker will get and set modification -time of the metadata object on the wrapped remote. -If file is chunked but metadata format is \f[C]none\f[R] then chunker -will use modification time of the first data chunk. -.SS Migrations -.PP -The idiomatic way to migrate to a different chunk size, hash type, -transaction style or chunk naming scheme is to: -.IP \[bu] 2 -Collect all your chunked files under a directory and have your chunker -remote point to it. -.IP \[bu] 2 -Create another directory (most probably on the same cloud storage) and -configure a new remote with desired metadata format, hash type, chunk -naming etc. -.IP \[bu] 2 -Now run \f[C]rclone sync --interactive oldchunks: newchunks:\f[R] and -all your data will be transparently converted in transfer. -This may take some time, yet chunker will try server-side copy if -possible. -.IP \[bu] 2 -After checking data integrity you may remove configuration section of -the old remote. -.PP +depends on that. For a small non-chunked file the chunker overlay simply +manipulates modification time of the wrapped remote file. 
+For a composite file with metadata chunker will get and set +modification time of the metadata object on the wrapped remote. +If file is chunked but metadata format is \[ga]none\[ga] then chunker will +use modification time of the first data chunk. + + +### Migrations + +The idiomatic way to migrate to a different chunk size, hash type, transaction +style or chunk naming scheme is to: + +- Collect all your chunked files under a directory and have your + chunker remote point to it. +- Create another directory (most probably on the same cloud storage) + and configure a new remote with desired metadata format, + hash type, chunk naming etc. +- Now run \[ga]rclone sync --interactive oldchunks: newchunks:\[ga] and all your data + will be transparently converted in transfer. + This may take some time, yet chunker will try server-side + copy if possible. +- After checking data integrity you may remove configuration section + of the old remote. + If rclone gets killed during a long operation on a big composite file, -hidden temporary chunks may stay in the directory. -They will not be shown by the \f[C]list\f[R] command but will eat up -your account quota. -Please note that the \f[C]deletefile\f[R] command deletes only active -chunks of a file. -As a workaround, you can use remote of the wrapped file system to see -them. +hidden temporary chunks may stay in the directory. They will not be +shown by the \[ga]list\[ga] command but will eat up your account quota. +Please note that the \[ga]deletefile\[ga] command deletes only active +chunks of a file. As a workaround, you can use remote of the wrapped +file system to see them. An easy way to get rid of hidden garbage is to copy littered directory somewhere using the chunker remote and purge the original directory. -The \f[C]copy\f[R] command will copy only active chunks while the -\f[C]purge\f[R] will remove everything including garbage. -.SS Caveats and Limitations -.PP -Chunker requires wrapped remote to support server-side \f[C]move\f[R] -(or \f[C]copy\f[R] + \f[C]delete\f[R]) operations, otherwise it will -explicitly refuse to start. -This is because it internally renames temporary chunk files to their -final names when an operation completes successfully. -.PP -Chunker encodes chunk number in file name, so with default -\f[C]name_format\f[R] setting it adds 17 characters. -Also chunker adds 7 characters of temporary suffix during operations. -Many file systems limit base file name without path by 255 characters. -Using rclone\[aq]s crypt remote as a base file system limits file name -by 143 characters. -Thus, maximum name length is 231 for most files and 119 for -chunker-over-crypt. -A user in need can change name format to e.g. -\f[C]*.rcc##\f[R] and save 10 characters (provided at most 99 chunks per -file). -.PP +The \[ga]copy\[ga] command will copy only active chunks while the \[ga]purge\[ga] will +remove everything including garbage. + + +### Caveats and Limitations + +Chunker requires wrapped remote to support server-side \[ga]move\[ga] (or \[ga]copy\[ga] + +\[ga]delete\[ga]) operations, otherwise it will explicitly refuse to start. +This is because it internally renames temporary chunk files to their final +names when an operation completes successfully. + +Chunker encodes chunk number in file name, so with default \[ga]name_format\[ga] +setting it adds 17 characters. Also chunker adds 7 characters of temporary +suffix during operations. Many file systems limit base file name without path +by 255 characters. 
Using rclone\[aq]s crypt remote as a base file system limits +file name by 143 characters. Thus, maximum name length is 231 for most files +and 119 for chunker-over-crypt. A user in need can change name format to +e.g. \[ga]*.rcc##\[ga] and save 10 characters (provided at most 99 chunks per file). + Note that a move implemented using the copy-and-delete method may incur double charging with some cloud storage providers. -.PP + Chunker will not automatically rename existing chunks when you run -\f[C]rclone config\f[R] on a live remote and change the chunk name -format. -Beware that in result of this some files which have been treated as -chunks before the change can pop up in directory listings as normal -files and vice versa. -The same warning holds for the chunk size. +\[ga]rclone config\[ga] on a live remote and change the chunk name format. +Beware that in result of this some files which have been treated as chunks +before the change can pop up in directory listings as normal files +and vice versa. The same warning holds for the chunk size. If you desperately need to change critical chunking settings, you should run data migration as described above. -.PP + If wrapped remote is case insensitive, the chunker overlay will inherit -that property (so you can\[aq]t have a file called \[dq]Hello.doc\[dq] -and \[dq]hello.doc\[dq] in the same directory). -.PP -Chunker included in rclone releases up to \f[C]v1.54\f[R] can sometimes -fail to detect metadata produced by recent versions of rclone. -We recommend users to keep rclone up-to-date to avoid data corruption. -.PP -Changing \f[C]transactions\f[R] is dangerous and requires explicit -migration. -.SS Standard options -.PP -Here are the Standard options specific to chunker (Transparently -chunk/split large files). -.SS --chunker-remote -.PP +that property (so you can\[aq]t have a file called \[dq]Hello.doc\[dq] and \[dq]hello.doc\[dq] +in the same directory). + +Chunker included in rclone releases up to \[ga]v1.54\[ga] can sometimes fail to +detect metadata produced by recent versions of rclone. We recommend users +to keep rclone up-to-date to avoid data corruption. + +Changing \[ga]transactions\[ga] is dangerous and requires explicit migration. + + +### Standard options + +Here are the Standard options specific to chunker (Transparently chunk/split large files). + +#### --chunker-remote + Remote to chunk/unchunk. -.PP -Normally should contain a \[aq]:\[aq] and a path, e.g. -\[dq]myremote:path/to/dir\[dq], \[dq]myremote:bucket\[dq] or maybe -\[dq]myremote:\[dq] (not recommended). -.PP + +Normally should contain a \[aq]:\[aq] and a path, e.g. \[dq]myremote:path/to/dir\[dq], +\[dq]myremote:bucket\[dq] or maybe \[dq]myremote:\[dq] (not recommended). + Properties: -.IP \[bu] 2 -Config: remote -.IP \[bu] 2 -Env Var: RCLONE_CHUNKER_REMOTE -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.SS --chunker-chunk-size -.PP + +- Config: remote +- Env Var: RCLONE_CHUNKER_REMOTE +- Type: string +- Required: true + +#### --chunker-chunk-size + Files larger than chunk size will be split in chunks. -.PP + Properties: -.IP \[bu] 2 -Config: chunk_size -.IP \[bu] 2 -Env Var: RCLONE_CHUNKER_CHUNK_SIZE -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 2Gi -.SS --chunker-hash-type -.PP + +- Config: chunk_size +- Env Var: RCLONE_CHUNKER_CHUNK_SIZE +- Type: SizeSuffix +- Default: 2Gi + +#### --chunker-hash-type + Choose how chunker handles hash sums. -.PP + All modes but \[dq]none\[dq] require metadata. 
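+
+For example (the names are illustrative), a config stanza that guarantees
+MD5 hashes for every file, large or small, could look like:
+
+    [overlay]
+    type = chunker
+    remote = remote:bucket
+    hash_type = md5all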
-.PP + Properties: -.IP \[bu] 2 -Config: hash_type -.IP \[bu] 2 -Env Var: RCLONE_CHUNKER_HASH_TYPE -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]md5\[dq] -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]none\[dq] -.RS 2 -.IP \[bu] 2 -Pass any hash supported by wrapped remote for non-chunked files. -.IP \[bu] 2 -Return nothing otherwise. -.RE -.IP \[bu] 2 -\[dq]md5\[dq] -.RS 2 -.IP \[bu] 2 -MD5 for composite files. -.RE -.IP \[bu] 2 -\[dq]sha1\[dq] -.RS 2 -.IP \[bu] 2 -SHA1 for composite files. -.RE -.IP \[bu] 2 -\[dq]md5all\[dq] -.RS 2 -.IP \[bu] 2 -MD5 for all files. -.RE -.IP \[bu] 2 -\[dq]sha1all\[dq] -.RS 2 -.IP \[bu] 2 -SHA1 for all files. -.RE -.IP \[bu] 2 -\[dq]md5quick\[dq] -.RS 2 -.IP \[bu] 2 -Copying a file to chunker will request MD5 from the source. -.IP \[bu] 2 -Falling back to SHA1 if unsupported. -.RE -.IP \[bu] 2 -\[dq]sha1quick\[dq] -.RS 2 -.IP \[bu] 2 -Similar to \[dq]md5quick\[dq] but prefers SHA1 over MD5. -.RE -.RE -.SS Advanced options -.PP -Here are the Advanced options specific to chunker (Transparently -chunk/split large files). -.SS --chunker-name-format -.PP + +- Config: hash_type +- Env Var: RCLONE_CHUNKER_HASH_TYPE +- Type: string +- Default: \[dq]md5\[dq] +- Examples: + - \[dq]none\[dq] + - Pass any hash supported by wrapped remote for non-chunked files. + - Return nothing otherwise. + - \[dq]md5\[dq] + - MD5 for composite files. + - \[dq]sha1\[dq] + - SHA1 for composite files. + - \[dq]md5all\[dq] + - MD5 for all files. + - \[dq]sha1all\[dq] + - SHA1 for all files. + - \[dq]md5quick\[dq] + - Copying a file to chunker will request MD5 from the source. + - Falling back to SHA1 if unsupported. + - \[dq]sha1quick\[dq] + - Similar to \[dq]md5quick\[dq] but prefers SHA1 over MD5. + +### Advanced options + +Here are the Advanced options specific to chunker (Transparently chunk/split large files). + +#### --chunker-name-format + String format of chunk file names. -.PP + The two placeholders are: base file name (*) and chunk number (#...). -There must be one and only one asterisk and one or more consecutive hash -characters. -If chunk number has less digits than the number of hashes, it is -left-padded by zeros. +There must be one and only one asterisk and one or more consecutive hash characters. +If chunk number has less digits than the number of hashes, it is left-padded by zeros. If there are more digits in the number, they are left as is. -Possible chunk files are ignored if their name does not match given -format. -.PP +Possible chunk files are ignored if their name does not match given format. + Properties: -.IP \[bu] 2 -Config: name_format -.IP \[bu] 2 -Env Var: RCLONE_CHUNKER_NAME_FORMAT -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]*.rclone_chunk.###\[dq] -.SS --chunker-start-from -.PP -Minimum valid chunk number. -Usually 0 or 1. -.PP + +- Config: name_format +- Env Var: RCLONE_CHUNKER_NAME_FORMAT +- Type: string +- Default: \[dq]*.rclone_chunk.###\[dq] + +#### --chunker-start-from + +Minimum valid chunk number. Usually 0 or 1. + By default chunk numbers start from 1. -.PP + Properties: -.IP \[bu] 2 -Config: start_from -.IP \[bu] 2 -Env Var: RCLONE_CHUNKER_START_FROM -.IP \[bu] 2 -Type: int -.IP \[bu] 2 -Default: 1 -.SS --chunker-meta-format -.PP + +- Config: start_from +- Env Var: RCLONE_CHUNKER_START_FROM +- Type: int +- Default: 1 + +#### --chunker-meta-format + Format of the metadata object or \[dq]none\[dq]. -.PP + By default \[dq]simplejson\[dq]. Metadata is a small JSON file named after the composite file. 
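+
+As an illustration (all values are made up), a meta object for a two-chunk
+composite file might contain:
+
+    {\[dq]ver\[dq]:1,\[dq]size\[dq]:2621440,\[dq]nchunks\[dq]:2,\[dq]md5\[dq]:\[dq]0123456789abcdef0123456789abcdef\[dq],\[dq]txn\[dq]:\[dq]bp562k\[dq]}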
-.PP + Properties: -.IP \[bu] 2 -Config: meta_format -.IP \[bu] 2 -Env Var: RCLONE_CHUNKER_META_FORMAT -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]simplejson\[dq] -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]none\[dq] -.RS 2 -.IP \[bu] 2 -Do not use metadata files at all. -.IP \[bu] 2 -Requires hash type \[dq]none\[dq]. -.RE -.IP \[bu] 2 -\[dq]simplejson\[dq] -.RS 2 -.IP \[bu] 2 -Simple JSON supports hash sums and chunk validation. -.IP \[bu] 2 -It has the following fields: ver, size, nchunks, md5, sha1. -.RE -.RE -.SS --chunker-fail-hard -.PP + +- Config: meta_format +- Env Var: RCLONE_CHUNKER_META_FORMAT +- Type: string +- Default: \[dq]simplejson\[dq] +- Examples: + - \[dq]none\[dq] + - Do not use metadata files at all. + - Requires hash type \[dq]none\[dq]. + - \[dq]simplejson\[dq] + - Simple JSON supports hash sums and chunk validation. + - + - It has the following fields: ver, size, nchunks, md5, sha1. + +#### --chunker-fail-hard + Choose how chunker should handle files with missing or invalid chunks. -.PP + Properties: -.IP \[bu] 2 -Config: fail_hard -.IP \[bu] 2 -Env Var: RCLONE_CHUNKER_FAIL_HARD -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]true\[dq] -.RS 2 -.IP \[bu] 2 -Report errors and abort current command. -.RE -.IP \[bu] 2 -\[dq]false\[dq] -.RS 2 -.IP \[bu] 2 -Warn user, skip incomplete file and proceed. -.RE -.RE -.SS --chunker-transactions -.PP + +- Config: fail_hard +- Env Var: RCLONE_CHUNKER_FAIL_HARD +- Type: bool +- Default: false +- Examples: + - \[dq]true\[dq] + - Report errors and abort current command. + - \[dq]false\[dq] + - Warn user, skip incomplete file and proceed. + +#### --chunker-transactions + Choose how chunker should handle temporary files during transactions. -.PP + Properties: -.IP \[bu] 2 -Config: transactions -.IP \[bu] 2 -Env Var: RCLONE_CHUNKER_TRANSACTIONS -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]rename\[dq] -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]rename\[dq] -.RS 2 -.IP \[bu] 2 -Rename temporary files after a successful transaction. -.RE -.IP \[bu] 2 -\[dq]norename\[dq] -.RS 2 -.IP \[bu] 2 -Leave temporary file names and write transaction ID to metadata file. -.IP \[bu] 2 -Metadata is required for no rename transactions (meta format cannot be -\[dq]none\[dq]). -.IP \[bu] 2 -If you are using norename transactions you should be careful not to -downgrade Rclone -.IP \[bu] 2 -as older versions of Rclone don\[aq]t support this transaction style and -will misinterpret -.IP \[bu] 2 -files manipulated by norename transactions. -.IP \[bu] 2 -This method is EXPERIMENTAL, don\[aq]t use on production systems. -.RE -.IP \[bu] 2 -\[dq]auto\[dq] -.RS 2 -.IP \[bu] 2 -Rename or norename will be used depending on capabilities of the -backend. -.IP \[bu] 2 -If meta format is set to \[dq]none\[dq], rename transactions will always -be used. -.IP \[bu] 2 -This method is EXPERIMENTAL, don\[aq]t use on production systems. -.RE -.RE -.SH Citrix ShareFile -.PP -Citrix ShareFile (https://sharefile.com) is a secure file sharing and -transfer service aimed as business. -.SS Configuration -.PP + +- Config: transactions +- Env Var: RCLONE_CHUNKER_TRANSACTIONS +- Type: string +- Default: \[dq]rename\[dq] +- Examples: + - \[dq]rename\[dq] + - Rename temporary files after a successful transaction. + - \[dq]norename\[dq] + - Leave temporary file names and write transaction ID to metadata file. + - Metadata is required for no rename transactions (meta format cannot be \[dq]none\[dq]). 
+ - If you are using norename transactions you should be careful not to downgrade Rclone + - as older versions of Rclone don\[aq]t support this transaction style and will misinterpret + - files manipulated by norename transactions. + - This method is EXPERIMENTAL, don\[aq]t use on production systems. + - \[dq]auto\[dq] + - Rename or norename will be used depending on capabilities of the backend. + - If meta format is set to \[dq]none\[dq], rename transactions will always be used. + - This method is EXPERIMENTAL, don\[aq]t use on production systems. + + + +# Citrix ShareFile + +[Citrix ShareFile](https://sharefile.com) is a secure file sharing and transfer service aimed as business. + +## Configuration + The initial setup for Citrix ShareFile involves getting a token from -Citrix ShareFile which you can in your browser. -\f[C]rclone config\f[R] walks you through it. -.PP -Here is an example of how to make a remote called \f[C]remote\f[R]. -First run: -.IP -.nf -\f[C] - rclone config -\f[R] -.fi -.PP +Citrix ShareFile which you can in your browser. \[ga]rclone config\[ga] walks you +through it. + +Here is an example of how to make a remote called \[ga]remote\[ga]. First run: + + rclone config + This will guide you through an interactive setup process: -.IP -.nf -\f[C] +\f[R] +.fi +.PP No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -name> remote -Type of storage to configure. -Enter a string value. Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value -XX / Citrix Sharefile - \[rs] \[dq]sharefile\[dq] -Storage> sharefile -** See help for sharefile backend at: https://rclone.org/sharefile/ ** - +n) New remote s) Set configuration password q) Quit config n/s/q> n +name> remote Type of storage to configure. +Enter a string value. +Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value XX / Citrix +Sharefile \ \[dq]sharefile\[dq] Storage> sharefile ** See help for +sharefile backend at: https://rclone.org/sharefile/ ** +.PP ID of the root folder - -Leave blank to access \[dq]Personal Folders\[dq]. You can use one of the -standard values here or any folder ID (long hex number ID). -Enter a string value. Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value - 1 / Access the Personal Folders. (Default) - \[rs] \[dq]\[dq] - 2 / Access the Favorites folder. - \[rs] \[dq]favorites\[dq] - 3 / Access all the shared folders. - \[rs] \[dq]allshared\[dq] - 4 / Access all the individual connectors. - \[rs] \[dq]connectors\[dq] - 5 / Access the home, favorites, and shared folders as well as the connectors. - \[rs] \[dq]top\[dq] -root_folder_id> -Edit advanced config? (y/n) -y) Yes -n) No -y/n> n -Remote config -Use web browser to automatically authenticate rclone with remote? - * Say Y if the machine running rclone has a web browser you can use - * Say N if running rclone on a (remote) machine without web browser access -If not sure try Y. If Y failed, try N. -y) Yes -n) No -y/n> y -If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth?state=XXX -Log in and authorize rclone for access -Waiting for code... 
-Got code --------------------- -[remote] -type = sharefile -endpoint = https://XXX.sharefile.com -token = {\[dq]access_token\[dq]:\[dq]XXX\[dq],\[dq]token_type\[dq]:\[dq]bearer\[dq],\[dq]refresh_token\[dq]:\[dq]XXX\[dq],\[dq]expiry\[dq]:\[dq]2019-09-30T19:41:45.878561877+01:00\[dq]} --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -\f[R] -.fi -.PP -See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. -.PP -Note that rclone runs a webserver on your local machine to collect the -token as returned from Citrix ShareFile. -This only runs from the moment it opens your browser to the moment you -get back the verification code. -This is on \f[C]http://127.0.0.1:53682/\f[R] and this it may require you -to unblock it temporarily if you are running a host firewall. -.PP -Once configured you can then use \f[C]rclone\f[R] like this, -.PP -List directories in top level of your ShareFile -.IP -.nf -\f[C] -rclone lsd remote: -\f[R] -.fi -.PP -List all the files in your ShareFile -.IP -.nf -\f[C] -rclone ls remote: -\f[R] -.fi -.PP -To copy a local directory to an ShareFile directory called backup -.IP -.nf -\f[C] -rclone copy /home/source remote:backup -\f[R] -.fi -.PP -Paths may be as deep as required, e.g. -\f[C]remote:directory/subdirectory\f[R]. -.SS Modified time and hashes -.PP -ShareFile allows modification times to be set on objects accurate to 1 -second. -These will be used to detect whether objects need syncing or not. -.PP -ShareFile supports MD5 type hashes, so you can use the -\f[C]--checksum\f[R] flag. -.SS Transfers -.PP -For files above 128 MiB rclone will use a chunked transfer. -Rclone will upload up to \f[C]--transfers\f[R] chunks at the same time -(shared among all the multipart uploads). -Chunks are buffered in memory and are normally 64 MiB so increasing -\f[C]--transfers\f[R] will increase memory use. -.SS Restricted filename characters -.PP -In addition to the default restricted characters -set (https://rclone.org/overview/#restricted-characters) the following -characters are also replaced: -.PP -.TS -tab(@); -l c c. -T{ -Character -T}@T{ -Value -T}@T{ -Replacement -T} -_ -T{ -\[rs] -T}@T{ -0x5C -T}@T{ -\[uFF3C] -T} -T{ -* -T}@T{ -0x2A -T}@T{ -\[uFF0A] -T} -T{ -< -T}@T{ -0x3C -T}@T{ -\[uFF1C] -T} -T{ -> -T}@T{ -0x3E -T}@T{ -\[uFF1E] -T} -T{ -? -T}@T{ -0x3F -T}@T{ -\[uFF1F] -T} -T{ -: -T}@T{ -0x3A -T}@T{ -\[uFF1A] -T} -T{ -| -T}@T{ -0x7C -T}@T{ -\[uFF5C] -T} -T{ -\[dq] -T}@T{ -0x22 -T}@T{ -\[uFF02] -T} -.TE -.PP -File names can also not start or end with the following characters. -These only get replaced if they are the first or last character in the -name: -.PP -.TS -tab(@); -l c c. -T{ -Character -T}@T{ -Value -T}@T{ -Replacement -T} -_ -T{ -SP -T}@T{ -0x20 -T}@T{ -\[u2420] -T} -T{ -\&. -T}@T{ -0x2E -T}@T{ -\[uFF0E] -T} -.TE -.PP -Invalid UTF-8 bytes will also be -replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t -be used in JSON strings. -.SS Standard options -.PP -Here are the Standard options specific to sharefile (Citrix Sharefile). -.SS --sharefile-root-folder-id -.PP -ID of the root folder. .PP Leave blank to access \[dq]Personal Folders\[dq]. You can use one of the standard values here or any folder ID (long hex number ID). -.PP +Enter a string value. +Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value 1 / Access the +Personal Folders. 
+(Default) \ \[dq]\[dq] 2 / Access the Favorites folder. +\ \[dq]favorites\[dq] 3 / Access all the shared folders. +\ \[dq]allshared\[dq] 4 / Access all the individual connectors. +\ \[dq]connectors\[dq] 5 / Access the home, favorites, and shared +folders as well as the connectors. +\ \[dq]top\[dq] root_folder_id> Edit advanced config? +(y/n) y) Yes n) No y/n> n Remote config Use web browser to automatically +authenticate rclone with remote? +* Say Y if the machine running rclone has a web browser you can use * +Say N if running rclone on a (remote) machine without web browser access +If not sure try Y. +If Y failed, try N. +y) Yes n) No y/n> y If your browser doesn\[aq]t open automatically go to +the following link: http://127.0.0.1:53682/auth?state=XXX Log in and +authorize rclone for access Waiting for code... +Got code -------------------- [remote] type = sharefile endpoint = +https://XXX.sharefile.com token = +{\[dq]access_token\[dq]:\[dq]XXX\[dq],\[dq]token_type\[dq]:\[dq]bearer\[dq],\[dq]refresh_token\[dq]:\[dq]XXX\[dq],\[dq]expiry\[dq]:\[dq]2019-09-30T19:41:45.878561877+01:00\[dq]} +-------------------- y) Yes this is OK e) Edit this remote d) Delete +this remote y/e/d> y +.IP +.nf +\f[C] +See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a +machine with no Internet browser available. + +Note that rclone runs a webserver on your local machine to collect the +token as returned from Citrix ShareFile. This only runs from the moment it opens +your browser to the moment you get back the verification code. This +is on \[ga]http://127.0.0.1:53682/\[ga] and this it may require you to unblock +it temporarily if you are running a host firewall. + +Once configured you can then use \[ga]rclone\[ga] like this, + +List directories in top level of your ShareFile + + rclone lsd remote: + +List all the files in your ShareFile + + rclone ls remote: + +To copy a local directory to an ShareFile directory called backup + + rclone copy /home/source remote:backup + +Paths may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga]. + +### Modified time and hashes + +ShareFile allows modification times to be set on objects accurate to 1 +second. These will be used to detect whether objects need syncing or +not. + +ShareFile supports MD5 type hashes, so you can use the \[ga]--checksum\[ga] +flag. + +### Transfers + +For files above 128 MiB rclone will use a chunked transfer. Rclone will +upload up to \[ga]--transfers\[ga] chunks at the same time (shared among all +the multipart uploads). Chunks are buffered in memory and are +normally 64 MiB so increasing \[ga]--transfers\[ga] will increase memory use. + +### Restricted filename characters + +In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) +the following characters are also replaced: + +| Character | Value | Replacement | +| --------- |:-----:|:-----------:| +| \[rs]\[rs] | 0x5C | \[uFF3C] | +| * | 0x2A | \[uFF0A] | +| < | 0x3C | \[uFF1C] | +| > | 0x3E | \[uFF1E] | +| ? | 0x3F | \[uFF1F] | +| : | 0x3A | \[uFF1A] | +| \[rs]| | 0x7C | \[uFF5C] | +| \[dq] | 0x22 | \[uFF02] | + +File names can also not start or end with the following characters. +These only get replaced if they are the first or last character in the +name: + +| Character | Value | Replacement | +| --------- |:-----:|:-----------:| +| SP | 0x20 | \[u2420] | +| . 
| 0x2E | \[uFF0E] | + +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), +as they can\[aq]t be used in JSON strings. + + +### Standard options + +Here are the Standard options specific to sharefile (Citrix Sharefile). + +#### --sharefile-client-id + +OAuth Client Id. + +Leave blank normally. + Properties: -.IP \[bu] 2 -Config: root_folder_id -.IP \[bu] 2 -Env Var: RCLONE_SHAREFILE_ROOT_FOLDER_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]\[dq] -.RS 2 -.IP \[bu] 2 -Access the Personal Folders (default). -.RE -.IP \[bu] 2 -\[dq]favorites\[dq] -.RS 2 -.IP \[bu] 2 -Access the Favorites folder. -.RE -.IP \[bu] 2 -\[dq]allshared\[dq] -.RS 2 -.IP \[bu] 2 -Access all the shared folders. -.RE -.IP \[bu] 2 -\[dq]connectors\[dq] -.RS 2 -.IP \[bu] 2 -Access all the individual connectors. -.RE -.IP \[bu] 2 -\[dq]top\[dq] -.RS 2 -.IP \[bu] 2 -Access the home, favorites, and shared folders as well as the -connectors. -.RE -.RE -.SS Advanced options -.PP + +- Config: client_id +- Env Var: RCLONE_SHAREFILE_CLIENT_ID +- Type: string +- Required: false + +#### --sharefile-client-secret + +OAuth Client Secret. + +Leave blank normally. + +Properties: + +- Config: client_secret +- Env Var: RCLONE_SHAREFILE_CLIENT_SECRET +- Type: string +- Required: false + +#### --sharefile-root-folder-id + +ID of the root folder. + +Leave blank to access \[dq]Personal Folders\[dq]. You can use one of the +standard values here or any folder ID (long hex number ID). + +Properties: + +- Config: root_folder_id +- Env Var: RCLONE_SHAREFILE_ROOT_FOLDER_ID +- Type: string +- Required: false +- Examples: + - \[dq]\[dq] + - Access the Personal Folders (default). + - \[dq]favorites\[dq] + - Access the Favorites folder. + - \[dq]allshared\[dq] + - Access all the shared folders. + - \[dq]connectors\[dq] + - Access all the individual connectors. + - \[dq]top\[dq] + - Access the home, favorites, and shared folders as well as the connectors. + +### Advanced options + Here are the Advanced options specific to sharefile (Citrix Sharefile). -.SS --sharefile-upload-cutoff -.PP + +#### --sharefile-token + +OAuth Access Token as a JSON blob. + +Properties: + +- Config: token +- Env Var: RCLONE_SHAREFILE_TOKEN +- Type: string +- Required: false + +#### --sharefile-auth-url + +Auth server URL. + +Leave blank to use the provider defaults. + +Properties: + +- Config: auth_url +- Env Var: RCLONE_SHAREFILE_AUTH_URL +- Type: string +- Required: false + +#### --sharefile-token-url + +Token server url. + +Leave blank to use the provider defaults. + +Properties: + +- Config: token_url +- Env Var: RCLONE_SHAREFILE_TOKEN_URL +- Type: string +- Required: false + +#### --sharefile-upload-cutoff + Cutoff for switching to multipart upload. -.PP + Properties: -.IP \[bu] 2 -Config: upload_cutoff -.IP \[bu] 2 -Env Var: RCLONE_SHAREFILE_UPLOAD_CUTOFF -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 128Mi -.SS --sharefile-chunk-size -.PP + +- Config: upload_cutoff +- Env Var: RCLONE_SHAREFILE_UPLOAD_CUTOFF +- Type: SizeSuffix +- Default: 128Mi + +#### --sharefile-chunk-size + Upload chunk size. -.PP + Must a power of 2 >= 256k. -.PP -Making this larger will improve performance, but note that each chunk is -buffered in memory one per transfer. -.PP + +Making this larger will improve performance, but note that each chunk +is buffered in memory one per transfer. + Reducing this will reduce memory usage but decrease performance. 
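+
+For example (the paths and value are illustrative), a transfer on a fast
+connection with plenty of RAM might use larger chunks:
+
+    rclone copy --sharefile-chunk-size 128M /home/source remote:backup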
-.PP + Properties: -.IP \[bu] 2 -Config: chunk_size -.IP \[bu] 2 -Env Var: RCLONE_SHAREFILE_CHUNK_SIZE -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 64Mi -.SS --sharefile-endpoint -.PP + +- Config: chunk_size +- Env Var: RCLONE_SHAREFILE_CHUNK_SIZE +- Type: SizeSuffix +- Default: 64Mi + +#### --sharefile-endpoint + Endpoint for API calls. -.PP -This is usually auto discovered as part of the oauth process, but can be -set manually to something like: https://XXX.sharefile.com -.PP + +This is usually auto discovered as part of the oauth process, but can +be set manually to something like: https://XXX.sharefile.com + + Properties: -.IP \[bu] 2 -Config: endpoint -.IP \[bu] 2 -Env Var: RCLONE_SHAREFILE_ENDPOINT -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --sharefile-encoding -.PP + +- Config: endpoint +- Env Var: RCLONE_SHAREFILE_ENDPOINT +- Type: string +- Required: false + +#### --sharefile-encoding + The encoding for the backend. -.PP -See the encoding section in the -overview (https://rclone.org/overview/#encoding) for more info. -.PP + +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + Properties: -.IP \[bu] 2 -Config: encoding -.IP \[bu] 2 -Env Var: RCLONE_SHAREFILE_ENCODING -.IP \[bu] 2 -Type: MultiEncoder -.IP \[bu] 2 -Default: -Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot -.SS Limitations -.PP -Note that ShareFile is case insensitive so you can\[aq]t have a file -called \[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. -.PP + +- Config: encoding +- Env Var: RCLONE_SHAREFILE_ENCODING +- Type: MultiEncoder +- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot + + +## Limitations + +Note that ShareFile is case insensitive so you can\[aq]t have a file called +\[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. + ShareFile only supports filenames up to 256 characters in length. -.PP -\f[C]rclone about\f[R] is not supported by the Citrix ShareFile backend. -Backends without this capability cannot determine free space for an -rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member -of an rclone union remote. -.PP -See List of backends that do not support rclone -about (https://rclone.org/overview/#optional-features) and rclone -about (https://rclone.org/commands/rclone_about/) -.SH Crypt -.PP -Rclone \f[C]crypt\f[R] remotes encrypt and decrypt other remotes. -.PP -A remote of type \f[C]crypt\f[R] does not access a storage -system (https://rclone.org/overview/) directly, but instead wraps -another remote, which in turn accesses the storage system. -This is similar to how alias (https://rclone.org/alias/), -union (https://rclone.org/union/), chunker (https://rclone.org/chunker/) -and a few others work. -It makes the usage very flexible, as you can add a layer, in this case -an encryption layer, on top of any other backend, even in multiple -layers. -Rclone\[aq]s functionality can be used as with any other remote, for -example you can mount (https://rclone.org/commands/rclone_mount/) a -crypt remote. -.PP + +\[ga]rclone about\[ga] is not supported by the Citrix ShareFile backend. Backends without +this capability cannot determine free space for an rclone mount or +use policy \[ga]mfs\[ga] (most free space) as a member of an rclone union +remote. 
+
+See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
+
+# Crypt
+
+Rclone \[ga]crypt\[ga] remotes encrypt and decrypt other remotes.
+
+A remote of type \[ga]crypt\[ga] does not access a [storage system](https://rclone.org/overview/)
+directly, but instead wraps another remote, which in turn accesses
+the storage system. This is similar to how [alias](https://rclone.org/alias/),
+[union](https://rclone.org/union/), [chunker](https://rclone.org/chunker/)
+and a few others work. It makes the usage very flexible, as you can
+add a layer, in this case an encryption layer, on top of any other
+backend, even in multiple layers. Rclone\[aq]s functionality
+can be used as with any other remote, for example you can
+[mount](https://rclone.org/commands/rclone_mount/) a crypt remote.
+
 Accessing a storage system through a crypt remote realizes client-side
 encryption, which makes it safe to keep your data in a location you do
 not trust will not get compromised.
-When working against the \f[C]crypt\f[R] remote, rclone will
-automatically encrypt (before uploading) and decrypt (after downloading)
-on your local system as needed on the fly, leaving the data encrypted at
-rest in the wrapped remote.
-If you access the storage system using an application other than rclone,
-or access the wrapped remote directly using rclone, there will not be
-any encryption/decryption: Downloading existing content will just give
-you the encrypted (scrambled) format, and anything you upload will
-\f[I]not\f[R] become encrypted.
-.PP
-The encryption is a secret-key encryption (also called symmetric key
-encryption) algorithm, where a password (or pass phrase) is used to
-generate real encryption key.
+When working against the \[ga]crypt\[ga] remote, rclone will automatically
+encrypt (before uploading) and decrypt (after downloading) on your local
+system as needed on the fly, leaving the data encrypted at rest in the
+wrapped remote. If you access the storage system using an application
+other than rclone, or access the wrapped remote directly using rclone,
+there will not be any encryption/decryption: Downloading existing content
+will just give you the encrypted (scrambled) format, and anything you
+upload will *not* become encrypted.
+
+The encryption is a secret-key encryption (also called symmetric key encryption)
+algorithm, where a password (or pass phrase) is used to generate the real encryption key.
 The password can be supplied by the user, or you may choose to let rclone
-generate one.
-It will be stored in the configuration file, in a lightly obscured form.
-If you are in an environment where you are not able to keep your
-configuration secured, you should add configuration
-encryption (https://rclone.org/docs/#configuration-encryption) as
-protection.
-As long as you have this configuration file, you will be able to decrypt
-your data.
-Without the configuration file, as long as you remember the password (or
-keep it in a safe place), you can re-create the configuration and gain
-access to the existing data.
-You may also configure a corresponding remote in a different
-installation to access the same data.
-See below for guidance to changing password.
-.PP
-Encryption uses cryptographic
-salt (https://en.wikipedia.org/wiki/Salt_(cryptography)), to permute the
-encryption key so that the same string may be encrypted in different
-ways.
-When configuring the crypt remote it is optional to enter a salt, or to
-let rclone generate a unique salt.
-If omitted, rclone uses a built-in unique string.
-Normally in cryptography, the salt is stored together with the encrypted
-content, and do not have to be memorized by the user.
-This is not the case in rclone, because rclone does not store any
-additional information on the remotes.
-Use of custom salt is effectively a second password that must be
-memorized.
-.PP
-File content encryption is performed using NaCl
-SecretBox (https://godoc.org/golang.org/x/crypto/nacl/secretbox), based
-on XSalsa20 cipher and Poly1305 for integrity.
-Names (file- and directory names) are also encrypted by default, but
-this has some implications and is therefore possible to be turned off.
-.SS Configuration
-.PP
-Here is an example of how to make a remote called \f[C]secret\f[R].
-.PP
-To use \f[C]crypt\f[R], first set up the underlying remote.
-Follow the \f[C]rclone config\f[R] instructions for the specific
-backend.
-.PP
+generate one. It will be stored in the configuration file, in a lightly obscured form.
+If you are in an environment where you are not able to keep your configuration
+secured, you should add
+[configuration encryption](https://rclone.org/docs/#configuration-encryption)
+as protection. As long as you have this configuration file, you will be able to
+decrypt your data. Without the configuration file, as long as you remember
+the password (or keep it in a safe place), you can re-create the configuration
+and gain access to the existing data. You may also configure a corresponding
+remote in a different installation to access the same data.
+See below for guidance to [changing password](#changing-password).
+
+Encryption uses [cryptographic salt](https://en.wikipedia.org/wiki/Salt_(cryptography)),
+to permute the encryption key so that the same string may be encrypted in
+different ways. When configuring the crypt remote it is optional to enter a salt,
+or to let rclone generate a unique salt. If omitted, rclone uses a built-in unique string.
+Normally in cryptography, the salt is stored together with the encrypted content,
+and does not have to be memorized by the user. This is not the case in rclone,
+because rclone does not store any additional information on the remotes. Use of a
+custom salt is effectively a second password that must be memorized.
+
+[File content](#file-encryption) encryption is performed using
+[NaCl SecretBox](https://godoc.org/golang.org/x/crypto/nacl/secretbox),
+based on XSalsa20 cipher and Poly1305 for integrity.
+[Names](#name-encryption) (file- and directory names) are also encrypted
+by default, but this has some implications, and it is therefore
+possible to turn it off.
+
+## Configuration
+
+Here is an example of how to make a remote called \[ga]secret\[ga].
+
+To use \[ga]crypt\[ga], first set up the underlying remote. Follow the
+\[ga]rclone config\[ga] instructions for the specific backend.
+
 Before configuring the crypt remote, check the underlying remote is
-working.
-In this example the underlying remote is called \f[C]remote\f[R].
-We will configure a path \f[C]path\f[R] within this remote to contain
-the encrypted content.
+working. In this example the underlying remote is called \[ga]remote\[ga].
+We will configure a path \[ga]path\[ga] within this remote to contain the
+encrypted content. 
Anything inside \[ga]remote:path\[ga] will be encrypted +and anything outside will not. + +Configure \[ga]crypt\[ga] using \[ga]rclone config\[ga]. In this example the \[ga]crypt\[ga] +remote is called \[ga]secret\[ga], to differentiate it from the underlying +\[ga]remote\[ga]. + +When you are done you can use the crypt remote named \[ga]secret\[ga] just +as you would with any other remote, e.g. \[ga]rclone copy D:\[rs]docs secret:\[rs]docs\[ga], +and rclone will encrypt and decrypt as needed on the fly. +If you access the wrapped remote \[ga]remote:path\[ga] directly you will bypass +the encryption, and anything you read will be in encrypted form, and +anything you write will be unencrypted. To avoid issues it is best to +configure a dedicated path for encrypted content, and access it +exclusively through a crypt remote. +\f[R] +.fi .PP -Configure \f[C]crypt\f[R] using \f[C]rclone config\f[R]. -In this example the \f[C]crypt\f[R] remote is called \f[C]secret\f[R], -to differentiate it from the underlying \f[C]remote\f[R]. -.PP -When you are done you can use the crypt remote named \f[C]secret\f[R] -just as you would with any other remote, e.g. -\f[C]rclone copy D:\[rs]docs secret:\[rs]docs\f[R], and rclone will -encrypt and decrypt as needed on the fly. -If you access the wrapped remote \f[C]remote:path\f[R] directly you will -bypass the encryption, and anything you read will be in encrypted form, -and anything you write will be unencrypted. -To avoid issues it is best to configure a dedicated path for encrypted -content, and access it exclusively through a crypt remote. -.IP -.nf -\f[C] No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -name> secret -Type of storage to configure. -Enter a string value. Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value -[snip] -XX / Encrypt/Decrypt a remote - \[rs] \[dq]crypt\[dq] -[snip] -Storage> crypt -** See help for crypt backend at: https://rclone.org/crypt/ ** - -Remote to encrypt/decrypt. -Normally should contain a \[aq]:\[aq] and a path, eg \[dq]myremote:path/to/dir\[dq], -\[dq]myremote:bucket\[dq] or maybe \[dq]myremote:\[dq] (not recommended). -Enter a string value. Press Enter for the default (\[dq]\[dq]). -remote> remote:path -How to encrypt the filenames. -Enter a string value. Press Enter for the default (\[dq]standard\[dq]). -Choose a number from below, or type in your own value. - / Encrypt the filenames. - 1 | See the docs for the details. - \[rs] \[dq]standard\[dq] - 2 / Very simple filename obfuscation. - \[rs] \[dq]obfuscate\[dq] - / Don\[aq]t encrypt the file names. - 3 | Adds a \[dq].bin\[dq] extension only. - \[rs] \[dq]off\[dq] -filename_encryption> -Option to either encrypt directory names or leave them intact. - -NB If filename_encryption is \[dq]off\[dq] then this option will do nothing. -Enter a boolean value (true or false). Press Enter for the default (\[dq]true\[dq]). -Choose a number from below, or type in your own value - 1 / Encrypt directory names. - \[rs] \[dq]true\[dq] - 2 / Don\[aq]t encrypt directory names, leave them intact. - \[rs] \[dq]false\[dq] -directory_name_encryption> -Password or pass phrase for encryption. -y) Yes type in my own password -g) Generate random password -y/g> y -Enter the password: -password: -Confirm the password: -password: -Password or pass phrase for salt. Optional but recommended. -Should be different to the previous password. 
-y) Yes type in my own password -g) Generate random password -n) No leave this optional password blank (default) -y/g/n> g -Password strength in bits. -64 is just about memorable -128 is secure -1024 is the maximum -Bits> 128 -Your password is: JAsJvRcgR-_veXNfy_sGmQ -Use this password? Please note that an obscured version of this -password (and not the password itself) will be stored under your -configuration file, so keep this generated password in a safe place. -y) Yes (default) -n) No -y/n> -Edit advanced config? (y/n) -y) Yes -n) No (default) -y/n> -Remote config --------------------- -[secret] -type = crypt -remote = remote:path -password = *** ENCRYPTED *** -password2 = *** ENCRYPTED *** --------------------- -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> -\f[R] -.fi -.PP -\f[B]Important\f[R] The crypt password stored in \f[C]rclone.conf\f[R] -is lightly obscured. -That only protects it from cursory inspection. -It is not secure unless configuration -encryption (https://rclone.org/docs/#configuration-encryption) of -\f[C]rclone.conf\f[R] is specified. -.PP -A long passphrase is recommended, or \f[C]rclone config\f[R] can -generate a random one. -.PP -The obscured password is created using AES-CTR with a static key. -The salt is stored verbatim at the beginning of the obscured password. -This static key is shared between all versions of rclone. -.PP -If you reconfigure rclone with the same passwords/passphrases elsewhere -it will be compatible, but the obscured version will be different due to -the different salt. -.PP -Rclone does not encrypt -.IP \[bu] 2 -file length - this can be calculated within 16 bytes -.IP \[bu] 2 -modification time - used for syncing -.SS Specifying the remote -.PP -When configuring the remote to encrypt/decrypt, you may specify any -string that rclone accepts as a source/destination of other commands. -.PP -The primary use case is to specify the path into an already configured -remote (e.g. -\f[C]remote:path/to/dir\f[R] or \f[C]remote:bucket\f[R]), such that data -in a remote untrusted location can be stored encrypted. -.PP -You may also specify a local filesystem path, such as -\f[C]/path/to/dir\f[R] on Linux, \f[C]C:\[rs]path\[rs]to\[rs]dir\f[R] on -Windows. -By creating a crypt remote pointing to such a local filesystem path, you -can use rclone as a utility for pure local file encryption, for example -to keep encrypted files on a removable USB drive. -.PP -\f[B]Note\f[R]: A string which do not contain a \f[C]:\f[R] will by -rclone be treated as a relative path in the local filesystem. -For example, if you enter the name \f[C]remote\f[R] without the trailing -\f[C]:\f[R], it will be treated as a subdirectory of the current -directory with name \[dq]remote\[dq]. -.PP -If a path \f[C]remote:path/to/dir\f[R] is specified, rclone stores -encrypted files in \f[C]path/to/dir\f[R] on the remote. -With file name encryption, files saved to -\f[C]secret:subdir/subfile\f[R] are stored in the unencrypted path -\f[C]path/to/dir\f[R] but the \f[C]subdir/subpath\f[R] element is -encrypted. -.PP -The path you specify does not have to exist, rclone will create it when -needed. -.PP -If you intend to use the wrapped remote both directly for keeping -unencrypted content, as well as through a crypt remote for encrypted -content, it is recommended to point the crypt remote to a separate -directory within the wrapped remote. -If you use a bucket-based storage system (e.g. 
-Swift, S3, Google Compute Storage, B2) it is generally advisable to wrap -the crypt remote around a specific bucket (\f[C]s3:bucket\f[R]). -If wrapping around the entire root of the storage (\f[C]s3:\f[R]), and -use the optional file name encryption, rclone will encrypt the bucket -name. -.SS Changing password -.PP -Should the password, or the configuration file containing a lightly -obscured form of the password, be compromised, you need to re-encrypt -your data with a new password. -Since rclone uses secret-key encryption, where the encryption key is -generated directly from the password kept on the client, it is not -possible to change the password/key of already encrypted content. -Just changing the password configured for an existing crypt remote means -you will no longer able to decrypt any of the previously encrypted -content. -The only possibility is to re-upload everything via a crypt remote -configured with your new password. -.PP -Depending on the size of your data, your bandwidth, storage quota etc, -there are different approaches you can take: - If you have everything in -a different location, for example on your local system, you could remove -all of the prior encrypted files, change the password for your -configured crypt remote (or delete and re-create the crypt -configuration), and then re-upload everything from the alternative -location. -- If you have enough space on the storage system you can create a new -crypt remote pointing to a separate directory on the same backend, and -then use rclone to copy everything from the original crypt remote to the -new, effectively decrypting everything on the fly using the old password -and re-encrypting using the new password. -When done, delete the original crypt remote directory and finally the -rclone crypt configuration with the old password. -All data will be streamed from the storage system and back, so you will -get half the bandwidth and be charged twice if you have upload and -download quota on the storage system. -.PP -\f[B]Note\f[R]: A security problem related to the random password -generator was fixed in rclone version 1.53.3 (released 2020-11-19). -Passwords generated by rclone config in version 1.49.0 (released -2019-08-26) to 1.53.2 (released 2020-10-26) are not considered secure -and should be changed. -If you made up your own password, or used rclone version older than -1.49.0 or newer than 1.53.2 to generate it, you are \f[I]not\f[R] -affected by this issue. -See issue #4783 (https://github.com/rclone/rclone/issues/4783) for more -details, and a tool you can use to check if you are affected. -.SS Example -.PP -Create the following file structure using \[dq]standard\[dq] file name -encryption. 
-.IP -.nf -\f[C] -plaintext/ -\[u251C]\[u2500]\[u2500] file0.txt -\[u251C]\[u2500]\[u2500] file1.txt -\[u2514]\[u2500]\[u2500] subdir - \[u251C]\[u2500]\[u2500] file2.txt - \[u251C]\[u2500]\[u2500] file3.txt - \[u2514]\[u2500]\[u2500] subsubdir - \[u2514]\[u2500]\[u2500] file4.txt -\f[R] -.fi -.PP -Copy these to the remote, and list them -.IP -.nf -\f[C] -$ rclone -q copy plaintext secret: -$ rclone -q ls secret: - 7 file1.txt - 6 file0.txt - 8 subdir/file2.txt - 10 subdir/subsubdir/file4.txt - 9 subdir/file3.txt -\f[R] -.fi -.PP -The crypt remote looks like -.IP -.nf -\f[C] -$ rclone -q ls remote:path - 55 hagjclgavj2mbiqm6u6cnjjqcg - 54 v05749mltvv1tf4onltun46gls - 57 86vhrsv86mpbtd3a0akjuqslj8/dlj7fkq4kdq72emafg7a7s41uo - 58 86vhrsv86mpbtd3a0akjuqslj8/7uu829995du6o42n32otfhjqp4/b9pausrfansjth5ob3jkdqd4lc - 56 86vhrsv86mpbtd3a0akjuqslj8/8njh1sk437gttmep3p70g81aps -\f[R] -.fi -.PP -The directory structure is preserved -.IP -.nf -\f[C] -$ rclone -q ls secret:subdir - 8 file2.txt - 9 file3.txt - 10 subsubdir/file4.txt -\f[R] -.fi -.PP -Without file name encryption \f[C].bin\f[R] extensions are added to -underlying names. -This prevents the cloud provider attempting to interpret file content. -.IP -.nf -\f[C] -$ rclone -q ls remote:path - 54 file0.txt.bin - 57 subdir/file3.txt.bin - 56 subdir/file2.txt.bin - 58 subdir/subsubdir/file4.txt.bin - 55 file1.txt.bin -\f[R] -.fi -.SS File name encryption modes -.PP -Off -.IP \[bu] 2 -doesn\[aq]t hide file names or directory structure -.IP \[bu] 2 -allows for longer file names (\[ti]246 characters) -.IP \[bu] 2 -can use sub paths and copy single files -.PP -Standard -.IP \[bu] 2 -file names encrypted -.IP \[bu] 2 -file names can\[aq]t be as long (\[ti]143 characters) -.IP \[bu] 2 -can use sub paths and copy single files -.IP \[bu] 2 -directory structure visible -.IP \[bu] 2 -identical files names will have identical uploaded names -.IP \[bu] 2 -can use shortcuts to shorten the directory recursion -.PP -Obfuscation -.PP -This is a simple \[dq]rotate\[dq] of the filename, with each file having -a rot distance based on the filename. -Rclone stores the distance at the beginning of the filename. -A file called \[dq]hello\[dq] may become \[dq]53.jgnnq\[dq]. -.PP -Obfuscation is not a strong encryption of filenames, but hinders -automated scanning tools picking up on filename patterns. -It is an intermediate between \[dq]off\[dq] and \[dq]standard\[dq] which -allows for longer path segment names. -.PP -There is a possibility with some unicode based filenames that the -obfuscation is weak and may map lower case characters to upper case -equivalents. -.PP -Obfuscation cannot be relied upon for strong protection. -.IP \[bu] 2 -file names very lightly obfuscated -.IP \[bu] 2 -file names can be longer than standard encryption -.IP \[bu] 2 -can use sub paths and copy single files -.IP \[bu] 2 -directory structure visible -.IP \[bu] 2 -identical files names will have identical uploaded names -.PP -Cloud storage systems have limits on file name length and total path -length which rclone is more likely to breach using \[dq]Standard\[dq] -file name encryption. -Where file names are less than 156 characters in length issues should -not be encountered, irrespective of cloud storage provider. -.PP -An experimental advanced option \f[C]filename_encoding\f[R] is now -provided to address this problem to a certain degree. -For cloud storage systems with case sensitive file names (e.g. -Google Drive), \f[C]base64\f[R] can be used to reduce file name length. 
-For cloud storage systems using UTF-16 to store file names internally
-(e.g.
-OneDrive, Dropbox), \f[C]base32768\f[R] can be used to drastically
-reduce file name length.
-.PP
-An alternative, future rclone file name encryption mode may tolerate
-backend provider path length limits.
-.SS Directory name encryption
-.PP
-Crypt offers the option of encrypting dir names or leaving them intact.
-There are two options:
-.PP
-True
-.PP
-Encrypts the whole file path including directory names Example:
-\f[C]1/12/123.txt\f[R] is encrypted to
-\f[C]p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0\f[R]
-.PP
-False
-.PP
-Only encrypts file names, skips directory names Example:
-\f[C]1/12/123.txt\f[R] is encrypted to
-\f[C]1/12/qgm4avr35m5loi1th53ato71v0\f[R]
-.SS Modified time and hashes
-.PP
-Crypt stores modification times using the underlying remote so support
-depends on that.
-.PP
-Hashes are not stored for crypt.
-However the data integrity is protected by an extremely strong crypto
-authenticator.
-.PP
-Use the \f[C]rclone cryptcheck\f[R] command to check the integrity of an
-encrypted remote instead of \f[C]rclone check\f[R] which can\[aq]t check
-the checksums properly.
-.SS Standard options
-.PP
-Here are the Standard options specific to crypt (Encrypt/Decrypt a
-remote).
-.SS --crypt-remote
+.IP
+.nf
+\f[C]
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> secret
+Type of storage to configure.
+Enter a string value. Press Enter for the default (\[dq]\[dq]).
+Choose a number from below, or type in your own value
+[snip]
+XX / Encrypt/Decrypt a remote
+   \[rs] \[dq]crypt\[dq]
+[snip]
+Storage> crypt
+** See help for crypt backend at: https://rclone.org/crypt/ **
+
+Remote to encrypt/decrypt.
+Normally should contain a \[aq]:\[aq] and a path, eg \[dq]myremote:path/to/dir\[dq],
+\[dq]myremote:bucket\[dq] or maybe \[dq]myremote:\[dq] (not recommended).
+Enter a string value. Press Enter for the default (\[dq]\[dq]).
+remote> remote:path
+How to encrypt the filenames.
+Enter a string value. Press Enter for the default (\[dq]standard\[dq]).
+Choose a number from below, or type in your own value.
+   / Encrypt the filenames.
+ 1 | See the docs for the details.
+   \[rs] \[dq]standard\[dq]
+ 2 / Very simple filename obfuscation.
+   \[rs] \[dq]obfuscate\[dq]
+   / Don\[aq]t encrypt the file names.
+ 3 | Adds a \[dq].bin\[dq] extension only.
+   \[rs] \[dq]off\[dq]
+filename_encryption>
+Option to either encrypt directory names or leave them intact.
+
+NB If filename_encryption is \[dq]off\[dq] then this option will do nothing.
-.PP
-Properties:
-.IP \[bu] 2
-Config: directory_name_encryption
-.IP \[bu] 2
-Env Var: RCLONE_CRYPT_DIRECTORY_NAME_ENCRYPTION
-.IP \[bu] 2
-Type: bool
-.IP \[bu] 2
-Default: true
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]true\[dq]
-.RS 2
-.IP \[bu] 2
-Encrypt directory names.
-.RE
-.IP \[bu] 2
-\[dq]false\[dq]
-.RS 2
-.IP \[bu] 2
-Don\[aq]t encrypt directory names, leave them intact.
-.RE
-.RE
-.SS --crypt-password
-.PP
-Password or pass phrase for encryption.
-.PP
-\f[B]NB\f[R] Input to this must be obscured - see rclone
-obscure (https://rclone.org/commands/rclone_obscure/).
-.PP
-Properties:
-.IP \[bu] 2
-Config: password
-.IP \[bu] 2
-Env Var: RCLONE_CRYPT_PASSWORD
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: true
-.SS --crypt-password2
-.PP
-Password or pass phrase for salt.
-.PP
+Enter a boolean value (true or false). Press Enter for the default (\[dq]true\[dq]).
+Choose a number from below, or type in your own value
+ 1 / Encrypt directory names.
+   \[rs] \[dq]true\[dq]
+ 2 / Don\[aq]t encrypt directory names, leave them intact.
+   \[rs] \[dq]false\[dq]
+directory_name_encryption>
+Password or pass phrase for encryption.
+y) Yes type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Password or pass phrase for salt. Optional but recommended.
+Should be different to the previous password.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank (default)
+y/g/n> g
+Password strength in bits.
+64 is just about memorable
+128 is secure
+1024 is the maximum
+Bits> 128
+Your password is: JAsJvRcgR-_veXNfy_sGmQ
+Use this password? Please note that an obscured version of this
+password (and not the password itself) will be stored under your
+configuration file, so keep this generated password in a safe place.
+y) Yes (default)
+n) No
+y/n>
+Edit advanced config? (y/n)
+y) Yes
+n) No (default)
+y/n>
+Remote config
+--------------------
+[secret]
+type = crypt
+remote = remote:path
+password = *** ENCRYPTED ***
+password2 = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d>
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+**Important** The crypt password stored in \[ga]rclone.conf\[ga] is lightly
+obscured. That only protects it from cursory inspection. It is not
+secure unless [configuration encryption](https://rclone.org/docs/#configuration-encryption) of \[ga]rclone.conf\[ga] is specified.
+
+A long passphrase is recommended, or \[ga]rclone config\[ga] can generate a
+random one.
+
+The obscured password is created using AES-CTR with a static key. The
+salt is stored verbatim at the beginning of the obscured password. This
+static key is shared between all versions of rclone.
+
+If you reconfigure rclone with the same passwords/passphrases
+elsewhere it will be compatible, but the obscured version will be different
+due to the different salt.
+
+Rclone does not encrypt
+
+  * file length - this can be calculated within 16 bytes
+  * modification time - used for syncing
+
+### Specifying the remote
+
+When configuring the remote to encrypt/decrypt, you may specify any
+string that rclone accepts as a source/destination of other commands.
+
+The primary use case is to specify the path into an already configured
+remote (e.g. \[ga]remote:path/to/dir\[ga] or \[ga]remote:bucket\[ga]), such that
+data in a remote untrusted location can be stored encrypted. 
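+
+As a minimal sketch, such a crypt remote can also be created
+non-interactively with [rclone config create](https://rclone.org/commands/rclone_config_create/)
+(the names here are hypothetical, and \[ga]--obscure\[ga] tells rclone that the
+passwords are supplied in plain text and need obscuring):
+
+    rclone config create secret crypt remote=remote:path/to/dir \[rs]
+        password=XXX password2=YYY --obscure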
+
+You may also specify a local filesystem path, such as
+\[ga]/path/to/dir\[ga] on Linux, \[ga]C:\[rs]path\[rs]to\[rs]dir\[ga] on Windows. By creating
+a crypt remote pointing to such a local filesystem path, you can
+use rclone as a utility for pure local file encryption, for example
+to keep encrypted files on a removable USB drive.
+
+**Note**: A string which does not contain a \[ga]:\[ga] will be treated by rclone
+as a relative path in the local filesystem. For example, if you enter
+the name \[ga]remote\[ga] without the trailing \[ga]:\[ga], it will be treated as
+a subdirectory of the current directory with name \[dq]remote\[dq].
+
+If a path \[ga]remote:path/to/dir\[ga] is specified, rclone stores encrypted
+files in \[ga]path/to/dir\[ga] on the remote. With file name encryption, files
+saved to \[ga]secret:subdir/subfile\[ga] are stored in the unencrypted path
+\[ga]path/to/dir\[ga] but the \[ga]subdir/subpath\[ga] element is encrypted.
+
+The path you specify does not have to exist, rclone will create
+it when needed.
+
+If you intend to use the wrapped remote both directly for keeping
+unencrypted content, as well as through a crypt remote for encrypted
+content, it is recommended to point the crypt remote to a separate
+directory within the wrapped remote. If you use a bucket-based storage
+system (e.g. Swift, S3, Google Compute Storage, B2) it is generally
+advisable to wrap the crypt remote around a specific bucket (\[ga]s3:bucket\[ga]).
+If wrapping around the entire root of the storage (\[ga]s3:\[ga]), and you use the
+optional file name encryption, rclone will encrypt the bucket name.
+
+### Changing password
+
+Should the password, or the configuration file containing a lightly obscured
+form of the password, be compromised, you need to re-encrypt your data with
+a new password. Since rclone uses secret-key encryption, where the encryption
+key is generated directly from the password kept on the client, it is not
+possible to change the password/key of already encrypted content. Just changing
+the password configured for an existing crypt remote means you will no longer
+be able to decrypt any of the previously encrypted content. The only possibility
+is to re-upload everything via a crypt remote configured with your new password.
+
+Depending on the size of your data, your bandwidth, storage quota etc, there are
+different approaches you can take:
+- If you have everything in a different location, for example on your local system,
+you could remove all of the prior encrypted files, change the password for your
+configured crypt remote (or delete and re-create the crypt configuration),
+and then re-upload everything from the alternative location.
+- If you have enough space on the storage system you can create a new crypt
+remote pointing to a separate directory on the same backend, and then use
+rclone to copy everything from the original crypt remote to the new,
+effectively decrypting everything on the fly using the old password and
+re-encrypting using the new password. When done, delete the original crypt
+remote directory and finally the rclone crypt configuration with the old password.
+All data will be streamed from the storage system and back, so you will
+get half the bandwidth and be charged twice if you have upload and download quota
+on the storage system.
+
+**Note**: A security problem related to the random password generator
+was fixed in rclone version 1.53.3 (released 2020-11-19). 
 Passwords generated
+by rclone config in version 1.49.0 (released 2019-08-26) to 1.53.2
+(released 2020-10-26) are not considered secure and should be changed.
+If you made up your own password, or used rclone version older than 1.49.0 or
+newer than 1.53.2 to generate it, you are *not* affected by this issue.
+See [issue #4783](https://github.com/rclone/rclone/issues/4783) for more
+details, and a tool you can use to check if you are affected.
+
+### Example
+
+Create the following file structure using \[dq]standard\[dq] file name
+encryption.
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+plaintext/
+\[u251C]\[u2500]\[u2500] file0.txt
+\[u251C]\[u2500]\[u2500] file1.txt
+\[u2514]\[u2500]\[u2500] subdir
+    \[u251C]\[u2500]\[u2500] file2.txt
+    \[u251C]\[u2500]\[u2500] file3.txt
+    \[u2514]\[u2500]\[u2500] subsubdir
+        \[u2514]\[u2500]\[u2500] file4.txt
+\f[R]
+.fi
+.PP
+Copy these to the remote, and list them
+.IP
+.nf
+\f[C]
+$ rclone -q copy plaintext secret:
+$ rclone -q ls secret:
+        7 file1.txt
+        6 file0.txt
+        8 subdir/file2.txt
+       10 subdir/subsubdir/file4.txt
+        9 subdir/file3.txt
+\f[R]
+.fi
+.PP
+The crypt remote looks like
+.IP
+.nf
+\f[C]
+$ rclone -q ls remote:path
+       55 hagjclgavj2mbiqm6u6cnjjqcg
+       54 v05749mltvv1tf4onltun46gls
+       57 86vhrsv86mpbtd3a0akjuqslj8/dlj7fkq4kdq72emafg7a7s41uo
+       58 86vhrsv86mpbtd3a0akjuqslj8/7uu829995du6o42n32otfhjqp4/b9pausrfansjth5ob3jkdqd4lc
+       56 86vhrsv86mpbtd3a0akjuqslj8/8njh1sk437gttmep3p70g81aps
+\f[R]
+.fi
+.PP
+The directory structure is preserved
+.IP
+.nf
+\f[C]
+$ rclone -q ls secret:subdir
+        8 file2.txt
+        9 file3.txt
+       10 subsubdir/file4.txt
+\f[R]
+.fi
+.PP
+Without file name encryption \f[C].bin\f[R] extensions are added to
+underlying names. This prevents the cloud provider attempting to
+interpret file content.
+.IP
+.nf
+\f[C]
+$ rclone -q ls remote:path
+       54 file0.txt.bin
+       57 subdir/file3.txt.bin
+       56 subdir/file2.txt.bin
+       58 subdir/subsubdir/file4.txt.bin
+       55 file1.txt.bin
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+### File name encryption modes
+
+Off
+
+ * doesn\[aq]t hide file names or directory structure
+ * allows for longer file names (\[ti]246 characters)
+ * can use sub paths and copy single files
+
+Standard
+
+ * file names encrypted
+ * file names can\[aq]t be as long (\[ti]143 characters)
+ * can use sub paths and copy single files
+ * directory structure visible
+ * identical file names will have identical uploaded names
+ * can use shortcuts to shorten the directory recursion
+
+Obfuscation
+
+This is a simple \[dq]rotate\[dq] of the filename, with each file having a rot
+distance based on the filename. Rclone stores the distance at the
+beginning of the filename. A file called \[dq]hello\[dq] may become \[dq]53.jgnnq\[dq].
+
+Obfuscation is not a strong encryption of filenames, but hinders
+automated scanning tools picking up on filename patterns. It is an
+intermediate between \[dq]off\[dq] and \[dq]standard\[dq] which allows for longer path
+segment names.
+
+There is a possibility with some unicode based filenames that the
+obfuscation is weak and may map lower case characters to upper case
+equivalents.
+
+Obfuscation cannot be relied upon for strong protection. 
+
+ * file names very lightly obfuscated
+ * file names can be longer than standard encryption
+ * can use sub paths and copy single files
+ * directory structure visible
+ * identical file names will have identical uploaded names
+
+Cloud storage systems have limits on file name length and
+total path length which rclone is more likely to breach using
+\[dq]Standard\[dq] file name encryption. Where file names are less than 156
+characters in length issues should not be encountered, irrespective of
+cloud storage provider.
+
+An experimental advanced option \[ga]filename_encoding\[ga] is now provided to
+address this problem to a certain degree.
+For cloud storage systems with case sensitive file names (e.g. Google Drive),
+\[ga]base64\[ga] can be used to reduce file name length.
+For cloud storage systems using UTF-16 to store file names internally
+(e.g. OneDrive, Dropbox, Box), \[ga]base32768\[ga] can be used to drastically reduce
+file name length.
+
+An alternative, future rclone file name encryption mode may tolerate
+backend provider path length limits.
+
+### Directory name encryption
+
+Crypt offers the option of encrypting dir names or leaving them intact.
+There are two options:
+
+True
+
+Encrypts the whole file path including directory names
+Example:
+\[ga]1/12/123.txt\[ga] is encrypted to
+\[ga]p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0\[ga]
+
+False
+
+Only encrypts file names, skips directory names
+Example:
+\[ga]1/12/123.txt\[ga] is encrypted to
+\[ga]1/12/qgm4avr35m5loi1th53ato71v0\[ga]
+
+
+### Modified time and hashes
+
+Crypt stores modification times using the underlying remote so support
+depends on that.
+
+Hashes are not stored for crypt. However the data integrity is
+protected by an extremely strong crypto authenticator.
+
+Use the \[ga]rclone cryptcheck\[ga] command to check the
+integrity of an encrypted remote instead of \[ga]rclone check\[ga] which can\[aq]t
+check the checksums properly.
+
+
+### Standard options
+
+Here are the Standard options specific to crypt (Encrypt/Decrypt a remote).
+
+#### --crypt-remote
+
+Remote to encrypt/decrypt.
+
+Normally should contain a \[aq]:\[aq] and a path, e.g. \[dq]myremote:path/to/dir\[dq],
+\[dq]myremote:bucket\[dq] or maybe \[dq]myremote:\[dq] (not recommended).
+
+Properties:
+
+- Config: remote
+- Env Var: RCLONE_CRYPT_REMOTE
+- Type: string
+- Required: true
+
+#### --crypt-filename-encryption
+
+How to encrypt the filenames.
+
+Properties:
+
+- Config: filename_encryption
+- Env Var: RCLONE_CRYPT_FILENAME_ENCRYPTION
+- Type: string
+- Default: \[dq]standard\[dq]
+- Examples:
+    - \[dq]standard\[dq]
+        - Encrypt the filenames.
+        - See the docs for the details.
+    - \[dq]obfuscate\[dq]
+        - Very simple filename obfuscation.
+    - \[dq]off\[dq]
+        - Don\[aq]t encrypt the file names.
+        - Adds a \[dq].bin\[dq], or \[dq]suffix\[dq] extension only.
+
+#### --crypt-directory-name-encryption
+
+Option to either encrypt directory names or leave them intact.
+
+NB If filename_encryption is \[dq]off\[dq] then this option will do nothing. 
+ +Properties: + +- Config: directory_name_encryption +- Env Var: RCLONE_CRYPT_DIRECTORY_NAME_ENCRYPTION +- Type: bool +- Default: true +- Examples: + - \[dq]true\[dq] + - Encrypt directory names. + - \[dq]false\[dq] + - Don\[aq]t encrypt directory names, leave them intact. + +#### --crypt-password + +Password or pass phrase for encryption. + +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + +Properties: + +- Config: password +- Env Var: RCLONE_CRYPT_PASSWORD +- Type: string +- Required: true + +#### --crypt-password2 + +Password or pass phrase for salt. + +Optional but recommended. +Should be different to the previous password. + +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + +Properties: + +- Config: password2 +- Env Var: RCLONE_CRYPT_PASSWORD2 +- Type: string +- Required: false + +### Advanced options + +Here are the Advanced options specific to crypt (Encrypt/Decrypt a remote). + +#### --crypt-server-side-across-configs + Deprecated: use --server-side-across-configs instead. -.PP -Allow server-side operations (e.g. -copy) to work across different crypt configs. -.PP + +Allow server-side operations (e.g. copy) to work across different crypt configs. + Normally this option is not what you want, but if you have two crypts pointing to the same backend you can use it. -.PP + This can be used, for example, to change file name encryption type -without re-uploading all the data. -Just make two crypt backends pointing to two different directories with -the single changed parameter and use rclone move to move the files -between the crypt remotes. -.PP +without re-uploading all the data. Just make two crypt backends +pointing to two different directories with the single changed +parameter and use rclone move to move the files between the crypt +remotes. + Properties: -.IP \[bu] 2 -Config: server_side_across_configs -.IP \[bu] 2 -Env Var: RCLONE_CRYPT_SERVER_SIDE_ACROSS_CONFIGS -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --crypt-show-mapping -.PP + +- Config: server_side_across_configs +- Env Var: RCLONE_CRYPT_SERVER_SIDE_ACROSS_CONFIGS +- Type: bool +- Default: false + +#### --crypt-show-mapping + For all files listed show how the names encrypt. -.PP -If this flag is set then for each file that the remote is asked to list, -it will log (at level INFO) a line stating the decrypted file name and -the encrypted file name. -.PP + +If this flag is set then for each file that the remote is asked to +list, it will log (at level INFO) a line stating the decrypted file +name and the encrypted file name. + This is so you can work out which encrypted names are which decrypted names just in case you need to do something with the encrypted file names, or for debugging purposes. -.PP + Properties: -.IP \[bu] 2 -Config: show_mapping -.IP \[bu] 2 -Env Var: RCLONE_CRYPT_SHOW_MAPPING -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --crypt-no-data-encryption -.PP + +- Config: show_mapping +- Env Var: RCLONE_CRYPT_SHOW_MAPPING +- Type: bool +- Default: false + +#### --crypt-no-data-encryption + Option to either encrypt file data or leave it unencrypted. -.PP + Properties: -.IP \[bu] 2 -Config: no_data_encryption -.IP \[bu] 2 -Env Var: RCLONE_CRYPT_NO_DATA_ENCRYPTION -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]true\[dq] -.RS 2 -.IP \[bu] 2 -Don\[aq]t encrypt file data, leave it unencrypted. 
-.RE
-.IP \[bu] 2
-\[dq]false\[dq]
-.RS 2
-.IP \[bu] 2
-Encrypt file data.
-.RE
-.RE
-.SS --crypt-pass-bad-blocks
-.PP
+
+- Config: no_data_encryption
+- Env Var: RCLONE_CRYPT_NO_DATA_ENCRYPTION
+- Type: bool
+- Default: false
+- Examples:
+    - \[dq]true\[dq]
+        - Don\[aq]t encrypt file data, leave it unencrypted.
+    - \[dq]false\[dq]
+        - Encrypt file data.
+
+#### --crypt-pass-bad-blocks
+
 If set this will pass bad blocks through as all 0.
-.PP
+
 This should not be set in normal operation, it should only be set if
 trying to recover an encrypted file with errors and it is desired to
 recover as much of the file as possible.
-.PP
+
 Properties:
-.IP \[bu] 2
-Config: pass_bad_blocks
-.IP \[bu] 2
-Env Var: RCLONE_CRYPT_PASS_BAD_BLOCKS
-.IP \[bu] 2
-Type: bool
-.IP \[bu] 2
-Default: false
-.SS --crypt-filename-encoding
-.PP
+
+- Config: pass_bad_blocks
+- Env Var: RCLONE_CRYPT_PASS_BAD_BLOCKS
+- Type: bool
+- Default: false
+
+#### --crypt-filename-encoding
+
 How to encode the encrypted filename to text string.
-.PP
-This option could help with shortening the encrypted filename.
-The suitable option would depend on the way your remote count the
-filename length and if it\[aq]s case sensitive.
-.PP
+
+This option could help with shortening the encrypted filename. The
+suitable option would depend on the way your remote counts the filename
+length and whether it is case sensitive.
+
 Properties:
-.IP \[bu] 2
-Config: filename_encoding
-.IP \[bu] 2
-Env Var: RCLONE_CRYPT_FILENAME_ENCODING
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Default: \[dq]base32\[dq]
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]base32\[dq]
-.RS 2
-.IP \[bu] 2
-Encode using base32.
-Suitable for all remote.
-.RE
-.IP \[bu] 2
-\[dq]base64\[dq]
-.RS 2
-.IP \[bu] 2
-Encode using base64.
-Suitable for case sensitive remote.
-.RE
-.IP \[bu] 2
-\[dq]base32768\[dq]
-.RS 2
-.IP \[bu] 2
-Encode using base32768.
-Suitable if your remote counts UTF-16 or
-.IP \[bu] 2
-Unicode codepoint instead of UTF-8 byte length.
-(Eg.
-Onedrive, Dropbox)
-.RE
-.RE
-.SS --crypt-suffix
-.PP
+
+- Config: filename_encoding
+- Env Var: RCLONE_CRYPT_FILENAME_ENCODING
+- Type: string
+- Default: \[dq]base32\[dq]
+- Examples:
+    - \[dq]base32\[dq]
+        - Encode using base32. Suitable for all remotes.
+    - \[dq]base64\[dq]
+        - Encode using base64. Suitable for case sensitive remotes.
+    - \[dq]base32768\[dq]
+        - Encode using base32768. Suitable if your remote counts UTF-16 or Unicode codepoints instead of UTF-8 byte length. (e.g. OneDrive, Dropbox)
+
+#### --crypt-suffix
+
 If this is set it will override the default suffix of \[dq].bin\[dq].
-.PP
-Setting suffix to \[dq]none\[dq] will result in an empty suffix.
-This may be useful when the path length is critical.
-.PP
+
+Setting suffix to \[dq]none\[dq] will result in an empty suffix. This may be useful
+when the path length is critical.
+
 Properties:
-.IP \[bu] 2
-Config: suffix
-.IP \[bu] 2
-Env Var: RCLONE_CRYPT_SUFFIX
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Default: \[dq].bin\[dq]
-.SS Metadata
-.PP
+
+- Config: suffix
+- Env Var: RCLONE_CRYPT_SUFFIX
+- Type: string
+- Default: \[dq].bin\[dq]
+
+### Metadata
+
 Any metadata supported by the underlying remote is read and written.
-.PP
-See the metadata (https://rclone.org/docs/#metadata) docs for more info.
-.SS Backend commands
-.PP
+
+See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
+
+## Backend commands
+
 Here are the commands specific to the crypt backend. 
-.PP + Run them with -.IP -.nf -\f[C] -rclone backend COMMAND remote: -\f[R] -.fi -.PP + + rclone backend COMMAND remote: + The help below will explain what arguments each command takes. -.PP -See the backend (https://rclone.org/commands/rclone_backend/) command -for more info on how to pass options and arguments. -.PP + +See the [backend](https://rclone.org/commands/rclone_backend/) command for more +info on how to pass options and arguments. + These can be run on a running backend using the rc command -backend/command (https://rclone.org/rc/#backend-command). -.SS encode -.PP +[backend/command](https://rclone.org/rc/#backend-command). + +### encode + Encode the given filename(s) -.IP -.nf -\f[C] -rclone backend encode remote: [options] [+] -\f[R] -.fi -.PP + + rclone backend encode remote: [options] [+] + This encodes the filenames given as arguments returning a list of strings of the encoded results. -.PP + Usage Example: -.IP -.nf -\f[C] -rclone backend encode crypt: file1 [file2...] -rclone rc backend/command command=encode fs=crypt: file1 [file2...] -\f[R] -.fi -.SS decode -.PP + + rclone backend encode crypt: file1 [file2...] + rclone rc backend/command command=encode fs=crypt: file1 [file2...] + + +### decode + Decode the given filename(s) -.IP -.nf -\f[C] -rclone backend decode remote: [options] [+] -\f[R] -.fi -.PP + + rclone backend decode remote: [options] [+] + This decodes the filenames given as arguments returning a list of -strings of the decoded results. -It will return an error if any of the inputs are invalid. -.PP +strings of the decoded results. It will return an error if any of the +inputs are invalid. + Usage Example: -.IP -.nf -\f[C] -rclone backend decode crypt: encryptedfile1 [encryptedfile2...] -rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile2...] -\f[R] -.fi -.SS Backing up an encrypted remote -.PP -If you wish to backup an encrypted remote, it is recommended that you -use \f[C]rclone sync\f[R] on the encrypted files, and make sure the -passwords are the same in the new encrypted remote. -.PP + + rclone backend decode crypt: encryptedfile1 [encryptedfile2...] + rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile2...] + + + + +## Backing up an encrypted remote + +If you wish to backup an encrypted remote, it is recommended that you use +\[ga]rclone sync\[ga] on the encrypted files, and make sure the passwords are +the same in the new encrypted remote. + This will have the following advantages -.IP \[bu] 2 -\f[C]rclone sync\f[R] will check the checksums while copying -.IP \[bu] 2 -you can use \f[C]rclone check\f[R] between the encrypted remotes -.IP \[bu] 2 -you don\[aq]t decrypt and encrypt unnecessarily -.PP -For example, let\[aq]s say you have your original remote at -\f[C]remote:\f[R] with the encrypted version at \f[C]eremote:\f[R] with -path \f[C]remote:crypt\f[R]. -You would then set up the new remote \f[C]remote2:\f[R] and then the -encrypted version \f[C]eremote2:\f[R] with path \f[C]remote2:crypt\f[R] -using the same passwords as \f[C]eremote:\f[R]. -.PP + + * \[ga]rclone sync\[ga] will check the checksums while copying + * you can use \[ga]rclone check\[ga] between the encrypted remotes + * you don\[aq]t decrypt and encrypt unnecessarily + +For example, let\[aq]s say you have your original remote at \[ga]remote:\[ga] with +the encrypted version at \[ga]eremote:\[ga] with path \[ga]remote:crypt\[ga]. 
You +would then set up the new remote \[ga]remote2:\[ga] and then the encrypted +version \[ga]eremote2:\[ga] with path \[ga]remote2:crypt\[ga] using the same passwords +as \[ga]eremote:\[ga]. + To sync the two remotes you would do -.IP -.nf -\f[C] -rclone sync --interactive remote:crypt remote2:crypt -\f[R] -.fi -.PP + + rclone sync --interactive remote:crypt remote2:crypt + And to check the integrity you would do -.IP -.nf -\f[C] -rclone check remote:crypt remote2:crypt -\f[R] -.fi -.SS File formats -.SS File encryption -.PP -Files are encrypted 1:1 source file to destination object. -The file has a header and is divided into chunks. -.SS Header -.IP \[bu] 2 -8 bytes magic string \f[C]RCLONE\[rs]x00\[rs]x00\f[R] -.IP \[bu] 2 -24 bytes Nonce (IV) -.PP -The initial nonce is generated from the operating systems crypto strong -random number generator. -The nonce is incremented for each chunk read making sure each nonce is -unique for each block written. -The chance of a nonce being re-used is minuscule. -If you wrote an exabyte of data (10\[S1]\[u2078] bytes) you would have a -probability of approximately 2\[tmu]10\[u207B]\[S3]\[S2] of re-using a -nonce. -.SS Chunk -.PP + + rclone check remote:crypt remote2:crypt + +## File formats + +### File encryption + +Files are encrypted 1:1 source file to destination object. The file +has a header and is divided into chunks. + +#### Header + + * 8 bytes magic string \[ga]RCLONE\[rs]x00\[rs]x00\[ga] + * 24 bytes Nonce (IV) + +The initial nonce is generated from the operating systems crypto +strong random number generator. The nonce is incremented for each +chunk read making sure each nonce is unique for each block written. +The chance of a nonce being re-used is minuscule. If you wrote an +exabyte of data (10\[S1]\[u2078] bytes) you would have a probability of +approximately 2\[tmu]10\[u207B]\[S3]\[S2] of re-using a nonce. + +#### Chunk + Each chunk will contain 64 KiB of data, except for the last one which -may have less data. -The data chunk is in standard NaCl SecretBox format. -SecretBox uses XSalsa20 and Poly1305 to encrypt and authenticate -messages. -.PP +may have less data. The data chunk is in standard NaCl SecretBox +format. SecretBox uses XSalsa20 and Poly1305 to encrypt and +authenticate messages. + Each chunk contains: -.IP \[bu] 2 -16 Bytes of Poly1305 authenticator -.IP \[bu] 2 -1 - 65536 bytes XSalsa20 encrypted data -.PP + + * 16 Bytes of Poly1305 authenticator + * 1 - 65536 bytes XSalsa20 encrypted data + 64k chunk size was chosen as the best performing chunk size (the authenticator takes too much time below this and the performance drops -off due to cache effects above this). -Note that these chunks are buffered in memory so they can\[aq]t be too -big. -.PP +off due to cache effects above this). Note that these chunks are +buffered in memory so they can\[aq]t be too big. + This uses a 32 byte (256 bit key) key derived from the user password. -.SS Examples -.PP + +#### Examples + 1 byte file will encrypt to -.IP \[bu] 2 -32 bytes header -.IP \[bu] 2 -17 bytes data chunk -.PP + + * 32 bytes header + * 17 bytes data chunk + 49 bytes total -.PP + 1 MiB (1048576 bytes) file will encrypt to -.IP \[bu] 2 -32 bytes header -.IP \[bu] 2 -16 chunks of 65568 bytes -.PP -1049120 bytes total (a 0.05% overhead). -This is the overhead for big files. -.SS Name encryption -.PP -File names are encrypted segment by segment - the path is broken up into -\f[C]/\f[R] separated strings and these are encrypted individually. 
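+
+You can observe this segment-by-segment behaviour with the \[ga]encode\[ga]
+backend command described above. In the example listing earlier,
+\[ga]subdir\[ga] always encrypts to \[ga]86vhrsv86mpbtd3a0akjuqslj8\[ga] whatever
+file name follows it. A sketch, reusing the \[ga]secret:\[ga] remote from the
+example:
+
+    rclone backend encode secret: subdir/file2.txt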
-.PP -File segments are padded using PKCS#7 to a multiple of 16 bytes before -encryption. -.PP -They are then encrypted with EME using AES with 256 bit key. -EME (ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003 + + * 32 bytes header + * 16 chunks of 65568 bytes + +1049120 bytes total (a 0.05% overhead). This is the overhead for big +files. + +### Name encryption + +File names are encrypted segment by segment - the path is broken up +into \[ga]/\[ga] separated strings and these are encrypted individually. + +File segments are padded using PKCS#7 to a multiple of 16 bytes +before encryption. + +They are then encrypted with EME using AES with 256 bit key. EME +(ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003 paper \[dq]A Parallelizable Enciphering Mode\[dq] by Halevi and Rogaway. -.PP -This makes for deterministic encryption which is what we want - the same -filename must encrypt to the same thing otherwise we can\[aq]t find it -on the cloud storage system. -.PP + +This makes for deterministic encryption which is what we want - the +same filename must encrypt to the same thing otherwise we can\[aq]t find +it on the cloud storage system. + This means that -.IP \[bu] 2 -filenames with the same name will encrypt the same -.IP \[bu] 2 -filenames which start the same won\[aq]t have a common prefix -.PP + + * filenames with the same name will encrypt the same + * filenames which start the same won\[aq]t have a common prefix + This uses a 32 byte key (256 bits) and a 16 byte (128 bits) IV both of which are derived from the user password. -.PP + After encryption they are written out using a modified version of -standard \f[C]base32\f[R] encoding as described in RFC4648. -The standard encoding is modified in two ways: -.IP \[bu] 2 -it becomes lower case (no-one likes upper case filenames!) -.IP \[bu] 2 -we strip the padding character \f[C]=\f[R] +standard \[ga]base32\[ga] encoding as described in RFC4648. The standard +encoding is modified in two ways: + + * it becomes lower case (no-one likes upper case filenames!) + * we strip the padding character \[ga]=\[ga] + +\[ga]base32\[ga] is used rather than the more efficient \[ga]base64\[ga] so rclone can be +used on case insensitive remotes (e.g. Windows, Amazon Drive). + +### Key derivation + +Rclone uses \[ga]scrypt\[ga] with parameters \[ga]N=16384, r=8, p=1\[ga] with an +optional user supplied salt (password2) to derive the 32+32+16 = 80 +bytes of key material required. If the user doesn\[aq]t supply a salt +then rclone uses an internal one. + +\[ga]scrypt\[ga] makes it impractical to mount a dictionary attack on rclone +encrypted data. For full protection against this you should always use +a salt. + +## SEE ALSO + +* [rclone cryptdecode](https://rclone.org/commands/rclone_cryptdecode/) - Show forward/reverse mapping of encrypted filenames + +# Compress + +## Warning + +This remote is currently **experimental**. Things may break and data may be lost. Anything you do with this remote is +at your own risk. Please understand the risks associated with using experimental code and don\[aq]t use this remote in +critical applications. + +The \[ga]Compress\[ga] remote adds compression to another remote. It is best used with remotes containing +many large compressible files. 
+
+## Configuration
+
+To use this remote, all you need to do is specify another remote and a compression mode to use:
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+Current remotes:
+
+Name                 Type
+====                 ====
+remote_to_press      sometype
+
+e) Edit existing remote
+$ rclone config
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> n
+name> compress
+\&...
+ 8 / Compress a remote
+   \[rs] \[dq]compress\[dq]
+\&...
+Storage> compress
+** See help for compress backend at: https://rclone.org/compress/ **
+
+Remote to compress.
+Enter a string value. Press Enter for the default (\[dq]\[dq]).
+remote> remote_to_press:subdir
+Compression mode.
+Enter a string value. Press Enter for the default (\[dq]gzip\[dq]).
+Choose a number from below, or type in your own value
+ 1 / Gzip compression balanced for speed and compression strength.
+   \[rs] \[dq]gzip\[dq]
+compression_mode> gzip
+Edit advanced config? (y/n)
+y) Yes
+n) No (default)
+y/n> n
+Remote config
+--------------------
+[compress]
+type = compress
+remote = remote_to_press:subdir
+compression_mode = gzip
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+### Compression Modes
+
+Currently only gzip compression is supported. It provides a decent balance between speed and size and is well
+supported by other applications. Compression strength can further be configured via an advanced setting where 0 is no
+compression and 9 is strongest compression.
+
+### File types
+
+If you open a remote wrapped by compress, you will see that there are many files with an extension corresponding to
+the compression algorithm you chose. 
These files are standard files that can be opened by various archive programs, +but they have some hidden metadata that allows them to be used by rclone. +While you may download and decompress these files at will, do **not** manually delete or rename files. Files without +correct metadata files will not be recognized by rclone. + +### File names + +The compressed files will be named \[ga]*.###########.gz\[ga] where \[ga]*\[ga] is the base file and the \[ga]#\[ga] part is base64 encoded +size of the uncompressed file. The file names should not be changed by anything other than the rclone compression backend. + + +### Standard options -Remote to compress. -Enter a string value. Press Enter for the default (\[dq]\[dq]). -remote> remote_to_press:subdir -Compression mode. -Enter a string value. Press Enter for the default (\[dq]gzip\[dq]). -Choose a number from below, or type in your own value - 1 / Gzip compression balanced for speed and compression strength. - \[rs] \[dq]gzip\[dq] -compression_mode> gzip -Edit advanced config? (y/n) -y) Yes -n) No (default) -y/n> n -Remote config --------------------- -[compress] -type = compress -remote = remote_to_press:subdir -compression_mode = gzip --------------------- -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y -\f[R] -.fi -.SS Compression Modes -.PP -Currently only gzip compression is supported. -It provides a decent balance between speed and size and is well -supported by other applications. -Compression strength can further be configured via an advanced setting -where 0 is no compression and 9 is strongest compression. -.SS File types -.PP -If you open a remote wrapped by compress, you will see that there are -many files with an extension corresponding to the compression algorithm -you chose. -These files are standard files that can be opened by various archive -programs, but they have some hidden metadata that allows them to be used -by rclone. -While you may download and decompress these files at will, do -\f[B]not\f[R] manually delete or rename files. -Files without correct metadata files will not be recognized by rclone. -.SS File names -.PP -The compressed files will be named \f[C]*.###########.gz\f[R] where -\f[C]*\f[R] is the base file and the \f[C]#\f[R] part is base64 encoded -size of the uncompressed file. -The file names should not be changed by anything other than the rclone -compression backend. -.SS Standard options -.PP Here are the Standard options specific to compress (Compress a remote). -.SS --compress-remote -.PP + +#### --compress-remote + Remote to compress. -.PP + Properties: -.IP \[bu] 2 -Config: remote -.IP \[bu] 2 -Env Var: RCLONE_COMPRESS_REMOTE -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.SS --compress-mode -.PP + +- Config: remote +- Env Var: RCLONE_COMPRESS_REMOTE +- Type: string +- Required: true + +#### --compress-mode + Compression mode. -.PP + Properties: -.IP \[bu] 2 -Config: mode -.IP \[bu] 2 -Env Var: RCLONE_COMPRESS_MODE -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]gzip\[dq] -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]gzip\[dq] -.RS 2 -.IP \[bu] 2 -Standard gzip compression with fastest parameters. -.RE -.RE -.SS Advanced options -.PP + +- Config: mode +- Env Var: RCLONE_COMPRESS_MODE +- Type: string +- Default: \[dq]gzip\[dq] +- Examples: + - \[dq]gzip\[dq] + - Standard gzip compression with fastest parameters. + +### Advanced options + Here are the Advanced options specific to compress (Compress a remote). 
-.SS --compress-level
-.PP
+
+#### --compress-level
+
GZIP compression level (-2 to 9).
-.PP
+
Generally -1 (default, equivalent to 5) is recommended.
-Levels 1 to 9 increase compression at the cost of speed.
-Going past 6 generally offers very little return.
-.PP
-Level -2 uses Huffman encoding only.
-Only use if you know what you are doing.
+Levels 1 to 9 increase compression at the cost of speed. Going past 6
+generally offers very little return.
+
+Level -2 uses Huffman encoding only. Only use if you know what you
+are doing.
Level 0 turns off compression.
-.PP
+
Properties:
-.IP \[bu] 2
-Config: level
-.IP \[bu] 2
-Env Var: RCLONE_COMPRESS_LEVEL
-.IP \[bu] 2
-Type: int
-.IP \[bu] 2
-Default: -1
-.SS --compress-ram-cache-limit
-.PP
+
+- Config: level
+- Env Var: RCLONE_COMPRESS_LEVEL
+- Type: int
+- Default: -1
+
+#### --compress-ram-cache-limit
+
Some remotes don\[aq]t allow the upload of files with unknown size.
In this case the compressed file will need to be cached to determine
its size.
-.PP
-Files smaller than this limit will be cached in RAM, files larger than
+
+Files smaller than this limit will be cached in RAM, files larger than
this limit will be cached on disk.
-.PP
+
Properties:
-.IP \[bu] 2
-Config: ram_cache_limit
-.IP \[bu] 2
-Env Var: RCLONE_COMPRESS_RAM_CACHE_LIMIT
-.IP \[bu] 2
-Type: SizeSuffix
-.IP \[bu] 2
-Default: 20Mi
-.SS Metadata
-.PP
+
+- Config: ram_cache_limit
+- Env Var: RCLONE_COMPRESS_RAM_CACHE_LIMIT
+- Type: SizeSuffix
+- Default: 20Mi
+
+### Metadata
+
Any metadata supported by the underlying remote is read and written.
-.PP
-See the metadata (https://rclone.org/docs/#metadata) docs for more info.
-.SH Combine
-.PP
-The \f[C]combine\f[R] backend joins remotes together into a single
-directory tree.
-.PP
+
+See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
+
+
+
+# Combine
+
+The \[ga]combine\[ga] backend joins remotes together into a single directory
+tree.
+
For example you might have a remote for images on one provider:
-.IP
-.nf
-\f[C]
-$ rclone tree s3:imagesbucket
-/
-\[u251C]\[u2500]\[u2500] image1.jpg
+\f[R]
+.fi
+.PP
+$ rclone tree s3:imagesbucket
+/
+\[u251C]\[u2500]\[u2500] image1.jpg
\[u2514]\[u2500]\[u2500] image2.jpg
-\f[R]
-.fi
-.PP
+.IP
+.nf
+\f[C]
And a remote for files on another:
-.IP
-.nf
-\f[C]
-$ rclone tree drive:important/files
-/
-\[u251C]\[u2500]\[u2500] file1.txt
+\f[R]
+.fi
+.PP
+$ rclone tree drive:important/files
+/
+\[u251C]\[u2500]\[u2500] file1.txt
\[u2514]\[u2500]\[u2500] file2.txt
-\f[R]
-.fi
-.PP
-The \f[C]combine\f[R] backend can join these together into a synthetic
+.IP
+.nf
+\f[C]
+The \[ga]combine\[ga] backend can join these together into a synthetic
directory structure like this:
-.IP
-.nf
-\f[C]
-$ rclone tree combined:
-/
-\[u251C]\[u2500]\[u2500] files
-\[br] \[u251C]\[u2500]\[u2500] file1.txt
-\[br] \[u2514]\[u2500]\[u2500] file2.txt
-\[u2514]\[u2500]\[u2500] images
- \[u251C]\[u2500]\[u2500] image1.jpg
- \[u2514]\[u2500]\[u2500] image2.jpg
\f[R]
.fi
.PP
-You\[aq]d do this by specifying an \f[C]upstreams\f[R] parameter in the
-config like this
+$ rclone tree combined:
+/
+\[u251C]\[u2500]\[u2500] files
+\[br]   \[u251C]\[u2500]\[u2500] file1.txt
+\[br]   \[u2514]\[u2500]\[u2500] file2.txt
+\[u2514]\[u2500]\[u2500] images
+    \[u251C]\[u2500]\[u2500] image1.jpg
+    \[u2514]\[u2500]\[u2500] image2.jpg
.IP
.nf
\f[C]
-upstreams = images=s3:imagesbucket files=drive:important/files
-\f[R]
-.fi
-.PP
-During the initial setup with \f[C]rclone config\f[R] you will specify
-the upstreams remotes as a space separated list.
-The upstream remotes can either be a local paths or other remotes.
-.SS Configuration
-.PP
-Here is an example of how to make a combine called \f[C]remote\f[R] for
-the example above.
-First run:
-.IP
-.nf
-\f[C]
- rclone config
-\f[R]
-.fi
-.PP
+You\[aq]d do this by specifying an \[ga]upstreams\[ga] parameter in the config
+like this
+
+    upstreams = images=s3:imagesbucket files=drive:important/files
+
+During the initial setup with \[ga]rclone config\[ga] you will specify the
+upstreams remotes as a space separated list. The upstream remotes can
+either be local paths or other remotes.
+
+## Configuration
+
+Here is an example of how to make a combine called \[ga]remote\[ga] for the
+example above. First run:
+
+    rclone config
+
This will guide you through an interactive setup process:
-.IP
-.nf
-\f[C]
+\f[R]
+.fi
+.PP
No remotes found, make a new one?
-n) New remote
-s) Set configuration password
-q) Quit config
-n/s/q> n
-name> remote
-Option Storage.
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
\&...
-XX / Combine several remotes into one
- \[rs] (combine)
-\&...
-Storage> combine
-Option upstreams.
-Upstreams for combining
-These should be in the form
- dir=remote:path dir2=remote2:path
-Where before the = is specified the root directory and after is the remote to
-put there.
+XX / Combine several remotes into one
+   \[rs] (combine)
+\&...
+Storage> combine
+Option upstreams.
+Upstreams for combining
+These should be in the form
+    dir=remote:path dir2=remote2:path
+Where before the = is specified the root directory and
+after is the remote to put there.
+Embedded spaces can be added using quotes
+    \[dq]dir=remote:path with space\[dq] \[dq]dir2=remote2:path with space\[dq]
+Enter a fs.SpaceSepList value.
upstreams> images=s3:imagesbucket files=drive:important/files
--------------------
-[remote]
-type = combine
-upstreams = images=s3:imagesbucket files=drive:important/files
--------------------
-y) Yes this is OK (default)
-e) Edit this remote
-d) Delete this remote
+--------------------
+[remote]
+type = combine
+upstreams = images=s3:imagesbucket files=drive:important/files
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
y/e/d> y
-\f[R]
-.fi
-.SS Configuring for Google Drive Shared Drives
-.PP
+.IP
+.nf
+\f[C]
+### Configuring for Google Drive Shared Drives
+
Rclone has a convenience feature for making a combine backend for all
the shared drives you have access to.
-.PP
+
Assuming your main (non shared drive) Google drive remote is called
-\f[C]drive:\f[R] you would run
-.IP
-.nf
-\f[C]
-rclone backend -o config drives drive:
-\f[R]
-.fi
-.PP
+\[ga]drive:\[ga] you would run
+
+    rclone backend -o config drives drive:
+
This would produce something like this:
-.IP
-.nf
-\f[C]
-[My Drive]
-type = alias
-remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
-[Test Drive]
-type = alias
-remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
+
+    [My Drive]
+    type = alias
+    remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
+
+    [Test Drive]
+    type = alias
+    remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
+
+    [AllDrives]
+    type = combine
+    upstreams = \[dq]My Drive=My Drive:\[dq] \[dq]Test Drive=Test Drive:\[dq]
+
+If you then add that config to your config file (find it with \[ga]rclone
+config file\[ga]) then you can access all the shared drives in one place
+with the \[ga]AllDrives:\[ga] remote.
+
+See [the Google Drive docs](https://rclone.org/drive/#drives) for full info.
+
+
+### Standard options
+
+Here are the Standard options specific to combine (Combine several remotes into one).
+
+#### --combine-upstreams
-[AllDrives]
-type = combine
-upstreams = \[dq]My Drive=My Drive:\[dq] \[dq]Test Drive=Test Drive:\[dq]
-\f[R]
-.fi
-.PP
-If you then add that config to your config file (find it with
-\f[C]rclone config file\f[R]) then you can access all the shared drives
-in one place with the \f[C]AllDrives:\f[R] remote.
-.PP
-See the Google Drive docs (https://rclone.org/drive/#drives) for full
-info.
-.SS Standard options
-.PP
-Here are the Standard options specific to combine (Combine several
-remotes into one).
-.SS --combine-upstreams
-.PP
+
Upstreams for combining
-.PP
+
These should be in the form
-.IP
-.nf
-\f[C]
-dir=remote:path dir2=remote2:path
-\f[R]
-.fi
-.PP
-Where before the = is specified the root directory and after is the
-remote to put there.
-.PP
+
+    dir=remote:path dir2=remote2:path
+
+Where before the = is specified the root directory and after is the remote to
+put there.
+
Embedded spaces can be added using quotes
-.IP
-.nf
-\f[C]
-\[dq]dir=remote:path with space\[dq] \[dq]dir2=remote2:path with space\[dq]
-\f[R]
-.fi
-.PP
+
+    \[dq]dir=remote:path with space\[dq] \[dq]dir2=remote2:path with space\[dq]
+
+
Properties:
-.IP \[bu] 2
-Config: upstreams
-.IP \[bu] 2
-Env Var: RCLONE_COMBINE_UPSTREAMS
-.IP \[bu] 2
-Type: SpaceSepList
-.IP \[bu] 2
-Default:
-.SS Metadata
-.PP
+
+- Config: upstreams
+- Env Var: RCLONE_COMBINE_UPSTREAMS
+- Type: SpaceSepList
+- Default:
+
+### Metadata
+
Any metadata supported by the underlying remote is read and written.
-.PP
-See the metadata (https://rclone.org/docs/#metadata) docs for more info.
-.SH Dropbox
-.PP
-Paths are specified as \f[C]remote:path\f[R]
-.PP
+
+See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
+
+
+
+# Dropbox
+
+Paths are specified as \[ga]remote:path\[ga]
+
Dropbox paths may be as deep as required, e.g.
-\f[C]remote:directory/subdirectory\f[R].
-.SS Configuration
-.PP
+\[ga]remote:directory/subdirectory\[ga].
+
+## Configuration
+
The initial setup for dropbox involves getting a token from Dropbox
-which you need to do in your browser.
-\f[C]rclone config\f[R] walks you through it.
-.PP
-Here is an example of how to make a remote called \f[C]remote\f[R].
-First run:
-.IP
-.nf
-\f[C]
- rclone config
+which you need to do in your browser. \[ga]rclone config\[ga] walks you
+through it.
+
+Here is an example of how to make a remote called \[ga]remote\[ga]. First run:
+
+    rclone config
+
+This will guide you through an interactive setup process:
\f[R]
.fi
-.PP
-This will guide you through an interactive setup process:
-.IP
-.nf
-\f[C]
-n) New remote
-d) Delete remote
-q) Quit config
-e/n/d/q> n
-name> remote
-Type of storage to configure.
-Choose a number from below, or type in your own value
-[snip]
-XX / Dropbox
- \[rs] \[dq]dropbox\[dq]
-[snip]
-Storage> dropbox
-Dropbox App Key - leave blank normally.
-app_key>
-Dropbox App Secret - leave blank normally.
-app_secret>
-Remote config
-Please visit:
+.PP
+n) New remote
+d) Delete remote
+q) Quit config
+e/n/d/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Dropbox
+   \[rs] \[dq]dropbox\[dq]
+[snip]
+Storage> dropbox
+Dropbox App Key - leave blank normally.
+app_key>
+Dropbox App Secret - leave blank normally.
+app_secret>
+Remote config
+Please visit:
https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code
Enter the code: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX
+--------------------
+[remote]
+app_key =
+app_secret =
+token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
--------------------
-[remote]
-app_key =
-app_secret =
-token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
---------------------
-y) Yes this is OK
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-\f[R]
-.fi
-.PP
-See the remote setup docs (https://rclone.org/remote_setup/) for how to
-set it up on a machine with no Internet browser available.
-.PP
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+.IP
+.nf
+\f[C]
+See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
+machine with no Internet browser available.
+
Note that rclone runs a webserver on your local machine to collect the
-token as returned from Dropbox.
-This only runs from the moment it opens your browser to the moment you
-get back the verification code.
-This is on \f[C]http://127.0.0.1:53682/\f[R] and it may require you to -unblock it temporarily if you are running a host firewall, or use manual -mode. -.PP +token as returned from Dropbox. This only +runs from the moment it opens your browser to the moment you get back +the verification code. This is on \[ga]http://127.0.0.1:53682/\[ga] and it +may require you to unblock it temporarily if you are running a host +firewall, or use manual mode. + You can then use it like this, -.PP + List directories in top level of your dropbox -.IP -.nf -\f[C] -rclone lsd remote: -\f[R] -.fi -.PP + + rclone lsd remote: + List all the files in your dropbox -.IP -.nf -\f[C] -rclone ls remote: -\f[R] -.fi -.PP + + rclone ls remote: + To copy a local directory to a dropbox directory called backup -.IP -.nf -\f[C] -rclone copy /home/source remote:backup -\f[R] -.fi -.SS Dropbox for business -.PP + + rclone copy /home/source remote:backup + +### Dropbox for business + Rclone supports Dropbox for business and Team Folders. -.PP -When using Dropbox for business \f[C]remote:\f[R] and -\f[C]remote:path/to/file\f[R] will refer to your personal folder. -.PP -If you wish to see Team Folders you must use a leading \f[C]/\f[R] in -the path, so \f[C]rclone lsd remote:/\f[R] will refer to the root and -show you all Team Folders and your User Folder. -.PP -You can then use team folders like this \f[C]remote:/TeamFolder\f[R] and -\f[C]remote:/TeamFolder/path/to/file\f[R]. -.PP -A leading \f[C]/\f[R] for a Dropbox personal account will do nothing, -but it will take an extra HTTP transaction so it should be avoided. -.SS Modified time and Hashes -.PP -Dropbox supports modified times, but the only way to set a modification -time is to re-upload the file. -.PP + +When using Dropbox for business \[ga]remote:\[ga] and \[ga]remote:path/to/file\[ga] +will refer to your personal folder. + +If you wish to see Team Folders you must use a leading \[ga]/\[ga] in the +path, so \[ga]rclone lsd remote:/\[ga] will refer to the root and show you all +Team Folders and your User Folder. + +You can then use team folders like this \[ga]remote:/TeamFolder\[ga] and +\[ga]remote:/TeamFolder/path/to/file\[ga]. + +A leading \[ga]/\[ga] for a Dropbox personal account will do nothing, but it +will take an extra HTTP transaction so it should be avoided. + +### Modified time and Hashes + +Dropbox supports modified times, but the only way to set a +modification time is to re-upload the file. + This means that if you uploaded your data with an older version of -rclone which didn\[aq]t support the v2 API and modified times, rclone -will decide to upload all your old data to fix the modification times. -If you don\[aq]t want this to happen use \f[C]--size-only\f[R] or -\f[C]--checksum\f[R] flag to stop it. -.PP -Dropbox supports its own hash -type (https://www.dropbox.com/developers/reference/content-hash) which +rclone which didn\[aq]t support the v2 API and modified times, rclone will +decide to upload all your old data to fix the modification times. If +you don\[aq]t want this to happen use \[ga]--size-only\[ga] or \[ga]--checksum\[ga] flag +to stop it. + +Dropbox supports [its own hash +type](https://www.dropbox.com/developers/reference/content-hash) which is checked for all transfers. -.SS Restricted filename characters -.PP -.TS -tab(@); -l c c. 
-T{ -Character -T}@T{ -Value -T}@T{ -Replacement -T} -_ -T{ -NUL -T}@T{ -0x00 -T}@T{ -\[u2400] -T} -T{ -/ -T}@T{ -0x2F -T}@T{ -\[uFF0F] -T} -T{ -DEL -T}@T{ -0x7F -T}@T{ -\[u2421] -T} -T{ -\[rs] -T}@T{ -0x5C -T}@T{ -\[uFF3C] -T} -.TE -.PP + +### Restricted filename characters + +| Character | Value | Replacement | +| --------- |:-----:|:-----------:| +| NUL | 0x00 | \[u2400] | +| / | 0x2F | \[uFF0F] | +| DEL | 0x7F | \[u2421] | +| \[rs] | 0x5C | \[uFF3C] | + File names can also not end with the following characters. These only get replaced if they are the last character in the name: -.PP -.TS -tab(@); -l c c. -T{ -Character -T}@T{ -Value -T}@T{ -Replacement -T} -_ -T{ -SP -T}@T{ -0x20 -T}@T{ -\[u2420] -T} -.TE -.PP -Invalid UTF-8 bytes will also be -replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t -be used in JSON strings. -.SS Batch mode uploads -.PP + +| Character | Value | Replacement | +| --------- |:-----:|:-----------:| +| SP | 0x20 | \[u2420] | + +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), +as they can\[aq]t be used in JSON strings. + +### Batch mode uploads {#batch-mode} + Using batch mode uploads is very important for performance when using -the Dropbox API. -See the dropbox performance -guide (https://developers.dropbox.com/dbx-performance-guide) for more -info. -.PP +the Dropbox API. See [the dropbox performance guide](https://developers.dropbox.com/dbx-performance-guide) +for more info. + There are 3 modes rclone can use for uploads. -.SS --dropbox-batch-mode off -.PP -In this mode rclone will not use upload batching. -This was the default before rclone v1.55. -It has the disadvantage that it is very likely to encounter -\f[C]too_many_requests\f[R] errors like this -.IP -.nf -\f[C] -NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 15 seconds. -\f[R] -.fi -.PP + +#### --dropbox-batch-mode off + +In this mode rclone will not use upload batching. This was the default +before rclone v1.55. It has the disadvantage that it is very likely to +encounter \[ga]too_many_requests\[ga] errors like this + + NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 15 seconds. + When rclone receives these it has to wait for 15s or sometimes 300s before continuing which really slows down transfers. -.PP -This will happen especially if \f[C]--transfers\f[R] is large, so this -mode isn\[aq]t recommended except for compatibility or investigating -problems. -.SS --dropbox-batch-mode sync -.PP + +This will happen especially if \[ga]--transfers\[ga] is large, so this mode +isn\[aq]t recommended except for compatibility or investigating problems. + +#### --dropbox-batch-mode sync + In this mode rclone will batch up uploads to the size specified by -\f[C]--dropbox-batch-size\f[R] and commit them together. -.PP -Using this mode means you can use a much higher \f[C]--transfers\f[R] -parameter (32 or 64 works fine) without receiving -\f[C]too_many_requests\f[R] errors. -.PP +\[ga]--dropbox-batch-size\[ga] and commit them together. + +Using this mode means you can use a much higher \[ga]--transfers\[ga] +parameter (32 or 64 works fine) without receiving \[ga]too_many_requests\[ga] +errors. + This mode ensures full data integrity. -.PP + Note that there may be a pause when quitting rclone while rclone finishes up the last batch using this mode. 
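+
+As an illustration only (the remote name \[ga]dropbox:\[ga] and the source
+path are hypothetical), a transfer of many small files using this default
+\[ga]sync\[ga] batch mode might simply raise the transfer count:
+
+    rclone copy /path/to/small/files dropbox:backup --transfers 32
+
+The batch size then defaults to the same value as \[ga]--transfers\[ga], so
+no other batching flags are needed.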
-.SS --dropbox-batch-mode async -.PP + +#### --dropbox-batch-mode async + In this mode rclone will batch up uploads to the size specified by -\f[C]--dropbox-batch-size\f[R] and commit them together. -.PP +\[ga]--dropbox-batch-size\[ga] and commit them together. + However it will not wait for the status of the batch to be returned to -the caller. -This means rclone can use a much bigger batch size (much bigger than -\f[C]--transfers\f[R]), at the cost of not being able to check the +the caller. This means rclone can use a much bigger batch size (much +bigger than \[ga]--transfers\[ga]), at the cost of not being able to check the status of the upload. -.PP -This provides the maximum possible upload speed especially with lots of -small files, however rclone can\[aq]t check the file got uploaded + +This provides the maximum possible upload speed especially with lots +of small files, however rclone can\[aq]t check the file got uploaded properly using this mode. -.PP + If you are using this mode then using \[dq]rclone check\[dq] after the -transfer completes is recommended. -Or you could do an initial transfer with -\f[C]--dropbox-batch-mode async\f[R] then do a final transfer with -\f[C]--dropbox-batch-mode sync\f[R] (the default). -.PP +transfer completes is recommended. Or you could do an initial transfer +with \[ga]--dropbox-batch-mode async\[ga] then do a final transfer with +\[ga]--dropbox-batch-mode sync\[ga] (the default). + Note that there may be a pause when quitting rclone while rclone finishes up the last batch using this mode. -.SS Standard options -.PP + + + +### Standard options + Here are the Standard options specific to dropbox (Dropbox). -.SS --dropbox-client-id -.PP + +#### --dropbox-client-id + OAuth Client Id. -.PP + Leave blank normally. -.PP + Properties: -.IP \[bu] 2 -Config: client_id -.IP \[bu] 2 -Env Var: RCLONE_DROPBOX_CLIENT_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --dropbox-client-secret -.PP + +- Config: client_id +- Env Var: RCLONE_DROPBOX_CLIENT_ID +- Type: string +- Required: false + +#### --dropbox-client-secret + OAuth Client Secret. -.PP + Leave blank normally. -.PP + Properties: -.IP \[bu] 2 -Config: client_secret -.IP \[bu] 2 -Env Var: RCLONE_DROPBOX_CLIENT_SECRET -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS Advanced options -.PP + +- Config: client_secret +- Env Var: RCLONE_DROPBOX_CLIENT_SECRET +- Type: string +- Required: false + +### Advanced options + Here are the Advanced options specific to dropbox (Dropbox). -.SS --dropbox-token -.PP + +#### --dropbox-token + OAuth Access Token as a JSON blob. -.PP + Properties: -.IP \[bu] 2 -Config: token -.IP \[bu] 2 -Env Var: RCLONE_DROPBOX_TOKEN -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --dropbox-auth-url -.PP + +- Config: token +- Env Var: RCLONE_DROPBOX_TOKEN +- Type: string +- Required: false + +#### --dropbox-auth-url + Auth server URL. -.PP + Leave blank to use the provider defaults. -.PP + Properties: -.IP \[bu] 2 -Config: auth_url -.IP \[bu] 2 -Env Var: RCLONE_DROPBOX_AUTH_URL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --dropbox-token-url -.PP + +- Config: auth_url +- Env Var: RCLONE_DROPBOX_AUTH_URL +- Type: string +- Required: false + +#### --dropbox-token-url + Token server url. -.PP + Leave blank to use the provider defaults. 
-.PP
+
Properties:
-.IP \[bu] 2
-Config: token_url
-.IP \[bu] 2
-Env Var: RCLONE_DROPBOX_TOKEN_URL
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --dropbox-chunk-size
-.PP
+
+- Config: token_url
+- Env Var: RCLONE_DROPBOX_TOKEN_URL
+- Type: string
+- Required: false
+
+#### --dropbox-chunk-size
+
Upload chunk size (< 150Mi).
-.PP
+
Any files larger than this will be uploaded in chunks of this size.
-.PP
+
Note that chunks are buffered in memory (one at a time) so rclone can
-deal with retries.
-Setting this larger will increase the speed slightly (at most 10% for
-128 MiB in tests) at the cost of using more memory.
-It can be set smaller if you are tight on memory.
-.PP
+deal with retries. Setting this larger will increase the speed
+slightly (at most 10% for 128 MiB in tests) at the cost of using more
+memory. It can be set smaller if you are tight on memory.
+
Properties:
-.IP \[bu] 2
-Config: chunk_size
-.IP \[bu] 2
-Env Var: RCLONE_DROPBOX_CHUNK_SIZE
-.IP \[bu] 2
-Type: SizeSuffix
-.IP \[bu] 2
-Default: 48Mi
-.SS --dropbox-impersonate
-.PP
+
+- Config: chunk_size
+- Env Var: RCLONE_DROPBOX_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 48Mi
+
+#### --dropbox-impersonate
+
Impersonate this user when using a business account.
-.PP
-Note that if you want to use impersonate, you should make sure this flag
-is set when running \[dq]rclone config\[dq] as this will cause rclone to
-request the \[dq]members.read\[dq] scope which it won\[aq]t normally.
-This is needed to lookup a members email address into the internal ID
-that dropbox uses in the API.
-.PP
+
+Note that if you want to use impersonate, you should make sure this
+flag is set when running \[dq]rclone config\[dq] as this will cause rclone to
+request the \[dq]members.read\[dq] scope which it won\[aq]t normally request. This is
+needed to look up a member\[aq]s email address to find the internal ID that
+dropbox uses in the API.
+
Using the \[dq]members.read\[dq] scope will require a Dropbox Team Admin
to approve during the OAuth flow.
-.PP
+
You will have to use your own App (setting your own client_id and
-client_secret) to use this option as currently rclone\[aq]s default set
-of permissions doesn\[aq]t include \[dq]members.read\[dq].
-This can be added once v1.55 or later is in use everywhere.
-.PP
+client_secret) to use this option as currently rclone\[aq]s default set of
+permissions doesn\[aq]t include \[dq]members.read\[dq]. This can be added once
+v1.55 or later is in use everywhere.
+
+
Properties:
-.IP \[bu] 2
-Config: impersonate
-.IP \[bu] 2
-Env Var: RCLONE_DROPBOX_IMPERSONATE
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --dropbox-shared-files
-.PP
+
+- Config: impersonate
+- Env Var: RCLONE_DROPBOX_IMPERSONATE
+- Type: string
+- Required: false
+
+#### --dropbox-shared-files
+
Instructs rclone to work on individual shared files.
-.PP
-In this mode rclone\[aq]s features are extremely limited - only list
-(ls, lsl, etc.) operations and read operations (e.g.
-downloading) are supported in this mode.
+
+In this mode rclone\[aq]s features are extremely limited - only list (ls, lsl, etc.)
+operations and read operations (e.g. downloading) are supported in this mode.
All other operations will be disabled.
-.PP
+
Properties:
-.IP \[bu] 2
-Config: shared_files
-.IP \[bu] 2
-Env Var: RCLONE_DROPBOX_SHARED_FILES
-.IP \[bu] 2
-Type: bool
-.IP \[bu] 2
-Default: false
-.SS --dropbox-shared-folders
-.PP
+
+- Config: shared_files
+- Env Var: RCLONE_DROPBOX_SHARED_FILES
+- Type: bool
+- Default: false
+
+#### --dropbox-shared-folders
+
Instructs rclone to work on shared folders.
-.PP
-When this flag is used with no path only the List operation is supported
-and all available shared folders will be listed.
-If you specify a path the first part will be interpreted as the name of
+
+When this flag is used with no path only the List operation is supported and
+all available shared folders will be listed. If you specify a path the first part
+will be interpreted as the name of the shared folder. Rclone will then try to mount this
+shared folder to the root namespace. On success rclone proceeds normally.
+The shared folder is now pretty much a normal folder and all normal operations
+are supported.
+
+Note that we don\[aq]t unmount the shared folder afterwards so the
+--dropbox-shared-folders flag can be omitted after the first use of a particular
shared folder.
-Rclone will then try to mount this shared to the root namespace.
-On success shared folder rclone proceeds normally.
-The shared folder is now pretty much a normal folder and all normal
-operations are supported.
-.PP
-Note that we don\[aq]t unmount the shared folder afterwards so the
---dropbox-shared-folders can be omitted after the first use of a
-particular shared folder.
-.PP
+
Properties:
-.IP \[bu] 2
-Config: shared_folders
-.IP \[bu] 2
-Env Var: RCLONE_DROPBOX_SHARED_FOLDERS
-.IP \[bu] 2
-Type: bool
-.IP \[bu] 2
-Default: false
-.SS --dropbox-batch-mode
-.PP
+
+- Config: shared_folders
+- Env Var: RCLONE_DROPBOX_SHARED_FOLDERS
+- Type: bool
+- Default: false
+
+#### --dropbox-batch-mode
+
Upload file batching sync|async|off.
-.PP
+
This sets the batch mode used by rclone.
-.PP
-For full info see the main docs (https://rclone.org/dropbox/#batch-mode)
-.PP
+
+For full info see [the main docs](https://rclone.org/dropbox/#batch-mode)
+
This has 3 possible values
-.IP \[bu] 2
-off - no batching
-.IP \[bu] 2
-sync - batch uploads and check completion (default)
-.IP \[bu] 2
-async - batch upload and don\[aq]t check completion
-.PP
-Rclone will close any outstanding batches when it exits which may make a
-delay on quit.
-.PP
+
+- off - no batching
+- sync - batch uploads and check completion (default)
+- async - batch upload and don\[aq]t check completion
+
+Rclone will close any outstanding batches when it exits which may make
+a delay on quit.
+
+
Properties:
-.IP \[bu] 2
-Config: batch_mode
-.IP \[bu] 2
-Env Var: RCLONE_DROPBOX_BATCH_MODE
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Default: \[dq]sync\[dq]
-.SS --dropbox-batch-size
-.PP
+
+- Config: batch_mode
+- Env Var: RCLONE_DROPBOX_BATCH_MODE
+- Type: string
+- Default: \[dq]sync\[dq]
+
+#### --dropbox-batch-size
+
Max number of files in upload batch.
-.PP
-This sets the batch size of files to upload.
-It has to be less than 1000.
-.PP
+
+This sets the batch size of files to upload. It has to be less than 1000.
+
By default this is 0 which means rclone will calculate the batch size
depending on the setting of batch_mode.
-.IP \[bu] 2
-batch_mode: async - default batch_size is 100
-.IP \[bu] 2
-batch_mode: sync - default batch_size is the same as --transfers
-.IP \[bu] 2
-batch_mode: off - not in use
-.PP
-Rclone will close any outstanding batches when it exits which may make a
-delay on quit.
-.PP -Setting this is a great idea if you are uploading lots of small files as -it will make them a lot quicker. -You can use --transfers 32 to maximise throughput. -.PP + +- batch_mode: async - default batch_size is 100 +- batch_mode: sync - default batch_size is the same as --transfers +- batch_mode: off - not in use + +Rclone will close any outstanding batches when it exits which may make +a delay on quit. + +Setting this is a great idea if you are uploading lots of small files +as it will make them a lot quicker. You can use --transfers 32 to +maximise throughput. + + Properties: -.IP \[bu] 2 -Config: batch_size -.IP \[bu] 2 -Env Var: RCLONE_DROPBOX_BATCH_SIZE -.IP \[bu] 2 -Type: int -.IP \[bu] 2 -Default: 0 -.SS --dropbox-batch-timeout -.PP + +- Config: batch_size +- Env Var: RCLONE_DROPBOX_BATCH_SIZE +- Type: int +- Default: 0 + +#### --dropbox-batch-timeout + Max time to allow an idle upload batch before uploading. -.PP + If an upload batch is idle for more than this long then it will be uploaded. -.PP + The default for this is 0 which means rclone will choose a sensible default based on the batch_mode in use. -.IP \[bu] 2 -batch_mode: async - default batch_timeout is 10s -.IP \[bu] 2 -batch_mode: sync - default batch_timeout is 500ms -.IP \[bu] 2 -batch_mode: off - not in use -.PP + +- batch_mode: async - default batch_timeout is 10s +- batch_mode: sync - default batch_timeout is 500ms +- batch_mode: off - not in use + + Properties: -.IP \[bu] 2 -Config: batch_timeout -.IP \[bu] 2 -Env Var: RCLONE_DROPBOX_BATCH_TIMEOUT -.IP \[bu] 2 -Type: Duration -.IP \[bu] 2 -Default: 0s -.SS --dropbox-batch-commit-timeout -.PP + +- Config: batch_timeout +- Env Var: RCLONE_DROPBOX_BATCH_TIMEOUT +- Type: Duration +- Default: 0s + +#### --dropbox-batch-commit-timeout + Max time to wait for a batch to finish committing -.PP + Properties: -.IP \[bu] 2 -Config: batch_commit_timeout -.IP \[bu] 2 -Env Var: RCLONE_DROPBOX_BATCH_COMMIT_TIMEOUT -.IP \[bu] 2 -Type: Duration -.IP \[bu] 2 -Default: 10m0s -.SS --dropbox-pacer-min-sleep -.PP + +- Config: batch_commit_timeout +- Env Var: RCLONE_DROPBOX_BATCH_COMMIT_TIMEOUT +- Type: Duration +- Default: 10m0s + +#### --dropbox-pacer-min-sleep + Minimum time to sleep between API calls. -.PP + Properties: -.IP \[bu] 2 -Config: pacer_min_sleep -.IP \[bu] 2 -Env Var: RCLONE_DROPBOX_PACER_MIN_SLEEP -.IP \[bu] 2 -Type: Duration -.IP \[bu] 2 -Default: 10ms -.SS --dropbox-encoding -.PP + +- Config: pacer_min_sleep +- Env Var: RCLONE_DROPBOX_PACER_MIN_SLEEP +- Type: Duration +- Default: 10ms + +#### --dropbox-encoding + The encoding for the backend. -.PP -See the encoding section in the -overview (https://rclone.org/overview/#encoding) for more info. -.PP + +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + Properties: -.IP \[bu] 2 -Config: encoding -.IP \[bu] 2 -Env Var: RCLONE_DROPBOX_ENCODING -.IP \[bu] 2 -Type: MultiEncoder -.IP \[bu] 2 -Default: Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot -.SS Limitations -.PP -Note that Dropbox is case insensitive so you can\[aq]t have a file -called \[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. -.PP -There are some file names such as \f[C]thumbs.db\f[R] which Dropbox -can\[aq]t store. -There is a full list of them in the \[dq]Ignored Files\[dq] section of -this document (https://www.dropbox.com/en/help/145). -Rclone will issue an error message -\f[C]File name disallowed - not uploading\f[R] if it attempts to upload -one of those file names, but the sync won\[aq]t fail. 
-.PP + +- Config: encoding +- Env Var: RCLONE_DROPBOX_ENCODING +- Type: MultiEncoder +- Default: Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot + + + +## Limitations + +Note that Dropbox is case insensitive so you can\[aq]t have a file called +\[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. + +There are some file names such as \[ga]thumbs.db\[ga] which Dropbox can\[aq]t +store. There is a full list of them in the [\[dq]Ignored Files\[dq] section +of this document](https://www.dropbox.com/en/help/145). Rclone will +issue an error message \[ga]File name disallowed - not uploading\[ga] if it +attempts to upload one of those file names, but the sync won\[aq]t fail. + Some errors may occur if you try to sync copyright-protected files -because Dropbox has its own copyright -detector (https://techcrunch.com/2014/03/30/how-dropbox-knows-when-youre-sharing-copyrighted-stuff-without-actually-looking-at-your-stuff/) -that prevents this sort of file being downloaded. -This will return the error -\f[C]ERROR : /path/to/your/file: Failed to copy: failed to open source object: path/restricted_content/.\f[R] -.PP -If you have more than 10,000 files in a directory then -\f[C]rclone purge dropbox:dir\f[R] will return the error -\f[C]Failed to purge: There are too many files involved in this operation\f[R]. -As a work-around do an \f[C]rclone delete dropbox:dir\f[R] followed by -an \f[C]rclone rmdir dropbox:dir\f[R]. -.PP -When using \f[C]rclone link\f[R] you\[aq]ll need to set -\f[C]--expire\f[R] if using a non-personal account otherwise the -visibility may not be correct. -(Note that \f[C]--expire\f[R] isn\[aq]t supported on personal accounts). -See the forum -discussion (https://forum.rclone.org/t/rclone-link-dropbox-permissions/23211) -and the dropbox SDK -issue (https://github.com/dropbox/dropbox-sdk-go-unofficial/issues/75). -.SS Get your own Dropbox App ID -.PP -When you use rclone with Dropbox in its default configuration you are -using rclone\[aq]s App ID. -This is shared between all the rclone users. -.PP +because Dropbox has its own [copyright detector](https://techcrunch.com/2014/03/30/how-dropbox-knows-when-youre-sharing-copyrighted-stuff-without-actually-looking-at-your-stuff/) that +prevents this sort of file being downloaded. This will return the error \[ga]ERROR : +/path/to/your/file: Failed to copy: failed to open source object: +path/restricted_content/.\[ga] + +If you have more than 10,000 files in a directory then \[ga]rclone purge +dropbox:dir\[ga] will return the error \[ga]Failed to purge: There are too +many files involved in this operation\[ga]. As a work-around do an +\[ga]rclone delete dropbox:dir\[ga] followed by an \[ga]rclone rmdir dropbox:dir\[ga]. + +When using \[ga]rclone link\[ga] you\[aq]ll need to set \[ga]--expire\[ga] if using a +non-personal account otherwise the visibility may not be correct. +(Note that \[ga]--expire\[ga] isn\[aq]t supported on personal accounts). See the +[forum discussion](https://forum.rclone.org/t/rclone-link-dropbox-permissions/23211) and the +[dropbox SDK issue](https://github.com/dropbox/dropbox-sdk-go-unofficial/issues/75). + +## Get your own Dropbox App ID + +When you use rclone with Dropbox in its default configuration you are using rclone\[aq]s App ID. This is shared between all the rclone users. + Here is how to create your own Dropbox App ID for rclone: -.IP "1." 
3
-Log into the Dropbox App
-console (https://www.dropbox.com/developers/apps/create) with your
-Dropbox Account (It need not to be the same account as the Dropbox you
-want to access)
-.IP "2." 3
-Choose an API => Usually this should be \f[C]Dropbox API\f[R]
-.IP "3." 3
-Choose the type of access you want to use => \f[C]Full Dropbox\f[R] or
-\f[C]App Folder\f[R]
-.IP "4." 3
-Name your App.
-The app name is global, so you can\[aq]t use \f[C]rclone\f[R] for
-example
-.IP "5." 3
-Click the button \f[C]Create App\f[R]
-.IP "6." 3
-Switch to the \f[C]Permissions\f[R] tab.
-Enable at least the following permissions: \f[C]account_info.read\f[R],
-\f[C]files.metadata.write\f[R], \f[C]files.content.write\f[R],
-\f[C]files.content.read\f[R], \f[C]sharing.write\f[R].
-The \f[C]files.metadata.read\f[R] and \f[C]sharing.read\f[R] checkboxes
-will be marked too.
-Click \f[C]Submit\f[R]
-.IP "7." 3
-Switch to the \f[C]Settings\f[R] tab.
-Fill \f[C]OAuth2 - Redirect URIs\f[R] as
-\f[C]http://localhost:53682/\f[R]
-.IP "8." 3
-Find the \f[C]App key\f[R] and \f[C]App secret\f[R] values on the
-\f[C]Settings\f[R] tab.
-Use these values in rclone config to add a new remote or edit an
-existing remote.
-The \f[C]App key\f[R] setting corresponds to \f[C]client_id\f[R] in
-rclone config, the \f[C]App secret\f[R] corresponds to
-\f[C]client_secret\f[R]
-.SH Enterprise File Fabric
-.PP
-This backend supports Storage Made Easy\[aq]s Enterprise File
-Fabric\[tm] (https://storagemadeeasy.com/about/) which provides a
-software solution to integrate and unify File and Object Storage
-accessible through a global file system.
+
+1. Log into the [Dropbox App console](https://www.dropbox.com/developers/apps/create) with your Dropbox Account (it need not
+be the same account as the Dropbox you want to access)
+
+2. Choose an API => Usually this should be \[ga]Dropbox API\[ga]
+
+3. Choose the type of access you want to use => \[ga]Full Dropbox\[ga] or \[ga]App Folder\[ga]. If you want to use Team Folders, \[ga]Full Dropbox\[ga] is required ([see here](https://www.dropboxforum.com/t5/Dropbox-API-Support-Feedback/How-to-create-team-folder-inside-my-app-s-folder/m-p/601005/highlight/true#M27911)).
+
+4. Name your App. The app name is global, so you can\[aq]t use \[ga]rclone\[ga] for example
+
+5. Click the button \[ga]Create App\[ga]
+
+6. Switch to the \[ga]Permissions\[ga] tab. Enable at least the following permissions: \[ga]account_info.read\[ga], \[ga]files.metadata.write\[ga], \[ga]files.content.write\[ga], \[ga]files.content.read\[ga], \[ga]sharing.write\[ga]. The \[ga]files.metadata.read\[ga] and \[ga]sharing.read\[ga] checkboxes will be marked too. Click \[ga]Submit\[ga]
+
+7. Switch to the \[ga]Settings\[ga] tab. Fill \[ga]OAuth2 - Redirect URIs\[ga] as \[ga]http://localhost:53682/\[ga] and click on \[ga]Add\[ga]
+
+8. Find the \[ga]App key\[ga] and \[ga]App secret\[ga] values on the \[ga]Settings\[ga] tab. Use these values in rclone config to add a new remote or edit an existing remote. The \[ga]App key\[ga] setting corresponds to \[ga]client_id\[ga] in rclone config, the \[ga]App secret\[ga] corresponds to \[ga]client_secret\[ga]
+
+# Enterprise File Fabric
+
+This backend supports [Storage Made Easy\[aq]s Enterprise File
+Fabric\[tm]](https://storagemadeeasy.com/about/) which provides a software
+solution to integrate and unify File and Object Storage accessible
+through a global file system.
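+
+As a hedged aside before the full walkthrough (the URL and the
+\[ga]permanent_token\[ga] value \[ga]XXXX\[ga] below are placeholders, not
+working credentials), the backend can also be exercised without any saved
+configuration by using a
+[connection string](https://rclone.org/docs/#connection-strings) remote:
+
+    rclone lsd \[dq]:filefabric,url=\[aq]https://yourfabric.smestorage.com\[aq],permanent_token=XXXX:\[dq]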
+ +## Configuration + The initial setup for the Enterprise File Fabric backend involves -getting a token from the Enterprise File Fabric which you need to do in -your browser. -\f[C]rclone config\f[R] walks you through it. -.PP -Here is an example of how to make a remote called \f[C]remote\f[R]. -First run: -.IP -.nf -\f[C] - rclone config -\f[R] -.fi -.PP +getting a token from the Enterprise File Fabric which you need to +do in your browser. \[ga]rclone config\[ga] walks you through it. + +Here is an example of how to make a remote called \[ga]remote\[ga]. First run: + + rclone config + This will guide you through an interactive setup process: -.IP -.nf -\f[C] +\f[R] +.fi +.PP No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -name> remote -Type of storage to configure. -Enter a string value. Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value -[snip] -XX / Enterprise File Fabric - \[rs] \[dq]filefabric\[dq] -[snip] -Storage> filefabric +n) New remote s) Set configuration password q) Quit config n/s/q> n +name> remote Type of storage to configure. +Enter a string value. +Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value [snip] XX / +Enterprise File Fabric \ \[dq]filefabric\[dq] [snip] Storage> filefabric ** See help for filefabric backend at: https://rclone.org/filefabric/ ** - -URL of the Enterprise File Fabric to connect to -Enter a string value. Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value - 1 / Storage Made Easy US - \[rs] \[dq]https://storagemadeeasy.com\[dq] - 2 / Storage Made Easy EU - \[rs] \[dq]https://eu.storagemadeeasy.com\[dq] - 3 / Connect to your Enterprise File Fabric - \[rs] \[dq]https://yourfabric.smestorage.com\[dq] -url> https://yourfabric.smestorage.com/ -ID of the root folder -Leave blank normally. - -Fill in to make rclone start with directory of a given ID. - -Enter a string value. Press Enter for the default (\[dq]\[dq]). -root_folder_id> -Permanent Authentication Token - -A Permanent Authentication Token can be created in the Enterprise File -Fabric, on the users Dashboard under Security, there is an entry -you\[aq]ll see called \[dq]My Authentication Tokens\[dq]. Click the Manage button -to create one. - -These tokens are normally valid for several years. - -For more info see: https://docs.storagemadeeasy.com/organisationcloud/api-tokens - -Enter a string value. Press Enter for the default (\[dq]\[dq]). -permanent_token> xxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxx -Edit advanced config? (y/n) -y) Yes -n) No (default) -y/n> n -Remote config --------------------- -[remote] -type = filefabric -url = https://yourfabric.smestorage.com/ -permanent_token = xxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxx --------------------- -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y -\f[R] -.fi .PP -Once configured you can then use \f[C]rclone\f[R] like this, -.PP -List directories in top level of your Enterprise File Fabric -.IP -.nf -\f[C] -rclone lsd remote: -\f[R] -.fi -.PP -List all the files in your Enterprise File Fabric -.IP -.nf -\f[C] -rclone ls remote: -\f[R] -.fi -.PP -To copy a local directory to an Enterprise File Fabric directory called -backup -.IP -.nf -\f[C] -rclone copy /home/source remote:backup -\f[R] -.fi -.SS Modified time and hashes -.PP -The Enterprise File Fabric allows modification times to be set on files -accurate to 1 second. 
-These will be used to detect whether objects need syncing or not. -.PP -The Enterprise File Fabric does not support any data hashes at this -time. -.SS Restricted filename characters -.PP -The default restricted characters -set (https://rclone.org/overview/#restricted-characters) will be -replaced. -.PP -Invalid UTF-8 bytes will also be -replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t -be used in JSON strings. -.SS Empty files -.PP -Empty files aren\[aq]t supported by the Enterprise File Fabric. -Rclone will therefore upload an empty file as a single space with a mime -type of \f[C]application/vnd.rclone.empty.file\f[R] and files with that -mime type are treated as empty. -.SS Root folder ID -.PP -You can set the \f[C]root_folder_id\f[R] for rclone. -This is the directory (identified by its \f[C]Folder ID\f[R]) that -rclone considers to be the root of your Enterprise File Fabric. -.PP -Normally you will leave this blank and rclone will determine the correct -root to use itself. -.PP -However you can set this to restrict rclone to a specific folder -hierarchy. -.PP -In order to do this you will have to find the \f[C]Folder ID\f[R] of the -directory you wish rclone to display. -These aren\[aq]t displayed in the web interface, but you can use -\f[C]rclone lsf\f[R] to find them, for example -.IP -.nf -\f[C] -$ rclone lsf --dirs-only -Fip --csv filefabric: -120673758,Burnt PDFs/ -120673759,My Quick Uploads/ -120673755,My Syncs/ -120673756,My backups/ -120673757,My contacts/ -120673761,S3 Storage/ -\f[R] -.fi -.PP -The ID for \[dq]S3 Storage\[dq] would be \f[C]120673761\f[R]. -.SS Standard options -.PP -Here are the Standard options specific to filefabric (Enterprise File -Fabric). -.SS --filefabric-url -.PP -URL of the Enterprise File Fabric to connect to. -.PP -Properties: -.IP \[bu] 2 -Config: url -.IP \[bu] 2 -Env Var: RCLONE_FILEFABRIC_URL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]https://storagemadeeasy.com\[dq] -.RS 2 -.IP \[bu] 2 -Storage Made Easy US -.RE -.IP \[bu] 2 -\[dq]https://eu.storagemadeeasy.com\[dq] -.RS 2 -.IP \[bu] 2 -Storage Made Easy EU -.RE -.IP \[bu] 2 -\[dq]https://yourfabric.smestorage.com\[dq] -.RS 2 -.IP \[bu] 2 -Connect to your Enterprise File Fabric -.RE -.RE -.SS --filefabric-root-folder-id -.PP -ID of the root folder. -.PP -Leave blank normally. +URL of the Enterprise File Fabric to connect to Enter a string value. +Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value 1 / Storage Made +Easy US \ \[dq]https://storagemadeeasy.com\[dq] 2 / Storage Made Easy EU +\ \[dq]https://eu.storagemadeeasy.com\[dq] 3 / Connect to your +Enterprise File Fabric \ \[dq]https://yourfabric.smestorage.com\[dq] +url> https://yourfabric.smestorage.com/ ID of the root folder Leave +blank normally. .PP Fill in to make rclone start with directory of a given ID. .PP -Properties: -.IP \[bu] 2 -Config: root_folder_id -.IP \[bu] 2 -Env Var: RCLONE_FILEFABRIC_ROOT_FOLDER_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --filefabric-permanent-token -.PP -Permanent Authentication Token. +Enter a string value. +Press Enter for the default (\[dq]\[dq]). +root_folder_id> Permanent Authentication Token .PP A Permanent Authentication Token can be created in the Enterprise File Fabric, on the users Dashboard under Security, there is an entry @@ -38593,4037 +40584,3011 @@ These tokens are normally valid for several years. 
For more info see: https://docs.storagemadeeasy.com/organisationcloud/api-tokens .PP -Properties: -.IP \[bu] 2 -Config: permanent_token -.IP \[bu] 2 -Env Var: RCLONE_FILEFABRIC_PERMANENT_TOKEN -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS Advanced options -.PP -Here are the Advanced options specific to filefabric (Enterprise File -Fabric). -.SS --filefabric-token -.PP -Session Token. -.PP -This is a session token which rclone caches in the config file. -It is usually valid for 1 hour. -.PP -Don\[aq]t set this value - rclone will set it automatically. -.PP -Properties: -.IP \[bu] 2 -Config: token -.IP \[bu] 2 -Env Var: RCLONE_FILEFABRIC_TOKEN -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --filefabric-token-expiry -.PP -Token expiry time. -.PP -Don\[aq]t set this value - rclone will set it automatically. -.PP -Properties: -.IP \[bu] 2 -Config: token_expiry -.IP \[bu] 2 -Env Var: RCLONE_FILEFABRIC_TOKEN_EXPIRY -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --filefabric-version -.PP -Version read from the file fabric. -.PP -Don\[aq]t set this value - rclone will set it automatically. -.PP -Properties: -.IP \[bu] 2 -Config: version -.IP \[bu] 2 -Env Var: RCLONE_FILEFABRIC_VERSION -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --filefabric-encoding -.PP -The encoding for the backend. -.PP -See the encoding section in the -overview (https://rclone.org/overview/#encoding) for more info. -.PP -Properties: -.IP \[bu] 2 -Config: encoding -.IP \[bu] 2 -Env Var: RCLONE_FILEFABRIC_ENCODING -.IP \[bu] 2 -Type: MultiEncoder -.IP \[bu] 2 -Default: Slash,Del,Ctl,InvalidUtf8,Dot -.SH FTP -.PP -FTP is the File Transfer Protocol. -Rclone FTP support is provided using the -github.com/jlaffaye/ftp (https://godoc.org/github.com/jlaffaye/ftp) -package. -.PP -Limitations of Rclone\[aq]s FTP backend -.PP -Paths are specified as \f[C]remote:path\f[R]. -If the path does not begin with a \f[C]/\f[R] it is relative to the home -directory of the user. -An empty path \f[C]remote:\f[R] refers to the user\[aq]s home directory. -.SS Configuration -.PP -To create an FTP configuration named \f[C]remote\f[R], run -.IP -.nf -\f[C] -rclone config -\f[R] -.fi -.PP -Rclone config guides you through an interactive setup process. -A minimal rclone FTP remote definition only requires host, username and -password. -For an anonymous FTP server, see below. -.IP -.nf -\f[C] -No remotes found, make a new one? -n) New remote -r) Rename remote -c) Copy remote -s) Set configuration password -q) Quit config -n/r/c/s/q> n -name> remote -Type of storage to configure. -Enter a string value. Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value -[snip] -XX / FTP - \[rs] \[dq]ftp\[dq] -[snip] -Storage> ftp -** See help for ftp backend at: https://rclone.org/ftp/ ** - -FTP host to connect to -Enter a string value. Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value - 1 / Connect to ftp.example.com - \[rs] \[dq]ftp.example.com\[dq] -host> ftp.example.com -FTP username -Enter a string value. Press Enter for the default (\[dq]$USER\[dq]). -user> -FTP port number -Enter a signed integer. Press Enter for the default (21). -port> -FTP password -y) Yes type in my own password -g) Generate random password -y/g> y -Enter the password: -password: -Confirm the password: -password: -Use FTP over TLS (Implicit) -Enter a boolean value (true or false). Press Enter for the default (\[dq]false\[dq]). 
-tls> -Use FTP over TLS (Explicit) -Enter a boolean value (true or false). Press Enter for the default (\[dq]false\[dq]). -explicit_tls> -Remote config --------------------- -[remote] -type = ftp -host = ftp.example.com -pass = *** ENCRYPTED *** --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote +Enter a string value. +Press Enter for the default (\[dq]\[dq]). +permanent_token> xxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxx Edit advanced config? +(y/n) y) Yes n) No (default) y/n> n Remote config -------------------- +[remote] type = filefabric url = https://yourfabric.smestorage.com/ +permanent_token = xxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxx -------------------- +y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y -\f[R] -.fi -.PP -To see all directories in the home directory of \f[C]remote\f[R] .IP .nf \f[C] -rclone lsd remote: +Once configured you can then use \[ga]rclone\[ga] like this, + +List directories in top level of your Enterprise File Fabric + + rclone lsd remote: + +List all the files in your Enterprise File Fabric + + rclone ls remote: + +To copy a local directory to an Enterprise File Fabric directory called backup + + rclone copy /home/source remote:backup + +### Modified time and hashes + +The Enterprise File Fabric allows modification times to be set on +files accurate to 1 second. These will be used to detect whether +objects need syncing or not. + +The Enterprise File Fabric does not support any data hashes at this time. + +### Restricted filename characters + +The [default restricted characters set](https://rclone.org/overview/#restricted-characters) +will be replaced. + +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), +as they can\[aq]t be used in JSON strings. + +### Empty files + +Empty files aren\[aq]t supported by the Enterprise File Fabric. Rclone will therefore +upload an empty file as a single space with a mime type of +\[ga]application/vnd.rclone.empty.file\[ga] and files with that mime type are +treated as empty. + +### Root folder ID ### + +You can set the \[ga]root_folder_id\[ga] for rclone. This is the directory +(identified by its \[ga]Folder ID\[ga]) that rclone considers to be the root +of your Enterprise File Fabric. + +Normally you will leave this blank and rclone will determine the +correct root to use itself. + +However you can set this to restrict rclone to a specific folder +hierarchy. + +In order to do this you will have to find the \[ga]Folder ID\[ga] of the +directory you wish rclone to display. These aren\[aq]t displayed in the +web interface, but you can use \[ga]rclone lsf\[ga] to find them, for example \f[R] .fi .PP +$ rclone lsf --dirs-only -Fip --csv filefabric: 120673758,Burnt PDFs/ +120673759,My Quick Uploads/ 120673755,My Syncs/ 120673756,My backups/ +120673757,My contacts/ 120673761,S3 Storage/ +.IP +.nf +\f[C] +The ID for \[dq]S3 Storage\[dq] would be \[ga]120673761\[ga]. + + +### Standard options + +Here are the Standard options specific to filefabric (Enterprise File Fabric). + +#### --filefabric-url + +URL of the Enterprise File Fabric to connect to. + +Properties: + +- Config: url +- Env Var: RCLONE_FILEFABRIC_URL +- Type: string +- Required: true +- Examples: + - \[dq]https://storagemadeeasy.com\[dq] + - Storage Made Easy US + - \[dq]https://eu.storagemadeeasy.com\[dq] + - Storage Made Easy EU + - \[dq]https://yourfabric.smestorage.com\[dq] + - Connect to your Enterprise File Fabric + +#### --filefabric-root-folder-id + +ID of the root folder. 
+
+Leave blank normally.
+
+Fill in to make rclone start with directory of a given ID.
+
+
+Properties:
+
+- Config: root_folder_id
+- Env Var: RCLONE_FILEFABRIC_ROOT_FOLDER_ID
+- Type: string
+- Required: false
+
+#### --filefabric-permanent-token
+
+Permanent Authentication Token.
+
+A Permanent Authentication Token can be created in the Enterprise File
+Fabric: on the user\[aq]s Dashboard under Security there is an entry
+called \[dq]My Authentication Tokens\[dq]. Click the Manage button
+to create one.
+
+These tokens are normally valid for several years.
+
+For more info see: https://docs.storagemadeeasy.com/organisationcloud/api-tokens
+
+
+Properties:
+
+- Config: permanent_token
+- Env Var: RCLONE_FILEFABRIC_PERMANENT_TOKEN
+- Type: string
+- Required: false
+
+### Advanced options
+
+Here are the Advanced options specific to filefabric (Enterprise File Fabric).
+
+#### --filefabric-token
+
+Session Token.
+
+This is a session token which rclone caches in the config file. It is
+usually valid for 1 hour.
+
+Don\[aq]t set this value - rclone will set it automatically.
+
+
+Properties:
+
+- Config: token
+- Env Var: RCLONE_FILEFABRIC_TOKEN
+- Type: string
+- Required: false
+
+#### --filefabric-token-expiry
+
+Token expiry time.
+
+Don\[aq]t set this value - rclone will set it automatically.
+
+
+Properties:
+
+- Config: token_expiry
+- Env Var: RCLONE_FILEFABRIC_TOKEN_EXPIRY
+- Type: string
+- Required: false
+
+#### --filefabric-version
+
+Version read from the file fabric.
+
+Don\[aq]t set this value - rclone will set it automatically.
+
+
+Properties:
+
+- Config: version
+- Env Var: RCLONE_FILEFABRIC_VERSION
+- Type: string
+- Required: false
+
+#### --filefabric-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_FILEFABRIC_ENCODING
+- Type: MultiEncoder
+- Default: Slash,Del,Ctl,InvalidUtf8,Dot
+
+
+
+# FTP
+
+FTP is the File Transfer Protocol. Rclone FTP support is provided using the
+[github.com/jlaffaye/ftp](https://godoc.org/github.com/jlaffaye/ftp)
+package.
+
+[Limitations of Rclone\[aq]s FTP backend](#limitations)
+
+Paths are specified as \[ga]remote:path\[ga]. If the path does not begin with
+a \[ga]/\[ga] it is relative to the home directory of the user. An empty path
+\[ga]remote:\[ga] refers to the user\[aq]s home directory.
+
+## Configuration
+
+To create an FTP configuration named \[ga]remote\[ga], run
+
+    rclone config
+
+Rclone config guides you through an interactive setup process. A minimal
+rclone FTP remote definition only requires host, username and password.
+For an anonymous FTP server, see [below](#anonymous-ftp).
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+n/r/c/s/q> n
+name> remote
+Type of storage to configure.
+Enter a string value. Press Enter for the default (\[dq]\[dq]).
+Choose a number from below, or type in your own value
+[snip]
+XX / FTP
+ \[rs] \[dq]ftp\[dq]
+[snip]
+Storage> ftp
+** See help for ftp backend at: https://rclone.org/ftp/ **
+
+FTP host to connect to
+Enter a string value. Press Enter for the default (\[dq]\[dq]).
+Choose a number from below, or type in your own value
+ 1 / Connect to ftp.example.com
+ \[rs] \[dq]ftp.example.com\[dq]
+host> ftp.example.com
+FTP username
+Enter a string value. Press Enter for the default (\[dq]$USER\[dq]).
+user>
+FTP port number
+Enter a signed integer. Press Enter for the default (21).
+port>
+FTP password
+y) Yes type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Use FTP over TLS (Implicit)
+Enter a boolean value (true or false). Press Enter for the default (\[dq]false\[dq]).
+tls>
+Use FTP over TLS (Explicit)
+Enter a boolean value (true or false). Press Enter for the default (\[dq]false\[dq]).
+explicit_tls>
+Remote config
+--------------------
+[remote]
+type = ftp
+host = ftp.example.com
+pass = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+To see all directories in the home directory of \[ga]remote\[ga]
+
+    rclone lsd remote:
+
+Make a new directory
+
+    rclone mkdir remote:path/to/directory
+
+List the contents of a directory
+
+    rclone ls remote:path/to/directory
+
+Sync \[ga]/home/local/directory\[ga] to the remote directory, deleting any
+excess files in the directory.
+
+    rclone sync --interactive /home/local/directory remote:directory
+
+### Anonymous FTP
+
+When connecting to an FTP server that allows anonymous login, you can use the
+special \[dq]anonymous\[dq] username. Traditionally, this user account accepts any
+string as a password, although it is common to use either the password
+\[dq]anonymous\[dq] or \[dq]guest\[dq]. Some servers require the use of a valid e-mail
+address as password.
+
+Using [on-the-fly](#backend-path-to-dir) or
+[connection string](https://rclone.org/docs/#connection-strings) remotes makes it easy to access
+such servers, without requiring any configuration in advance. The following
+are examples of that:
+
+    rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=$(rclone obscure dummy)
+    rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=$(rclone obscure dummy):
+
+The above examples work in Linux shells and in PowerShell, but not Windows
+Command Prompt.
They execute the [rclone obscure](https://rclone.org/commands/rclone_obscure/)
+command to create a password string in the format required by the
+[pass](#ftp-pass) option. The following examples are exactly the same, except use
+an already obscured string representation of the same password \[dq]dummy\[dq], and
 therefore works even in Windows Command Prompt:
+
+    rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM
+    rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM:
+
+### Implicit TLS
+
+Rclone FTP supports implicit FTP over TLS servers (FTPS). This has to
+be enabled in the FTP backend config for the remote, or with
+[\[ga]--ftp-tls\[ga]](#ftp-tls). The default FTPS port is \[ga]990\[ga], not \[ga]21\[ga] and
+can be set with [\[ga]--ftp-port\[ga]](#ftp-port).
+
+### Restricted filename characters
+
+In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
+the following characters are also replaced:
+
+File names cannot end with the following characters. Replacement is
+limited to the last character in a file name:
+
+| Character | Value | Replacement |
+| --------- |:-----:|:-----------:|
+| SP | 0x20 | \[u2420] |
+
 Not all FTP servers can have all characters in file names, for example:
+
+| FTP Server| Forbidden characters |
+| --------- |:--------------------:|
+| proftpd | \[ga]*\[ga] |
+| pureftpd | \[ga]\[rs] [ ]\[ga] |
+
+This backend\[aq]s interactive configuration wizard provides a selection of
+sensible encoding settings for major FTP servers: ProFTPd, PureFTPd, VsFTPd.
 Just hit a selection number when prompted.
+
+
+### Standard options
+
 Here are the Standard options specific to ftp (FTP).
+
+#### --ftp-host
+
 FTP host to connect to.
+
+E.g. \[dq]ftp.example.com\[dq].
+
 Properties:
+
+- Config: host
+- Env Var: RCLONE_FTP_HOST
+- Type: string
+- Required: true
+
+#### --ftp-user
+
 FTP username.
-.PP + Properties: -.IP \[bu] 2 -Config: user -.IP \[bu] 2 -Env Var: RCLONE_FTP_USER -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]$USER\[dq] -.SS --ftp-port -.PP + +- Config: user +- Env Var: RCLONE_FTP_USER +- Type: string +- Default: \[dq]$USER\[dq] + +#### --ftp-port + FTP port number. -.PP + Properties: -.IP \[bu] 2 -Config: port -.IP \[bu] 2 -Env Var: RCLONE_FTP_PORT -.IP \[bu] 2 -Type: int -.IP \[bu] 2 -Default: 21 -.SS --ftp-pass -.PP + +- Config: port +- Env Var: RCLONE_FTP_PORT +- Type: int +- Default: 21 + +#### --ftp-pass + FTP password. -.PP -\f[B]NB\f[R] Input to this must be obscured - see rclone -obscure (https://rclone.org/commands/rclone_obscure/). -.PP + +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + Properties: -.IP \[bu] 2 -Config: pass -.IP \[bu] 2 -Env Var: RCLONE_FTP_PASS -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --ftp-tls -.PP + +- Config: pass +- Env Var: RCLONE_FTP_PASS +- Type: string +- Required: false + +#### --ftp-tls + Use Implicit FTPS (FTP over TLS). -.PP -When using implicit FTP over TLS the client connects using TLS right -from the start which breaks compatibility with non-TLS-aware servers. -This is usually served over port 990 rather than port 21. -Cannot be used in combination with explicit FTPS. -.PP + +When using implicit FTP over TLS the client connects using TLS +right from the start which breaks compatibility with +non-TLS-aware servers. This is usually served over port 990 rather +than port 21. Cannot be used in combination with explicit FTPS. + Properties: -.IP \[bu] 2 -Config: tls -.IP \[bu] 2 -Env Var: RCLONE_FTP_TLS -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --ftp-explicit-tls -.PP + +- Config: tls +- Env Var: RCLONE_FTP_TLS +- Type: bool +- Default: false + +#### --ftp-explicit-tls + Use Explicit FTPS (FTP over TLS). -.PP -When using explicit FTP over TLS the client explicitly requests security -from the server in order to upgrade a plain text connection to an -encrypted one. -Cannot be used in combination with implicit FTPS. -.PP + +When using explicit FTP over TLS the client explicitly requests +security from the server in order to upgrade a plain text connection +to an encrypted one. Cannot be used in combination with implicit FTPS. + Properties: -.IP \[bu] 2 -Config: explicit_tls -.IP \[bu] 2 -Env Var: RCLONE_FTP_EXPLICIT_TLS -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS Advanced options -.PP + +- Config: explicit_tls +- Env Var: RCLONE_FTP_EXPLICIT_TLS +- Type: bool +- Default: false + +### Advanced options + Here are the Advanced options specific to ftp (FTP). -.SS --ftp-concurrency -.PP + +#### --ftp-concurrency + Maximum number of FTP simultaneous connections, 0 for unlimited. -.PP -Note that setting this is very likely to cause deadlocks so it should be -used with care. -.PP + +Note that setting this is very likely to cause deadlocks so it should +be used with care. + If you are doing a sync or copy then make sure concurrency is one more -than the sum of \f[C]--transfers\f[R] and \f[C]--checkers\f[R]. -.PP -If you use \f[C]--check-first\f[R] then it just needs to be one more -than the maximum of \f[C]--checkers\f[R] and \f[C]--transfers\f[R]. -.PP -So for \f[C]concurrency 3\f[R] you\[aq]d use -\f[C]--checkers 2 --transfers 2 --check-first\f[R] or -\f[C]--checkers 1 --transfers 1\f[R]. -.PP +than the sum of \[ga]--transfers\[ga] and \[ga]--checkers\[ga]. 
+ +If you use \[ga]--check-first\[ga] then it just needs to be one more than the +maximum of \[ga]--checkers\[ga] and \[ga]--transfers\[ga]. + +So for \[ga]concurrency 3\[ga] you\[aq]d use \[ga]--checkers 2 --transfers 2 +--check-first\[ga] or \[ga]--checkers 1 --transfers 1\[ga]. + + + Properties: -.IP \[bu] 2 -Config: concurrency -.IP \[bu] 2 -Env Var: RCLONE_FTP_CONCURRENCY -.IP \[bu] 2 -Type: int -.IP \[bu] 2 -Default: 0 -.SS --ftp-no-check-certificate -.PP + +- Config: concurrency +- Env Var: RCLONE_FTP_CONCURRENCY +- Type: int +- Default: 0 + +#### --ftp-no-check-certificate + Do not verify the TLS certificate of the server. -.PP + Properties: -.IP \[bu] 2 -Config: no_check_certificate -.IP \[bu] 2 -Env Var: RCLONE_FTP_NO_CHECK_CERTIFICATE -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --ftp-disable-epsv -.PP + +- Config: no_check_certificate +- Env Var: RCLONE_FTP_NO_CHECK_CERTIFICATE +- Type: bool +- Default: false + +#### --ftp-disable-epsv + Disable using EPSV even if server advertises support. -.PP + Properties: -.IP \[bu] 2 -Config: disable_epsv -.IP \[bu] 2 -Env Var: RCLONE_FTP_DISABLE_EPSV -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --ftp-disable-mlsd -.PP + +- Config: disable_epsv +- Env Var: RCLONE_FTP_DISABLE_EPSV +- Type: bool +- Default: false + +#### --ftp-disable-mlsd + Disable using MLSD even if server advertises support. -.PP + Properties: -.IP \[bu] 2 -Config: disable_mlsd -.IP \[bu] 2 -Env Var: RCLONE_FTP_DISABLE_MLSD -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --ftp-disable-utf8 -.PP + +- Config: disable_mlsd +- Env Var: RCLONE_FTP_DISABLE_MLSD +- Type: bool +- Default: false + +#### --ftp-disable-utf8 + Disable using UTF-8 even if server advertises support. -.PP + Properties: -.IP \[bu] 2 -Config: disable_utf8 -.IP \[bu] 2 -Env Var: RCLONE_FTP_DISABLE_UTF8 -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --ftp-writing-mdtm -.PP + +- Config: disable_utf8 +- Env Var: RCLONE_FTP_DISABLE_UTF8 +- Type: bool +- Default: false + +#### --ftp-writing-mdtm + Use MDTM to set modification time (VsFtpd quirk) -.PP + Properties: -.IP \[bu] 2 -Config: writing_mdtm -.IP \[bu] 2 -Env Var: RCLONE_FTP_WRITING_MDTM -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --ftp-force-list-hidden -.PP -Use LIST -a to force listing of hidden files and folders. -This will disable the use of MLSD. -.PP + +- Config: writing_mdtm +- Env Var: RCLONE_FTP_WRITING_MDTM +- Type: bool +- Default: false + +#### --ftp-force-list-hidden + +Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD. + Properties: -.IP \[bu] 2 -Config: force_list_hidden -.IP \[bu] 2 -Env Var: RCLONE_FTP_FORCE_LIST_HIDDEN -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --ftp-idle-timeout -.PP + +- Config: force_list_hidden +- Env Var: RCLONE_FTP_FORCE_LIST_HIDDEN +- Type: bool +- Default: false + +#### --ftp-idle-timeout + Max time before closing idle connections. -.PP + If no connections have been returned to the connection pool in the time given, rclone will empty the connection pool. -.PP + Set to 0 to keep connections indefinitely. -.PP + + Properties: -.IP \[bu] 2 -Config: idle_timeout -.IP \[bu] 2 -Env Var: RCLONE_FTP_IDLE_TIMEOUT -.IP \[bu] 2 -Type: Duration -.IP \[bu] 2 -Default: 1m0s -.SS --ftp-close-timeout -.PP + +- Config: idle_timeout +- Env Var: RCLONE_FTP_IDLE_TIMEOUT +- Type: Duration +- Default: 1m0s + +#### --ftp-close-timeout + Maximum time to wait for a response to close. 
-.PP
 Properties:
-.IP \[bu] 2
-Config: close_timeout
-.IP \[bu] 2
-Env Var: RCLONE_FTP_CLOSE_TIMEOUT
-.IP \[bu] 2
-Type: Duration
-.IP \[bu] 2
-Default: 1m0s
-.SS --ftp-tls-cache-size
-.PP
+
+- Config: close_timeout
+- Env Var: RCLONE_FTP_CLOSE_TIMEOUT
+- Type: Duration
+- Default: 1m0s
+
+#### --ftp-tls-cache-size
+
 Size of TLS session cache for all control and data connections.
-.PP
-TLS cache allows to resume TLS sessions and reuse PSK between
-connections.
-Increase if default size is not enough resulting in TLS resumption
-errors.
-Enabled by default.
-Use 0 to disable.
-.PP
+
+The TLS cache allows rclone to resume TLS sessions and reuse PSK between connections.
+Increase the size if the default is not enough, resulting in TLS resumption errors.
+Enabled by default. Use 0 to disable.
+
 Properties:
-.IP \[bu] 2
-Config: tls_cache_size
-.IP \[bu] 2
-Env Var: RCLONE_FTP_TLS_CACHE_SIZE
-.IP \[bu] 2
-Type: int
-.IP \[bu] 2
-Default: 32
-.SS --ftp-disable-tls13
-.PP
+
+- Config: tls_cache_size
+- Env Var: RCLONE_FTP_TLS_CACHE_SIZE
+- Type: int
+- Default: 32
+
+#### --ftp-disable-tls13
+
 Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
-.PP
+
 Properties:
-.IP \[bu] 2
-Config: disable_tls13
-.IP \[bu] 2
-Env Var: RCLONE_FTP_DISABLE_TLS13
-.IP \[bu] 2
-Type: bool
-.IP \[bu] 2
-Default: false
-.SS --ftp-shut-timeout
-.PP
+
+- Config: disable_tls13
+- Env Var: RCLONE_FTP_DISABLE_TLS13
+- Type: bool
+- Default: false
+
+#### --ftp-shut-timeout
+
 Maximum time to wait for data connection closing status.
-.PP
+
 Properties:
-.IP \[bu] 2
-Config: shut_timeout
-.IP \[bu] 2
-Env Var: RCLONE_FTP_SHUT_TIMEOUT
-.IP \[bu] 2
-Type: Duration
-.IP \[bu] 2
-Default: 1m0s
-.SS --ftp-ask-password
-.PP
+
+- Config: shut_timeout
+- Env Var: RCLONE_FTP_SHUT_TIMEOUT
+- Type: Duration
+- Default: 1m0s
+
+#### --ftp-ask-password
+
 Allow asking for FTP password when needed.
-.PP
-If this is set and no password is supplied then rclone will ask for a
-password
-.PP
+
+If this is set and no password is supplied then rclone will ask for a password.
+
+
 Properties:
-.IP \[bu] 2
-Config: ask_password
-.IP \[bu] 2
-Env Var: RCLONE_FTP_ASK_PASSWORD
-.IP \[bu] 2
-Type: bool
-.IP \[bu] 2
-Default: false
-.SS --ftp-encoding
-.PP
+
+- Config: ask_password
+- Env Var: RCLONE_FTP_ASK_PASSWORD
+- Type: bool
+- Default: false
+
+#### --ftp-socks-proxy
+
+Socks 5 proxy host.
+
+Supports the format user:pass\[at]host:port, user\[at]host:port, host:port.
+
+Example:
+
+    myUser:myPass\[at]localhost:9005
+
+
+Properties:
+
+- Config: socks_proxy
+- Env Var: RCLONE_FTP_SOCKS_PROXY
+- Type: string
+- Required: false
+
+#### --ftp-encoding
+
 The encoding for the backend.
-.PP
-See the encoding section in the
-overview (https://rclone.org/overview/#encoding) for more info.
-.PP
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
 Properties:
-.IP \[bu] 2
-Config: encoding
-.IP \[bu] 2
-Env Var: RCLONE_FTP_ENCODING
-.IP \[bu] 2
-Type: MultiEncoder
-.IP \[bu] 2
-Default: Slash,Del,Ctl,RightSpace,Dot
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]Asterisk,Ctl,Dot,Slash\[dq]
-.RS 2
-.IP \[bu] 2
-ProFTPd can\[aq]t handle \[aq]*\[aq] in file names
-.RE
-.IP \[bu] 2
-\[dq]BackSlash,Ctl,Del,Dot,RightSpace,Slash,SquareBracket\[dq]
-.RS 2
-.IP \[bu] 2
-PureFTPd can\[aq]t handle \[aq][]\[aq] or \[aq]*\[aq] in file names
-.RE
-.IP \[bu] 2
-\[dq]Ctl,LeftPeriod,Slash\[dq]
-.RS 2
-.IP \[bu] 2
-VsFTPd can\[aq]t handle file names starting with dot
-.RE
-.RE
-.SS Limitations
-.PP
-FTP servers acting as rclone remotes must support \f[C]passive\f[R]
-mode.
-The mode cannot be configured as \f[C]passive\f[R] is the only supported
-one.
-Rclone\[aq]s FTP implementation is not compatible with \f[C]active\f[R]
-mode as the library it uses doesn\[aq]t support
-it (https://github.com/jlaffaye/ftp/issues/29).
+
+- Config: encoding
+- Env Var: RCLONE_FTP_ENCODING
+- Type: MultiEncoder
+- Default: Slash,Del,Ctl,RightSpace,Dot
+- Examples:
+    - \[dq]Asterisk,Ctl,Dot,Slash\[dq]
+        - ProFTPd can\[aq]t handle \[aq]*\[aq] in file names
+    - \[dq]BackSlash,Ctl,Del,Dot,RightSpace,Slash,SquareBracket\[dq]
+        - PureFTPd can\[aq]t handle \[aq][]\[aq] or \[aq]*\[aq] in file names
+    - \[dq]Ctl,LeftPeriod,Slash\[dq]
+        - VsFTPd can\[aq]t handle file names starting with dot
+
+
+
+## Limitations
+
+FTP servers acting as rclone remotes must support \[ga]passive\[ga] mode.
+The mode cannot be configured as \[ga]passive\[ga] is the only supported one.
+Rclone\[aq]s FTP implementation is not compatible with \[ga]active\[ga] mode
+as [the library it uses doesn\[aq]t support it](https://github.com/jlaffaye/ftp/issues/29).
 This will likely never be supported due to security concerns.
-.PP
+
 Rclone\[aq]s FTP backend does not support any checksums but can compare
 file sizes.
-.PP
-\f[C]rclone about\f[R] is not supported by the FTP backend.
-Backends without this capability cannot determine free space for an
-rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member
-of an rclone union remote.
-.PP
-See List of backends that do not support rclone
-about (https://rclone.org/overview/#optional-features) and rclone
-about (https://rclone.org/commands/rclone_about/)
-.PP
-The implementation of : \f[C]--dump headers\f[R],
-\f[C]--dump bodies\f[R], \f[C]--dump auth\f[R] for debugging isn\[aq]t
-the same as for rclone HTTP based backends - it has less fine grained
-control.
-.PP
-\f[C]--timeout\f[R] isn\[aq]t supported (but \f[C]--contimeout\f[R] is).
+
+\[ga]rclone about\[ga] is not supported by the FTP backend. Backends without
+this capability cannot determine free space for an rclone mount or
+use policy \[ga]mfs\[ga] (most free space) as a member of an rclone union
+remote.
+
+See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
+
+The implementation of: \[ga]--dump headers\[ga],
+\[ga]--dump bodies\[ga], \[ga]--dump auth\[ga] for debugging isn\[aq]t the same as
+for rclone HTTP-based backends - it has less fine-grained control.
+
+\[ga]--timeout\[ga] isn\[aq]t supported (but \[ga]--contimeout\[ga] is).
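+
+As a minimal sketch (the remote name \[ga]remote:\[ga] is a placeholder), a
+slow server can instead be given more time to accept the control
+connection by raising the connect timeout:
+
+    rclone lsd remote: --contimeout 2m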
+
+\[ga]--bind\[ga] isn\[aq]t supported.
+
+Rclone\[aq]s FTP backend could support server-side move but does not
+at present.
+
+The \[ga]ftp_proxy\[ga] environment variable is not currently supported.
+
+#### Modified time
+
+File modification time (timestamps) is supported to 1 second resolution
-for major FTP servers: ProFTPd, PureFTPd, VsFTPd, and FileZilla FTP
-server.
-The \f[C]VsFTPd\f[R] server has non-standard implementation of time
-related protocol commands and needs a special configuration setting:
-\f[C]writing_mdtm = true\f[R].
-.PP
-Support for precise file time with other FTP servers varies depending on
-what protocol extensions they advertise.
-If all the \f[C]MLSD\f[R], \f[C]MDTM\f[R] and \f[C]MFTM\f[R] extensions
-are present, rclone will use them together to provide precise time.
-Otherwise the times you see on the FTP server through rclone are those
-of the last file upload.
-.PP
-You can use the following command to check whether rclone can use
-precise time with your FTP server:
-\f[C]rclone backend features your_ftp_remote:\f[R] (the trailing colon
-is important).
-Look for the number in the line tagged by \f[C]Precision\f[R]
-designating the remote time precision expressed as nanoseconds.
-A value of \f[C]1000000000\f[R] means that file time precision of 1
-second is available.
-A value of \f[C]3153600000000000000\f[R] (or another large number) means
-\[dq]unsupported\[dq].
-.SH Google Cloud Storage
-.PP
-Paths are specified as \f[C]remote:bucket\f[R] (or \f[C]remote:\f[R] for
-the \f[C]lsd\f[R] command.) You may put subdirectories in too, e.g.
-\f[C]remote:bucket/path/to/dir\f[R].
-.SS Configuration
-.PP
-The initial setup for google cloud storage involves getting a token from
-Google Cloud Storage which you need to do in your browser.
-\f[C]rclone config\f[R] walks you through it.
+for major FTP servers: ProFTPd, PureFTPd, VsFTPd, and FileZilla FTP server.
+The \[ga]VsFTPd\[ga] server has a non-standard implementation of time-related protocol
+commands and needs a special configuration setting: \[ga]writing_mdtm = true\[ga].
+
+Support for precise file time with other FTP servers varies depending on what
+protocol extensions they advertise. If all the \[ga]MLSD\[ga], \[ga]MDTM\[ga] and \[ga]MFTM\[ga]
+extensions are present, rclone will use them together to provide precise time.
+Otherwise the times you see on the FTP server through rclone are those of the
+last file upload.
+
+You can use the following command to check whether rclone can use precise time
+with your FTP server: \[ga]rclone backend features your_ftp_remote:\[ga] (the trailing
+colon is important). Look for the number in the line tagged by \[ga]Precision\[ga]
+designating the remote time precision expressed as nanoseconds. A value of
+\[ga]1000000000\[ga] means that file time precision of 1 second is available.
+A value of \[ga]3153600000000000000\[ga] (or another large number) means \[dq]unsupported\[dq].
+
+# Google Cloud Storage
+
+Paths are specified as \[ga]remote:bucket\[ga] (or \[ga]remote:\[ga] for the \[ga]lsd\[ga]
+command.) You may put subdirectories in too, e.g. \[ga]remote:bucket/path/to/dir\[ga].
+
+## Configuration
+
+The initial setup for google cloud storage involves getting a token from Google Cloud Storage
+which you need to do in your browser. \[ga]rclone config\[ga] walks you
+through it.
+ +Here is an example of how to make a remote called \[ga]remote\[ga]. First run: + + rclone config + This will guide you through an interactive setup process: -.IP -.nf -\f[C] -n) New remote -d) Delete remote -q) Quit config -e/n/d/q> n -name> remote -Type of storage to configure. -Choose a number from below, or type in your own value -[snip] -XX / Google Cloud Storage (this is not Google Drive) - \[rs] \[dq]google cloud storage\[dq] -[snip] -Storage> google cloud storage -Google Application Client Id - leave blank normally. -client_id> -Google Application Client Secret - leave blank normally. -client_secret> -Project number optional - needed only for list/create/delete buckets - see your developer console. -project_number> 12345678 -Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login. -service_account_file> -Access Control List for new objects. -Choose a number from below, or type in your own value - 1 / Object owner gets OWNER access, and all Authenticated Users get READER access. - \[rs] \[dq]authenticatedRead\[dq] - 2 / Object owner gets OWNER access, and project team owners get OWNER access. - \[rs] \[dq]bucketOwnerFullControl\[dq] - 3 / Object owner gets OWNER access, and project team owners get READER access. - \[rs] \[dq]bucketOwnerRead\[dq] - 4 / Object owner gets OWNER access [default if left blank]. - \[rs] \[dq]private\[dq] - 5 / Object owner gets OWNER access, and project team members get access according to their roles. - \[rs] \[dq]projectPrivate\[dq] - 6 / Object owner gets OWNER access, and all Users get READER access. - \[rs] \[dq]publicRead\[dq] -object_acl> 4 -Access Control List for new buckets. -Choose a number from below, or type in your own value - 1 / Project team owners get OWNER access, and all Authenticated Users get READER access. - \[rs] \[dq]authenticatedRead\[dq] - 2 / Project team owners get OWNER access [default if left blank]. - \[rs] \[dq]private\[dq] - 3 / Project team members get access according to their roles. - \[rs] \[dq]projectPrivate\[dq] - 4 / Project team owners get OWNER access, and all Users get READER access. - \[rs] \[dq]publicRead\[dq] - 5 / Project team owners get OWNER access, and all Users get WRITER access. - \[rs] \[dq]publicReadWrite\[dq] -bucket_acl> 2 -Location for the newly created buckets. -Choose a number from below, or type in your own value - 1 / Empty for default location (US). - \[rs] \[dq]\[dq] - 2 / Multi-regional location for Asia. - \[rs] \[dq]asia\[dq] - 3 / Multi-regional location for Europe. - \[rs] \[dq]eu\[dq] - 4 / Multi-regional location for United States. - \[rs] \[dq]us\[dq] - 5 / Taiwan. - \[rs] \[dq]asia-east1\[dq] - 6 / Tokyo. - \[rs] \[dq]asia-northeast1\[dq] - 7 / Singapore. - \[rs] \[dq]asia-southeast1\[dq] - 8 / Sydney. - \[rs] \[dq]australia-southeast1\[dq] - 9 / Belgium. - \[rs] \[dq]europe-west1\[dq] -10 / London. - \[rs] \[dq]europe-west2\[dq] -11 / Iowa. - \[rs] \[dq]us-central1\[dq] -12 / South Carolina. - \[rs] \[dq]us-east1\[dq] -13 / Northern Virginia. - \[rs] \[dq]us-east4\[dq] -14 / Oregon. - \[rs] \[dq]us-west1\[dq] -location> 12 -The storage class to use when storing objects in Google Cloud Storage. 
-Choose a number from below, or type in your own value
- 1 / Default
- \[rs] \[dq]\[dq]
- 2 / Multi-regional storage class
- \[rs] \[dq]MULTI_REGIONAL\[dq]
- 3 / Regional storage class
- \[rs] \[dq]REGIONAL\[dq]
- 4 / Nearline storage class
- \[rs] \[dq]NEARLINE\[dq]
- 5 / Coldline storage class
- \[rs] \[dq]COLDLINE\[dq]
- 6 / Durable reduced availability storage class
- \[rs] \[dq]DURABLE_REDUCED_AVAILABILITY\[dq]
-storage_class> 5
-Remote config
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+n) New remote
+d) Delete remote
+q) Quit config
+e/n/d/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Google Cloud Storage (this is not Google Drive)
+ \[rs] \[dq]google cloud storage\[dq]
+[snip]
+Storage> google cloud storage
+Google Application Client Id - leave blank normally.
+client_id>
+Google Application Client Secret - leave blank normally.
+client_secret>
+Project number optional - needed only for list/create/delete buckets - see your developer console.
+project_number> 12345678
+Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
+service_account_file>
+Access Control List for new objects.
+Choose a number from below, or type in your own value
+ 1 / Object owner gets OWNER access, and all Authenticated Users get READER access.
+ \[rs] \[dq]authenticatedRead\[dq]
+ 2 / Object owner gets OWNER access, and project team owners get OWNER access.
+ \[rs] \[dq]bucketOwnerFullControl\[dq]
+ 3 / Object owner gets OWNER access, and project team owners get READER access.
+ \[rs] \[dq]bucketOwnerRead\[dq]
+ 4 / Object owner gets OWNER access [default if left blank].
+ \[rs] \[dq]private\[dq]
+ 5 / Object owner gets OWNER access, and project team members get access according to their roles.
+ \[rs] \[dq]projectPrivate\[dq]
+ 6 / Object owner gets OWNER access, and all Users get READER access.
+ \[rs] \[dq]publicRead\[dq]
+object_acl> 4
+Access Control List for new buckets.
+Choose a number from below, or type in your own value
+ 1 / Project team owners get OWNER access, and all Authenticated Users get READER access.
+ \[rs] \[dq]authenticatedRead\[dq]
+ 2 / Project team owners get OWNER access [default if left blank].
+ \[rs] \[dq]private\[dq]
+ 3 / Project team members get access according to their roles.
+ \[rs] \[dq]projectPrivate\[dq]
+ 4 / Project team owners get OWNER access, and all Users get READER access.
+ \[rs] \[dq]publicRead\[dq]
+ 5 / Project team owners get OWNER access, and all Users get WRITER access.
+ \[rs] \[dq]publicReadWrite\[dq]
+bucket_acl> 2
+Location for the newly created buckets.
+Choose a number from below, or type in your own value
+ 1 / Empty for default location (US).
+ \[rs] \[dq]\[dq]
+ 2 / Multi-regional location for Asia.
+ \[rs] \[dq]asia\[dq]
+ 3 / Multi-regional location for Europe.
+ \[rs] \[dq]eu\[dq]
+ 4 / Multi-regional location for United States.
+ \[rs] \[dq]us\[dq]
+ 5 / Taiwan.
+ \[rs] \[dq]asia-east1\[dq]
+ 6 / Tokyo.
+ \[rs] \[dq]asia-northeast1\[dq]
+ 7 / Singapore.
+ \[rs] \[dq]asia-southeast1\[dq]
+ 8 / Sydney.
+ \[rs] \[dq]australia-southeast1\[dq]
+ 9 / Belgium.
+ \[rs] \[dq]europe-west1\[dq]
+10 / London.
+ \[rs] \[dq]europe-west2\[dq]
+11 / Iowa.
+ \[rs] \[dq]us-central1\[dq]
+12 / South Carolina.
+ \[rs] \[dq]us-east1\[dq]
+13 / Northern Virginia.
+ \[rs] \[dq]us-east4\[dq]
+14 / Oregon.
+ \[rs] \[dq]us-west1\[dq]
+location> 12
+The storage class to use when storing objects in Google Cloud Storage.
+Choose a number from below, or type in your own value
+ 1 / Default
+ \[rs] \[dq]\[dq]
+ 2 / Multi-regional storage class
+ \[rs] \[dq]MULTI_REGIONAL\[dq]
+ 3 / Regional storage class
+ \[rs] \[dq]REGIONAL\[dq]
+ 4 / Nearline storage class
+ \[rs] \[dq]NEARLINE\[dq]
+ 5 / Coldline storage class
+ \[rs] \[dq]COLDLINE\[dq]
+ 6 / Durable reduced availability storage class
+ \[rs] \[dq]DURABLE_REDUCED_AVAILABILITY\[dq]
+storage_class> 5
+Remote config
+Use web browser to automatically authenticate rclone with remote?
+ * Say Y if the machine running rclone has a web browser you can use
+ * Say N if running rclone on a (remote) machine without web browser access
+If not sure try Y. If Y failed, try N.
+y) Yes
+n) No
+y/n> y
+If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[remote]
+type = google cloud storage
+client_id =
+client_secret =
+token = {\[dq]AccessToken\[dq]:\[dq]xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\[dq],\[dq]RefreshToken\[dq]:\[dq]x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx\[dq],\[dq]Expiry\[dq]:\[dq]2014-07-17T20:49:14.929208288+01:00\[dq],\[dq]Extra\[dq]:null}
+project_number = 12345678
+object_acl = private
+bucket_acl = private
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.IP
+.nf
+\f[C]
+See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
+machine with no Internet browser available.
+
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from Google if using web browser to automatically
+authenticate. This only runs from the moment it opens your browser to
+the moment you get back the verification code. This is on
+\[ga]http://127.0.0.1:53682/\[ga] and it may require you to unblock it
+temporarily if you are running a host firewall, or use manual mode.
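+
+As a sketch of the manual alternative (the authoritative steps are in the
+remote setup docs linked above), you can run the authorization step on a
+machine which does have a browser and paste the token it prints into the
+config prompt on the browserless machine:
+
+    rclone authorize \[dq]google cloud storage\[dq]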
+ +This remote is called \[ga]remote\[ga] and can now be used like this + See all the buckets in your project -.IP -.nf -\f[C] -rclone lsd remote: -\f[R] -.fi -.PP + + rclone lsd remote: + Make a new bucket -.IP -.nf -\f[C] -rclone mkdir remote:bucket -\f[R] -.fi -.PP + + rclone mkdir remote:bucket + List the contents of a bucket -.IP -.nf -\f[C] -rclone ls remote:bucket -\f[R] -.fi -.PP -Sync \f[C]/home/local/directory\f[R] to the remote bucket, deleting any -excess files in the bucket. -.IP -.nf -\f[C] -rclone sync --interactive /home/local/directory remote:bucket -\f[R] -.fi -.SS Service Account support -.PP + + rclone ls remote:bucket + +Sync \[ga]/home/local/directory\[ga] to the remote bucket, deleting any excess +files in the bucket. + + rclone sync --interactive /home/local/directory remote:bucket + +### Service Account support + You can set up rclone with Google Cloud Storage in an unattended mode, -i.e. -not tied to a specific end-user Google account. -This is useful when you want to synchronise files onto machines that -don\[aq]t have actively logged-in users, for example build machines. -.PP -To get credentials for Google Cloud Platform IAM Service -Accounts (https://cloud.google.com/iam/docs/service-accounts), please -head to the Service -Account (https://console.cloud.google.com/permissions/serviceaccounts) -section of the Google Developer Console. -Service Accounts behave just like normal \f[C]User\f[R] permissions in -Google Cloud Storage -ACLs (https://cloud.google.com/storage/docs/access-control), so you can -limit their access (e.g. -make them read only). -After creating an account, a JSON file containing the Service -Account\[aq]s credentials will be downloaded onto your machines. -These credentials are what rclone will use for authentication. -.PP -To use a Service Account instead of OAuth2 token flow, enter the path to -your Service Account credentials at the \f[C]service_account_file\f[R] -prompt and rclone won\[aq]t use the browser based authentication flow. -If you\[aq]d rather stuff the contents of the credentials file into the -rclone config file, you can set \f[C]service_account_credentials\f[R] -with the actual contents of the file instead, or set the equivalent +i.e. not tied to a specific end-user Google account. This is useful +when you want to synchronise files onto machines that don\[aq]t have +actively logged-in users, for example build machines. + +To get credentials for Google Cloud Platform +[IAM Service Accounts](https://cloud.google.com/iam/docs/service-accounts), +please head to the +[Service Account](https://console.cloud.google.com/permissions/serviceaccounts) +section of the Google Developer Console. Service Accounts behave just +like normal \[ga]User\[ga] permissions in +[Google Cloud Storage ACLs](https://cloud.google.com/storage/docs/access-control), +so you can limit their access (e.g. make them read only). After +creating an account, a JSON file containing the Service Account\[aq]s +credentials will be downloaded onto your machines. These credentials +are what rclone will use for authentication. + +To use a Service Account instead of OAuth2 token flow, enter the path +to your Service Account credentials at the \[ga]service_account_file\[ga] +prompt and rclone won\[aq]t use the browser based authentication +flow. If you\[aq]d rather stuff the contents of the credentials file into +the rclone config file, you can set \[ga]service_account_credentials\[ga] with +the actual contents of the file instead, or set the equivalent environment variable. 
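+
+As a short sketch (the remote name \[ga]gcs:\[ga] and the key file path are
+placeholders, not taken from this manual), the same service account file
+can also be supplied per-invocation with the flag, or with the matching
+environment variable documented below:
+
+    rclone lsd gcs: --gcs-service-account-file /path/to/service-account.json
+
+    RCLONE_GCS_SERVICE_ACCOUNT_FILE=/path/to/service-account.json rclone lsd gcs: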
-.SS Anonymous Access -.PP -For downloads of objects that permit public access you can configure -rclone to use anonymous access by setting \f[C]anonymous\f[R] to -\f[C]true\f[R]. -With unauthorized access you can\[aq]t write or create files but only -read or list those buckets and objects that have public read access. -.SS Application Default Credentials -.PP -If no other source of credentials is provided, rclone will fall back to -Application Default -Credentials (https://cloud.google.com/video-intelligence/docs/common/auth#authenticating_with_application_default_credentials) -this is useful both when you already have configured authentication for -your developer account, or in production when running on a google -compute host. -Note that if running in docker, you may need to run additional commands -on your google compute machine - see this -page (https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud_as_a_docker_credential_helper). -.PP -Note that in the case application default credentials are used, there is -no need to explicitly configure a project number. -.SS --fast-list -.PP -This remote supports \f[C]--fast-list\f[R] which allows you to use fewer -transactions in exchange for more memory. -See the rclone docs (https://rclone.org/docs/#fast-list) for more -details. -.SS Custom upload headers -.PP -You can set custom upload headers with the \f[C]--header-upload\f[R] -flag. -Google Cloud Storage supports the headers as described in the working -with metadata -documentation (https://cloud.google.com/storage/docs/gsutil/addlhelp/WorkingWithObjectMetadata) -.IP \[bu] 2 -Cache-Control -.IP \[bu] 2 -Content-Disposition -.IP \[bu] 2 -Content-Encoding -.IP \[bu] 2 -Content-Language -.IP \[bu] 2 -Content-Type -.IP \[bu] 2 -X-Goog-Storage-Class -.IP \[bu] 2 -X-Goog-Meta- -.PP -Eg \f[C]--header-upload \[dq]Content-Type text/potato\[dq]\f[R] -.PP + +### Anonymous Access + +For downloads of objects that permit public access you can configure rclone +to use anonymous access by setting \[ga]anonymous\[ga] to \[ga]true\[ga]. +With unauthorized access you can\[aq]t write or create files but only read or list +those buckets and objects that have public read access. + +### Application Default Credentials + +If no other source of credentials is provided, rclone will fall back +to +[Application Default Credentials](https://cloud.google.com/video-intelligence/docs/common/auth#authenticating_with_application_default_credentials) +this is useful both when you already have configured authentication +for your developer account, or in production when running on a google +compute host. Note that if running in docker, you may need to run +additional commands on your google compute machine - +[see this page](https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud_as_a_docker_credential_helper). + +Note that in the case application default credentials are used, there +is no need to explicitly configure a project number. + +### --fast-list + +This remote supports \[ga]--fast-list\[ga] which allows you to use fewer +transactions in exchange for more memory. See the [rclone +docs](https://rclone.org/docs/#fast-list) for more details. + +### Custom upload headers + +You can set custom upload headers with the \[ga]--header-upload\[ga] +flag. 
Google Cloud Storage supports the headers as described in the +[working with metadata documentation](https://cloud.google.com/storage/docs/gsutil/addlhelp/WorkingWithObjectMetadata) + +- Cache-Control +- Content-Disposition +- Content-Encoding +- Content-Language +- Content-Type +- X-Goog-Storage-Class +- X-Goog-Meta- + +Eg \[ga]--header-upload \[dq]Content-Type text/potato\[dq]\[ga] + Note that the last of these is for setting custom metadata in the form -\f[C]--header-upload \[dq]x-goog-meta-key: value\[dq]\f[R] -.SS Modification time -.PP +\[ga]--header-upload \[dq]x-goog-meta-key: value\[dq]\[ga] + +### Modification time + Google Cloud Storage stores md5sum natively. -Google\[aq]s gsutil (https://cloud.google.com/storage/docs/gsutil) tool -stores modification time with one-second precision as -\f[C]goog-reserved-file-mtime\f[R] in file metadata. -.PP -To ensure compatibility with gsutil, rclone stores modification time in -2 separate metadata entries. -\f[C]mtime\f[R] uses RFC3339 format with one-nanosecond precision. -\f[C]goog-reserved-file-mtime\f[R] uses Unix timestamp format with -one-second precision. -To get modification time from object metadata, rclone reads the metadata -in the following order: \f[C]mtime\f[R], -\f[C]goog-reserved-file-mtime\f[R], object updated time. -.PP +Google\[aq]s [gsutil](https://cloud.google.com/storage/docs/gsutil) tool stores modification time +with one-second precision as \[ga]goog-reserved-file-mtime\[ga] in file metadata. + +To ensure compatibility with gsutil, rclone stores modification time in 2 separate metadata entries. +\[ga]mtime\[ga] uses RFC3339 format with one-nanosecond precision. +\[ga]goog-reserved-file-mtime\[ga] uses Unix timestamp format with one-second precision. +To get modification time from object metadata, rclone reads the metadata in the following order: \[ga]mtime\[ga], \[ga]goog-reserved-file-mtime\[ga], object updated time. + Note that rclone\[aq]s default modify window is 1ns. -Files uploaded by gsutil only contain timestamps with one-second -precision. -If you use rclone to sync files previously uploaded by gsutil, rclone -will attempt to update modification time for all these files. -To avoid these possibly unnecessary updates, use -\f[C]--modify-window 1s\f[R]. -.SS Restricted filename characters -.PP -.TS -tab(@); -l c c. -T{ -Character -T}@T{ -Value -T}@T{ -Replacement -T} -_ -T{ -NUL -T}@T{ -0x00 -T}@T{ -\[u2400] -T} -T{ -LF -T}@T{ -0x0A -T}@T{ -\[u240A] -T} -T{ -CR -T}@T{ -0x0D -T}@T{ -\[u240D] -T} -T{ -/ -T}@T{ -0x2F -T}@T{ -\[uFF0F] -T} -.TE -.PP -Invalid UTF-8 bytes will also be -replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t -be used in JSON strings. -.SS Standard options -.PP -Here are the Standard options specific to google cloud storage (Google -Cloud Storage (this is not Google Drive)). -.SS --gcs-client-id -.PP +Files uploaded by gsutil only contain timestamps with one-second precision. +If you use rclone to sync files previously uploaded by gsutil, +rclone will attempt to update modification time for all these files. +To avoid these possibly unnecessary updates, use \[ga]--modify-window 1s\[ga]. + +### Restricted filename characters + +| Character | Value | Replacement | +| --------- |:-----:|:-----------:| +| NUL | 0x00 | \[u2400] | +| LF | 0x0A | \[u240A] | +| CR | 0x0D | \[u240D] | +| / | 0x2F | \[uFF0F] | + +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), +as they can\[aq]t be used in JSON strings. 
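+
+As one combined sketch (the bucket and file names are placeholders): a
+custom header and custom metadata can be set on upload with repeated
+\[ga]--header-upload\[ga] flags, and a sync against files originally uploaded
+by gsutil can relax the precision check with \[ga]--modify-window\[ga]:
+
+    rclone copy file.txt remote:bucket --header-upload \[dq]Cache-Control: max-age=3600\[dq] --header-upload \[dq]x-goog-meta-source: laptop\[dq]
+
+    rclone sync --interactive --modify-window 1s /home/local/directory remote:bucket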
+
+
+### Standard options
+
+Here are the Standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
+
+#### --gcs-client-id
+
+OAuth Client Id.
+
+Leave blank normally.
+
+Properties:
+
+- Config: client_id
+- Env Var: RCLONE_GCS_CLIENT_ID
+- Type: string
+- Required: false
+
+#### --gcs-client-secret
+
+OAuth Client Secret.
+
+Leave blank normally.
+
+Properties:
+
+- Config: client_secret
+- Env Var: RCLONE_GCS_CLIENT_SECRET
+- Type: string
+- Required: false
+
+#### --gcs-project-number
+
+Project number.
+
+Optional - needed only for list/create/delete buckets - see your developer console.
+
+Properties:
+
+- Config: project_number
+- Env Var: RCLONE_GCS_PROJECT_NUMBER
+- Type: string
+- Required: false
+
+#### --gcs-user-project
+
+User project.
+
+Optional - needed only for requester pays.
+
+Properties:
+
+- Config: user_project
+- Env Var: RCLONE_GCS_USER_PROJECT
+- Type: string
+- Required: false
+
+#### --gcs-service-account-file
+
+Service Account Credentials JSON file path.
+
+Leave blank normally.
+Needed only if you want to use SA instead of interactive login.
+
+Leading \[ga]\[ti]\[ga] will be expanded in the file name as will environment variables such as \[ga]${RCLONE_CONFIG_DIR}\[ga].
+
+Properties:
+
+- Config: service_account_file
+- Env Var: RCLONE_GCS_SERVICE_ACCOUNT_FILE
+- Type: string
+- Required: false
+
+#### --gcs-service-account-credentials
+
+Service Account Credentials JSON blob.
+
+Leave blank normally.
+Needed only if you want to use SA instead of interactive login.
+
+Properties:
+
+- Config: service_account_credentials
+- Env Var: RCLONE_GCS_SERVICE_ACCOUNT_CREDENTIALS
+- Type: string
+- Required: false
+
+#### --gcs-anonymous
+
+Access public buckets and objects without credentials.
+
+Set to \[aq]true\[aq] if you just want to download files and don\[aq]t configure credentials.
+ Properties: -.IP \[bu] 2 -Config: anonymous -.IP \[bu] 2 -Env Var: RCLONE_GCS_ANONYMOUS -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --gcs-object-acl -.PP + +- Config: anonymous +- Env Var: RCLONE_GCS_ANONYMOUS +- Type: bool +- Default: false + +#### --gcs-object-acl + Access Control List for new objects. -.PP + Properties: -.IP \[bu] 2 -Config: object_acl -.IP \[bu] 2 -Env Var: RCLONE_GCS_OBJECT_ACL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]authenticatedRead\[dq] -.RS 2 -.IP \[bu] 2 -Object owner gets OWNER access. -.IP \[bu] 2 -All Authenticated Users get READER access. -.RE -.IP \[bu] 2 -\[dq]bucketOwnerFullControl\[dq] -.RS 2 -.IP \[bu] 2 -Object owner gets OWNER access. -.IP \[bu] 2 -Project team owners get OWNER access. -.RE -.IP \[bu] 2 -\[dq]bucketOwnerRead\[dq] -.RS 2 -.IP \[bu] 2 -Object owner gets OWNER access. -.IP \[bu] 2 -Project team owners get READER access. -.RE -.IP \[bu] 2 -\[dq]private\[dq] -.RS 2 -.IP \[bu] 2 -Object owner gets OWNER access. -.IP \[bu] 2 -Default if left blank. -.RE -.IP \[bu] 2 -\[dq]projectPrivate\[dq] -.RS 2 -.IP \[bu] 2 -Object owner gets OWNER access. -.IP \[bu] 2 -Project team members get access according to their roles. -.RE -.IP \[bu] 2 -\[dq]publicRead\[dq] -.RS 2 -.IP \[bu] 2 -Object owner gets OWNER access. -.IP \[bu] 2 -All Users get READER access. -.RE -.RE -.SS --gcs-bucket-acl -.PP + +- Config: object_acl +- Env Var: RCLONE_GCS_OBJECT_ACL +- Type: string +- Required: false +- Examples: + - \[dq]authenticatedRead\[dq] + - Object owner gets OWNER access. + - All Authenticated Users get READER access. + - \[dq]bucketOwnerFullControl\[dq] + - Object owner gets OWNER access. + - Project team owners get OWNER access. + - \[dq]bucketOwnerRead\[dq] + - Object owner gets OWNER access. + - Project team owners get READER access. + - \[dq]private\[dq] + - Object owner gets OWNER access. + - Default if left blank. + - \[dq]projectPrivate\[dq] + - Object owner gets OWNER access. + - Project team members get access according to their roles. + - \[dq]publicRead\[dq] + - Object owner gets OWNER access. + - All Users get READER access. + +#### --gcs-bucket-acl + Access Control List for new buckets. -.PP + Properties: -.IP \[bu] 2 -Config: bucket_acl -.IP \[bu] 2 -Env Var: RCLONE_GCS_BUCKET_ACL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]authenticatedRead\[dq] -.RS 2 -.IP \[bu] 2 -Project team owners get OWNER access. -.IP \[bu] 2 -All Authenticated Users get READER access. -.RE -.IP \[bu] 2 -\[dq]private\[dq] -.RS 2 -.IP \[bu] 2 -Project team owners get OWNER access. -.IP \[bu] 2 -Default if left blank. -.RE -.IP \[bu] 2 -\[dq]projectPrivate\[dq] -.RS 2 -.IP \[bu] 2 -Project team members get access according to their roles. -.RE -.IP \[bu] 2 -\[dq]publicRead\[dq] -.RS 2 -.IP \[bu] 2 -Project team owners get OWNER access. -.IP \[bu] 2 -All Users get READER access. -.RE -.IP \[bu] 2 -\[dq]publicReadWrite\[dq] -.RS 2 -.IP \[bu] 2 -Project team owners get OWNER access. -.IP \[bu] 2 -All Users get WRITER access. -.RE -.RE -.SS --gcs-bucket-policy-only -.PP + +- Config: bucket_acl +- Env Var: RCLONE_GCS_BUCKET_ACL +- Type: string +- Required: false +- Examples: + - \[dq]authenticatedRead\[dq] + - Project team owners get OWNER access. + - All Authenticated Users get READER access. + - \[dq]private\[dq] + - Project team owners get OWNER access. + - Default if left blank. 
+ - \[dq]projectPrivate\[dq] + - Project team members get access according to their roles. + - \[dq]publicRead\[dq] + - Project team owners get OWNER access. + - All Users get READER access. + - \[dq]publicReadWrite\[dq] + - Project team owners get OWNER access. + - All Users get WRITER access. + +#### --gcs-bucket-policy-only + Access checks should use bucket-level IAM policies. -.PP + If you want to upload objects to a bucket with Bucket Policy Only set then you will need to set this. -.PP + When it is set, rclone: -.IP \[bu] 2 -ignores ACLs set on buckets -.IP \[bu] 2 -ignores ACLs set on objects -.IP \[bu] 2 -creates buckets with Bucket Policy Only set -.PP + +- ignores ACLs set on buckets +- ignores ACLs set on objects +- creates buckets with Bucket Policy Only set + Docs: https://cloud.google.com/storage/docs/bucket-policy-only -.PP + + Properties: -.IP \[bu] 2 -Config: bucket_policy_only -.IP \[bu] 2 -Env Var: RCLONE_GCS_BUCKET_POLICY_ONLY -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --gcs-location -.PP + +- Config: bucket_policy_only +- Env Var: RCLONE_GCS_BUCKET_POLICY_ONLY +- Type: bool +- Default: false + +#### --gcs-location + Location for the newly created buckets. -.PP + Properties: -.IP \[bu] 2 -Config: location -.IP \[bu] 2 -Env Var: RCLONE_GCS_LOCATION -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]\[dq] -.RS 2 -.IP \[bu] 2 -Empty for default location (US) -.RE -.IP \[bu] 2 -\[dq]asia\[dq] -.RS 2 -.IP \[bu] 2 -Multi-regional location for Asia -.RE -.IP \[bu] 2 -\[dq]eu\[dq] -.RS 2 -.IP \[bu] 2 -Multi-regional location for Europe -.RE -.IP \[bu] 2 -\[dq]us\[dq] -.RS 2 -.IP \[bu] 2 -Multi-regional location for United States -.RE -.IP \[bu] 2 -\[dq]asia-east1\[dq] -.RS 2 -.IP \[bu] 2 -Taiwan -.RE -.IP \[bu] 2 -\[dq]asia-east2\[dq] -.RS 2 -.IP \[bu] 2 -Hong Kong -.RE -.IP \[bu] 2 -\[dq]asia-northeast1\[dq] -.RS 2 -.IP \[bu] 2 -Tokyo -.RE -.IP \[bu] 2 -\[dq]asia-northeast2\[dq] -.RS 2 -.IP \[bu] 2 -Osaka -.RE -.IP \[bu] 2 -\[dq]asia-northeast3\[dq] -.RS 2 -.IP \[bu] 2 -Seoul -.RE -.IP \[bu] 2 -\[dq]asia-south1\[dq] -.RS 2 -.IP \[bu] 2 -Mumbai -.RE -.IP \[bu] 2 -\[dq]asia-south2\[dq] -.RS 2 -.IP \[bu] 2 -Delhi -.RE -.IP \[bu] 2 -\[dq]asia-southeast1\[dq] -.RS 2 -.IP \[bu] 2 -Singapore -.RE -.IP \[bu] 2 -\[dq]asia-southeast2\[dq] -.RS 2 -.IP \[bu] 2 -Jakarta -.RE -.IP \[bu] 2 -\[dq]australia-southeast1\[dq] -.RS 2 -.IP \[bu] 2 -Sydney -.RE -.IP \[bu] 2 -\[dq]australia-southeast2\[dq] -.RS 2 -.IP \[bu] 2 -Melbourne -.RE -.IP \[bu] 2 -\[dq]europe-north1\[dq] -.RS 2 -.IP \[bu] 2 -Finland -.RE -.IP \[bu] 2 -\[dq]europe-west1\[dq] -.RS 2 -.IP \[bu] 2 -Belgium -.RE -.IP \[bu] 2 -\[dq]europe-west2\[dq] -.RS 2 -.IP \[bu] 2 -London -.RE -.IP \[bu] 2 -\[dq]europe-west3\[dq] -.RS 2 -.IP \[bu] 2 -Frankfurt -.RE -.IP \[bu] 2 -\[dq]europe-west4\[dq] -.RS 2 -.IP \[bu] 2 -Netherlands -.RE -.IP \[bu] 2 -\[dq]europe-west6\[dq] -.RS 2 -.IP \[bu] 2 -Z\[:u]rich -.RE -.IP \[bu] 2 -\[dq]europe-central2\[dq] -.RS 2 -.IP \[bu] 2 -Warsaw -.RE -.IP \[bu] 2 -\[dq]us-central1\[dq] -.RS 2 -.IP \[bu] 2 -Iowa -.RE -.IP \[bu] 2 -\[dq]us-east1\[dq] -.RS 2 -.IP \[bu] 2 -South Carolina -.RE -.IP \[bu] 2 -\[dq]us-east4\[dq] -.RS 2 -.IP \[bu] 2 -Northern Virginia -.RE -.IP \[bu] 2 -\[dq]us-west1\[dq] -.RS 2 -.IP \[bu] 2 -Oregon -.RE -.IP \[bu] 2 -\[dq]us-west2\[dq] -.RS 2 -.IP \[bu] 2 -California -.RE -.IP \[bu] 2 -\[dq]us-west3\[dq] -.RS 2 -.IP \[bu] 2 -Salt Lake City -.RE -.IP \[bu] 2 -\[dq]us-west4\[dq] -.RS 2 -.IP \[bu] 2 -Las Vegas -.RE 
-.IP \[bu] 2 -\[dq]northamerica-northeast1\[dq] -.RS 2 -.IP \[bu] 2 -Montr\['e]al -.RE -.IP \[bu] 2 -\[dq]northamerica-northeast2\[dq] -.RS 2 -.IP \[bu] 2 -Toronto -.RE -.IP \[bu] 2 -\[dq]southamerica-east1\[dq] -.RS 2 -.IP \[bu] 2 -S\[~a]o Paulo -.RE -.IP \[bu] 2 -\[dq]southamerica-west1\[dq] -.RS 2 -.IP \[bu] 2 -Santiago -.RE -.IP \[bu] 2 -\[dq]asia1\[dq] -.RS 2 -.IP \[bu] 2 -Dual region: asia-northeast1 and asia-northeast2. -.RE -.IP \[bu] 2 -\[dq]eur4\[dq] -.RS 2 -.IP \[bu] 2 -Dual region: europe-north1 and europe-west4. -.RE -.IP \[bu] 2 -\[dq]nam4\[dq] -.RS 2 -.IP \[bu] 2 -Dual region: us-central1 and us-east1. -.RE -.RE -.SS --gcs-storage-class -.PP + +- Config: location +- Env Var: RCLONE_GCS_LOCATION +- Type: string +- Required: false +- Examples: + - \[dq]\[dq] + - Empty for default location (US) + - \[dq]asia\[dq] + - Multi-regional location for Asia + - \[dq]eu\[dq] + - Multi-regional location for Europe + - \[dq]us\[dq] + - Multi-regional location for United States + - \[dq]asia-east1\[dq] + - Taiwan + - \[dq]asia-east2\[dq] + - Hong Kong + - \[dq]asia-northeast1\[dq] + - Tokyo + - \[dq]asia-northeast2\[dq] + - Osaka + - \[dq]asia-northeast3\[dq] + - Seoul + - \[dq]asia-south1\[dq] + - Mumbai + - \[dq]asia-south2\[dq] + - Delhi + - \[dq]asia-southeast1\[dq] + - Singapore + - \[dq]asia-southeast2\[dq] + - Jakarta + - \[dq]australia-southeast1\[dq] + - Sydney + - \[dq]australia-southeast2\[dq] + - Melbourne + - \[dq]europe-north1\[dq] + - Finland + - \[dq]europe-west1\[dq] + - Belgium + - \[dq]europe-west2\[dq] + - London + - \[dq]europe-west3\[dq] + - Frankfurt + - \[dq]europe-west4\[dq] + - Netherlands + - \[dq]europe-west6\[dq] + - Z\[:u]rich + - \[dq]europe-central2\[dq] + - Warsaw + - \[dq]us-central1\[dq] + - Iowa + - \[dq]us-east1\[dq] + - South Carolina + - \[dq]us-east4\[dq] + - Northern Virginia + - \[dq]us-west1\[dq] + - Oregon + - \[dq]us-west2\[dq] + - California + - \[dq]us-west3\[dq] + - Salt Lake City + - \[dq]us-west4\[dq] + - Las Vegas + - \[dq]northamerica-northeast1\[dq] + - Montr\['e]al + - \[dq]northamerica-northeast2\[dq] + - Toronto + - \[dq]southamerica-east1\[dq] + - S\[~a]o Paulo + - \[dq]southamerica-west1\[dq] + - Santiago + - \[dq]asia1\[dq] + - Dual region: asia-northeast1 and asia-northeast2. + - \[dq]eur4\[dq] + - Dual region: europe-north1 and europe-west4. + - \[dq]nam4\[dq] + - Dual region: us-central1 and us-east1. + +#### --gcs-storage-class + The storage class to use when storing objects in Google Cloud Storage. -.PP + Properties: -.IP \[bu] 2 -Config: storage_class -.IP \[bu] 2 -Env Var: RCLONE_GCS_STORAGE_CLASS -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]\[dq] -.RS 2 -.IP \[bu] 2 -Default -.RE -.IP \[bu] 2 -\[dq]MULTI_REGIONAL\[dq] -.RS 2 -.IP \[bu] 2 -Multi-regional storage class -.RE -.IP \[bu] 2 -\[dq]REGIONAL\[dq] -.RS 2 -.IP \[bu] 2 -Regional storage class -.RE -.IP \[bu] 2 -\[dq]NEARLINE\[dq] -.RS 2 -.IP \[bu] 2 -Nearline storage class -.RE -.IP \[bu] 2 -\[dq]COLDLINE\[dq] -.RS 2 -.IP \[bu] 2 -Coldline storage class -.RE -.IP \[bu] 2 -\[dq]ARCHIVE\[dq] -.RS 2 -.IP \[bu] 2 -Archive storage class -.RE -.IP \[bu] 2 -\[dq]DURABLE_REDUCED_AVAILABILITY\[dq] -.RS 2 -.IP \[bu] 2 -Durable reduced availability storage class -.RE -.RE -.SS --gcs-env-auth -.PP -Get GCP IAM credentials from runtime (environment variables or instance -meta data if no env vars). -.PP -Only applies if service_account_file and service_account_credentials is -blank. 
-.PP
+
+- Config: storage_class
+- Env Var: RCLONE_GCS_STORAGE_CLASS
+- Type: string
+- Required: false
+- Examples:
+    - \[dq]\[dq]
+        - Default
+    - \[dq]MULTI_REGIONAL\[dq]
+        - Multi-regional storage class
+    - \[dq]REGIONAL\[dq]
+        - Regional storage class
+    - \[dq]NEARLINE\[dq]
+        - Nearline storage class
+    - \[dq]COLDLINE\[dq]
+        - Coldline storage class
+    - \[dq]ARCHIVE\[dq]
+        - Archive storage class
+    - \[dq]DURABLE_REDUCED_AVAILABILITY\[dq]
+        - Durable reduced availability storage class
+
+#### --gcs-env-auth
+
+Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars).
+
+Only applies if service_account_file and service_account_credentials are blank.
+
 Properties:
-.IP \[bu] 2
-Config: env_auth
-.IP \[bu] 2
-Env Var: RCLONE_GCS_ENV_AUTH
-.IP \[bu] 2
-Type: bool
-.IP \[bu] 2
-Default: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]false\[dq]
-.RS 2
-.IP \[bu] 2
-Enter credentials in the next step.
-.RE
-.IP \[bu] 2
-\[dq]true\[dq]
-.RS 2
-.IP \[bu] 2
-Get GCP IAM credentials from the environment (env vars or IAM).
-.RE
-.RE
-.SS Advanced options
-.PP
-Here are the Advanced options specific to google cloud storage (Google
-Cloud Storage (this is not Google Drive)).
-.SS --gcs-token
-.PP
+
+- Config: env_auth
+- Env Var: RCLONE_GCS_ENV_AUTH
+- Type: bool
+- Default: false
+- Examples:
+    - \[dq]false\[dq]
+        - Enter credentials in the next step.
+    - \[dq]true\[dq]
+        - Get GCP IAM credentials from the environment (env vars or IAM).
+
+### Advanced options
+
+Here are the Advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
+
+#### --gcs-token
+
 OAuth Access Token as a JSON blob.
-.PP
+
 Properties:
-.IP \[bu] 2
-Config: token
-.IP \[bu] 2
-Env Var: RCLONE_GCS_TOKEN
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --gcs-auth-url
-.PP
+
+- Config: token
+- Env Var: RCLONE_GCS_TOKEN
+- Type: string
+- Required: false
+
+#### --gcs-auth-url
+
 Auth server URL.
-.PP
+
 Leave blank to use the provider defaults.
-.PP
+
 Properties:
-.IP \[bu] 2
-Config: auth_url
-.IP \[bu] 2
-Env Var: RCLONE_GCS_AUTH_URL
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --gcs-token-url
-.PP
+
+- Config: auth_url
+- Env Var: RCLONE_GCS_AUTH_URL
+- Type: string
+- Required: false
+
+#### --gcs-token-url
+
 Token server url.
-.PP
+
 Leave blank to use the provider defaults.
-.PP
+
 Properties:
-.IP \[bu] 2
-Config: token_url
-.IP \[bu] 2
-Env Var: RCLONE_GCS_TOKEN_URL
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --gcs-directory-markers
-.PP
-Upload an empty object with a trailing slash when a new directory is
-created
-.PP
-Empty folders are unsupported for bucket based remotes, this option
-creates an empty object ending with \[dq]/\[dq], to persist the folder.
+
+- Config: token_url
+- Env Var: RCLONE_GCS_TOKEN_URL
+- Type: string
+- Required: false
+
+#### --gcs-directory-markers
+
+Upload an empty object with a trailing slash when a new directory is created.
+
+Empty folders are unsupported for bucket based remotes; this option creates an empty
+object ending with \[dq]/\[dq], to persist the folder.
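+
+As an illustrative sketch (assuming \[ga]gcs:\[ga] is a configured Google
+Cloud Storage remote - the remote name and paths are not part of the docs
+above), directory markers can be switched on for a single transfer
+without editing the config file:
+
+    rclone sync --gcs-directory-markers /home/source gcs:bucket/backup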
+ + Properties: -.IP \[bu] 2 -Config: directory_markers -.IP \[bu] 2 -Env Var: RCLONE_GCS_DIRECTORY_MARKERS -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --gcs-no-check-bucket -.PP + +- Config: directory_markers +- Env Var: RCLONE_GCS_DIRECTORY_MARKERS +- Type: bool +- Default: false + +#### --gcs-no-check-bucket + If set, don\[aq]t attempt to check the bucket exists or create it. -.PP + This can be useful when trying to minimise the number of transactions rclone does if you know the bucket exists already. -.PP + + Properties: -.IP \[bu] 2 -Config: no_check_bucket -.IP \[bu] 2 -Env Var: RCLONE_GCS_NO_CHECK_BUCKET -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --gcs-decompress -.PP + +- Config: no_check_bucket +- Env Var: RCLONE_GCS_NO_CHECK_BUCKET +- Type: bool +- Default: false + +#### --gcs-decompress + If set this will decompress gzip encoded objects. -.PP -It is possible to upload objects to GCS with \[dq]Content-Encoding: -gzip\[dq] set. -Normally rclone will download these files as compressed objects. -.PP + +It is possible to upload objects to GCS with \[dq]Content-Encoding: gzip\[dq] +set. Normally rclone will download these files as compressed objects. + If this flag is set then rclone will decompress these files with -\[dq]Content-Encoding: gzip\[dq] as they are received. -This means that rclone can\[aq]t check the size and hash but the file -contents will be decompressed. -.PP +\[dq]Content-Encoding: gzip\[dq] as they are received. This means that rclone +can\[aq]t check the size and hash but the file contents will be decompressed. + + Properties: -.IP \[bu] 2 -Config: decompress -.IP \[bu] 2 -Env Var: RCLONE_GCS_DECOMPRESS -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --gcs-endpoint -.PP + +- Config: decompress +- Env Var: RCLONE_GCS_DECOMPRESS +- Type: bool +- Default: false + +#### --gcs-endpoint + Endpoint for the service. -.PP + Leave blank normally. -.PP + Properties: -.IP \[bu] 2 -Config: endpoint -.IP \[bu] 2 -Env Var: RCLONE_GCS_ENDPOINT -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --gcs-encoding -.PP + +- Config: endpoint +- Env Var: RCLONE_GCS_ENDPOINT +- Type: string +- Required: false + +#### --gcs-encoding + The encoding for the backend. -.PP -See the encoding section in the -overview (https://rclone.org/overview/#encoding) for more info. -.PP + +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + Properties: -.IP \[bu] 2 -Config: encoding -.IP \[bu] 2 -Env Var: RCLONE_GCS_ENCODING -.IP \[bu] 2 -Type: MultiEncoder -.IP \[bu] 2 -Default: Slash,CrLf,InvalidUtf8,Dot -.SS Limitations -.PP -\f[C]rclone about\f[R] is not supported by the Google Cloud Storage -backend. -Backends without this capability cannot determine free space for an -rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member -of an rclone union remote. -.PP -See List of backends that do not support rclone -about (https://rclone.org/overview/#optional-features) and rclone -about (https://rclone.org/commands/rclone_about/) -.SH Google Drive -.PP -Paths are specified as \f[C]drive:path\f[R] -.PP -Drive paths may be as deep as required, e.g. -\f[C]drive:directory/subdirectory\f[R]. -.SS Configuration -.PP + +- Config: encoding +- Env Var: RCLONE_GCS_ENCODING +- Type: MultiEncoder +- Default: Slash,CrLf,InvalidUtf8,Dot + + + +## Limitations + +\[ga]rclone about\[ga] is not supported by the Google Cloud Storage backend. 
Backends without +this capability cannot determine free space for an rclone mount or +use policy \[ga]mfs\[ga] (most free space) as a member of an rclone union +remote. + +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) + +# Google Drive + +Paths are specified as \[ga]drive:path\[ga] + +Drive paths may be as deep as required, e.g. \[ga]drive:directory/subdirectory\[ga]. + +## Configuration + The initial setup for drive involves getting a token from Google drive -which you need to do in your browser. -\f[C]rclone config\f[R] walks you through it. -.PP -Here is an example of how to make a remote called \f[C]remote\f[R]. -First run: -.IP -.nf -\f[C] - rclone config -\f[R] -.fi -.PP +which you need to do in your browser. \[ga]rclone config\[ga] walks you +through it. + +Here is an example of how to make a remote called \[ga]remote\[ga]. First run: + + rclone config + This will guide you through an interactive setup process: -.IP -.nf -\f[C] +\f[R] +.fi +.PP No remotes found, make a new one? -n) New remote -r) Rename remote -c) Copy remote -s) Set configuration password -q) Quit config -n/r/c/s/q> n -name> remote -Type of storage to configure. -Choose a number from below, or type in your own value -[snip] -XX / Google Drive - \[rs] \[dq]drive\[dq] -[snip] -Storage> drive -Google Application Client Id - leave blank normally. -client_id> -Google Application Client Secret - leave blank normally. -client_secret> -Scope that rclone should use when requesting access from drive. -Choose a number from below, or type in your own value - 1 / Full access all files, excluding Application Data Folder. - \[rs] \[dq]drive\[dq] - 2 / Read-only access to file metadata and file contents. - \[rs] \[dq]drive.readonly\[dq] - / Access to files created by rclone only. - 3 | These are visible in the drive website. - | File authorization is revoked when the user deauthorizes the app. - \[rs] \[dq]drive.file\[dq] - / Allows read and write access to the Application Data folder. - 4 | This is not visible in the drive website. - \[rs] \[dq]drive.appfolder\[dq] - / Allows read-only access to file metadata but - 5 | does not allow any access to read or download file content. - \[rs] \[dq]drive.metadata.readonly\[dq] -scope> 1 -Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login. -service_account_file> -Remote config -Use web browser to automatically authenticate rclone with remote? - * Say Y if the machine running rclone has a web browser you can use - * Say N if running rclone on a (remote) machine without web browser access -If not sure try Y. If Y failed, try N. -y) Yes -n) No -y/n> y -If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth -Log in and authorize rclone for access -Waiting for code... -Got code -Configure this as a Shared Drive (Team Drive)? 
-y) Yes
-n) No
-y/n> n
--------------------
-[remote]
-client_id =
-client_secret =
-scope = drive
-root_folder_id =
-service_account_file =
-token = {\[dq]access_token\[dq]:\[dq]XXX\[dq],\[dq]token_type\[dq]:\[dq]Bearer\[dq],\[dq]refresh_token\[dq]:\[dq]XXX\[dq],\[dq]expiry\[dq]:\[dq]2014-03-16T13:57:58.955387075Z\[dq]}
--------------------
-y) Yes this is OK
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-\f[R]
-.fi
-.PP
-See the remote setup docs (https://rclone.org/remote_setup/) for how to
-set it up on a machine with no Internet browser available.
-.PP
+n) New remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+n/r/c/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Google Drive
+   \ \[dq]drive\[dq]
+[snip]
+Storage> drive
+Google Application Client Id - leave blank normally.
+client_id>
+Google Application Client Secret - leave blank normally.
+client_secret>
+Scope that rclone should use when requesting access from drive.
+Choose a number from below, or type in your own value
+ 1 / Full access all files, excluding Application Data Folder.
+   \ \[dq]drive\[dq]
+ 2 / Read-only access to file metadata and file contents.
+   \ \[dq]drive.readonly\[dq]
+   / Access to files created by rclone only.
+ 3 | These are visible in the drive website.
+   | File authorization is revoked when the user deauthorizes the app.
+   \ \[dq]drive.file\[dq]
+   / Allows read and write access to the Application Data folder.
+ 4 | This is not visible in the drive website.
+   \ \[dq]drive.appfolder\[dq]
+   / Allows read-only access to file metadata but
+ 5 | does not allow any access to read or download file content.
+   \ \[dq]drive.metadata.readonly\[dq]
+scope> 1
+Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
+service_account_file>
+Remote config
+Use web browser to automatically authenticate rclone with remote?
+ * Say Y if the machine running rclone has a web browser you can use
+ * Say N if running rclone on a (remote) machine without web browser access
+If not sure try Y. If Y failed, try N.
+y) Yes
+n) No
+y/n> y
+If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+Configure this as a Shared Drive (Team Drive)?
+y) Yes
+n) No
+y/n> n
+--------------------
+[remote]
+client_id =
+client_secret =
+scope = drive
+root_folder_id =
+service_account_file =
+token = {\[dq]access_token\[dq]:\[dq]XXX\[dq],\[dq]token_type\[dq]:\[dq]Bearer\[dq],\[dq]refresh_token\[dq]:\[dq]XXX\[dq],\[dq]expiry\[dq]:\[dq]2014-03-16T13:57:58.955387075Z\[dq]}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+.IP
+.nf
+\f[C]
+See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
+machine with no Internet browser available.
+
 Note that rclone runs a webserver on your local machine to collect the
-token as returned from Google if using web browser to automatically
-authenticate.
+token as returned from Google if using web browser to automatically
+authenticate.
This only
+runs from the moment it opens your browser to the moment you get back
+the verification code. This is on \[ga]http://127.0.0.1:53682/\[ga] and it
+may require you to unblock it temporarily if you are running a host
+firewall, or use manual mode.
+
 You can then use it like this:
-.PP
+
 List directories in top level of your drive
-.IP
-.nf
-\f[C]
-rclone lsd remote:
-\f[R]
-.fi
-.PP
+
+    rclone lsd remote:
+
 List all the files in your drive
-.IP
-.nf
-\f[C]
-rclone ls remote:
-\f[R]
-.fi
-.PP
+
+    rclone ls remote:
+
 To copy a local directory to a drive directory called backup
-.IP
-.nf
-\f[C]
-rclone copy /home/source remote:backup
-\f[R]
-.fi
-.SS Scopes
-.PP
+
+    rclone copy /home/source remote:backup
+
+### Scopes
+
 Rclone allows you to select which scope you would like for rclone to
-use.
-This changes what type of token is granted to rclone.
-The scopes are defined
-here (https://developers.google.com/drive/v3/web/about-auth).
-.PP
+use. This changes what type of token is granted to rclone. [The
+scopes are defined
+here](https://developers.google.com/drive/v3/web/about-auth).
+
 The scopes are
-.SS drive
-.PP
+
+#### drive
+
 This is the default scope and allows full access to all files, except
 for the Application Data Folder (see below).
-.PP
+
 Choose this one if you aren\[aq]t sure.
-.SS drive.readonly
-.PP
-This allows read only access to all files.
-Files may be listed and downloaded but not uploaded, renamed or deleted.
-.SS drive.file
-.PP
-With this scope rclone can read/view/modify only those files and folders
-it creates.
-.PP
+
+#### drive.readonly
+
+This allows read-only access to all files. Files may be listed and
+downloaded but not uploaded, renamed or deleted.
+
+#### drive.file
+
+With this scope rclone can read/view/modify only those files and
+folders it creates.
+
 So if you uploaded files to drive via the web interface (or any other
 means) they will not be visible to rclone.
-.PP
+
 This can be useful if you are using rclone to backup data and you want
 to be sure confidential data on your drive is not visible to rclone.
-.PP
+
 Files created with this scope are visible in the web interface.
-.SS drive.appfolder
-.PP
-This gives rclone its own private area to store files.
-Rclone will not be able to see any other files on your drive and you
-won\[aq]t be able to see rclone\[aq]s files from the web interface
-either.
-.SS drive.metadata.readonly
-.PP
-This allows read only access to file names only.
-It does not allow rclone to download or upload data, or rename or delete
-files or directories.
-.SS Root folder ID
-.PP
-This option has been moved to the advanced section.
-You can set the \f[C]root_folder_id\f[R] for rclone.
-This is the directory (identified by its \f[C]Folder ID\f[R]) that
-rclone considers to be the root of your drive.
-.PP
-Normally you will leave this blank and rclone will determine the correct
-root to use itself.
-.PP
+
+#### drive.appfolder
+
+This gives rclone its own private area to store files. Rclone will
+not be able to see any other files on your drive and you won\[aq]t be able
+to see rclone\[aq]s files from the web interface either.
+
+#### drive.metadata.readonly
+
+This allows read-only access to file names only. It does not allow
+rclone to download or upload data, or rename or delete files or
+directories.
+
+### Root folder ID
+
+This option has been moved to the advanced section. You can set the \[ga]root_folder_id\[ga] for rclone.
This is the directory +(identified by its \[ga]Folder ID\[ga]) that rclone considers to be the root +of your drive. + +Normally you will leave this blank and rclone will determine the +correct root to use itself. + However you can set this to restrict rclone to a specific folder -hierarchy or to access data within the \[dq]Computers\[dq] tab on the -drive web interface (where files from Google\[aq]s Backup and Sync -desktop program go). -.PP -In order to do this you will have to find the \f[C]Folder ID\f[R] of the -directory you wish rclone to display. -This will be the last segment of the URL when you open the relevant -folder in the drive web interface. -.PP +hierarchy or to access data within the \[dq]Computers\[dq] tab on the drive +web interface (where files from Google\[aq]s Backup and Sync desktop +program go). + +In order to do this you will have to find the \[ga]Folder ID\[ga] of the +directory you wish rclone to display. This will be the last segment +of the URL when you open the relevant folder in the drive web +interface. + So if the folder you want rclone to use has a URL which looks like -\f[C]https://drive.google.com/drive/folders/1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh\f[R] -in the browser, then you use \f[C]1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh\f[R] -as the \f[C]root_folder_id\f[R] in the config. -.PP -\f[B]NB\f[R] folders under the \[dq]Computers\[dq] tab seem to be read -only (drive gives a 500 error) when using rclone. -.PP +\[ga]https://drive.google.com/drive/folders/1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh\[ga] +in the browser, then you use \[ga]1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh\[ga] as +the \[ga]root_folder_id\[ga] in the config. + +**NB** folders under the \[dq]Computers\[dq] tab seem to be read only (drive +gives a 500 error) when using rclone. + There doesn\[aq]t appear to be an API to discover the folder IDs of the \[dq]Computers\[dq] tab - please contact us if you know otherwise! -.PP -Note also that rclone can\[aq]t access any data under the -\[dq]Backups\[dq] tab on the google drive web interface yet. -.SS Service Account support -.PP -You can set up rclone with Google Drive in an unattended mode, i.e. -not tied to a specific end-user Google account. -This is useful when you want to synchronise files onto machines that -don\[aq]t have actively logged-in users, for example build machines. -.PP -To use a Service Account instead of OAuth2 token flow, enter the path to -your Service Account credentials at the \f[C]service_account_file\f[R] -prompt during \f[C]rclone config\f[R] and rclone won\[aq]t use the -browser based authentication flow. -If you\[aq]d rather stuff the contents of the credentials file into the -rclone config file, you can set \f[C]service_account_credentials\f[R] -with the actual contents of the file instead, or set the equivalent -environment variable. -.SS Use case - Google Apps/G-suite account and individual Drive -.PP + +Note also that rclone can\[aq]t access any data under the \[dq]Backups\[dq] tab on +the google drive web interface yet. + +### Service Account support + +You can set up rclone with Google Drive in an unattended mode, +i.e. not tied to a specific end-user Google account. This is useful +when you want to synchronise files onto machines that don\[aq]t have +actively logged-in users, for example build machines. 
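+
+For example (a minimal sketch - the remote name \[ga]gdrive:\[ga] and the
+credentials path are illustrative only), such a machine can use a service
+account without any interactive login, either with the flag
+
+    rclone lsf --drive-service-account-file /etc/rclone/sa.json gdrive:backup
+
+or with the \[ga]RCLONE_DRIVE_SERVICE_ACCOUNT_FILE\[ga] environment
+variable set to the same path.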
+
+To use a Service Account instead of OAuth2 token flow, enter the path
+to your Service Account credentials at the \[ga]service_account_file\[ga]
+prompt during \[ga]rclone config\[ga] and rclone won\[aq]t use the browser based
+authentication flow. If you\[aq]d rather stuff the contents of the
+credentials file into the rclone config file, you can set
+\[ga]service_account_credentials\[ga] with the actual contents of the file
+instead, or set the equivalent environment variable.
+
+#### Use case - Google Apps/G-suite account and individual Drive
+
 Let\[aq]s say that you are the administrator of a Google Apps (old) or
 G-suite account.
 The goal is to store data on the Drive account of an individual who IS
 a member of the domain.
 We\[aq]ll call the domain **example.com**, and the user
 **foo\[at]example.com**.
-n/s/q> n # New
-name>gdrive # Gdrive is an example name
-Storage> # Select the number shown for Google Drive
-client_id> # Can be left blank
-client_secret> # Can be left blank
-scope> # Select your scope, 1 for example
-root_folder_id> # Can be left blank
-service_account_file> /home/foo/myJSONfile.json # This is where the JSON file goes!
-y/n> # Auto config, n
+There are a few steps we need to go through to accomplish this:
+
+##### 1. Create a service account for example.com
+ - To create a service account and obtain its credentials, go to the
+[Google Developer Console](https://console.developers.google.com).
+ - You must have a project - create one if you don\[aq]t.
+ - Then go to \[dq]IAM & admin\[dq] -> \[dq]Service Accounts\[dq].
+ - Use the \[dq]Create Service Account\[dq] button. Fill in \[dq]Service account name\[dq] +and \[dq]Service account ID\[dq] with something that identifies your client. + - Select \[dq]Create And Continue\[dq]. Step 2 and 3 are optional. + - These credentials are what rclone will use for authentication. +If you ever need to remove access, press the \[dq]Delete service +account key\[dq] button. + +##### 2. Allowing API access to example.com Google Drive + - Go to example.com\[aq]s admin console + - Go into \[dq]Security\[dq] (or use the search bar) + - Select \[dq]Show more\[dq] and then \[dq]Advanced settings\[dq] + - Select \[dq]Manage API client access\[dq] in the \[dq]Authentication\[dq] section + - In the \[dq]Client Name\[dq] field enter the service account\[aq]s +\[dq]Client ID\[dq] - this can be found in the Developer Console under +\[dq]IAM & Admin\[dq] -> \[dq]Service Accounts\[dq], then \[dq]View Client ID\[dq] for +the newly created service account. +It is a \[ti]21 character numerical string. + - In the next field, \[dq]One or More API Scopes\[dq], enter +\[ga]https://www.googleapis.com/auth/drive\[ga] +to grant access to Google Drive specifically. + +##### 3. Configure rclone, assuming a new install \f[R] .fi -.SS 4. Verify that it\[aq]s working -.IP \[bu] 2 -\f[C]rclone -v --drive-impersonate foo\[at]example.com lsf gdrive:backup\f[R] -.IP \[bu] 2 -The arguments do: -.RS 2 -.IP \[bu] 2 -\f[C]-v\f[R] - verbose logging -.IP \[bu] 2 -\f[C]--drive-impersonate foo\[at]example.com\f[R] - this is what does +.PP +rclone config +.PP +n/s/q> n # New name>gdrive # Gdrive is an example name Storage> # Select +the number shown for Google Drive client_id> # Can be left blank +client_secret> # Can be left blank scope> # Select your scope, 1 for +example root_folder_id> # Can be left blank service_account_file> +/home/foo/myJSONfile.json # This is where the JSON file goes! y/n> # +Auto config, n +.IP +.nf +\f[C] +##### 4. Verify that it\[aq]s working + - \[ga]rclone -v --drive-impersonate foo\[at]example.com lsf gdrive:backup\[ga] + - The arguments do: + - \[ga]-v\[ga] - verbose logging + - \[ga]--drive-impersonate foo\[at]example.com\[ga] - this is what does the magic, pretending to be user foo. -.IP \[bu] 2 -\f[C]lsf\f[R] - list files in a parsing friendly way -.IP \[bu] 2 -\f[C]gdrive:backup\f[R] - use the remote called gdrive, work in the -folder named backup. -.RE -.PP -Note: in case you configured a specific root folder on gdrive and rclone -is unable to access the contents of that folder when using -\f[C]--drive-impersonate\f[R], do this instead: - in the gdrive web -interface, share your root folder with the user/email of the new Service -Account you created/selected at step #1 - use rclone without specifying -the \f[C]--drive-impersonate\f[R] option, like this: -\f[C]rclone -v lsf gdrive:backup\f[R] -.SS Shared drives (team drives) -.PP + - \[ga]lsf\[ga] - list files in a parsing friendly way + - \[ga]gdrive:backup\[ga] - use the remote called gdrive, work in +the folder named backup. 
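+
+As a further hedged sketch (the source path is hypothetical; the gdrive
+remote and the service account come from step 3), a scheduled unattended
+backup of that user\[aq]s data could then be run as:
+
+    rclone sync -v --drive-impersonate foo@example.com /home/foo gdrive:backup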
+ +Note: in case you configured a specific root folder on gdrive and rclone is unable to access the contents of that folder when using \[ga]--drive-impersonate\[ga], do this instead: + - in the gdrive web interface, share your root folder with the user/email of the new Service Account you created/selected at step #1 + - use rclone without specifying the \[ga]--drive-impersonate\[ga] option, like this: + \[ga]rclone -v lsf gdrive:backup\[ga] + + +### Shared drives (team drives) + If you want to configure the remote to point to a Google Shared Drive -(previously known as Team Drives) then answer \f[C]y\f[R] to the -question \f[C]Configure this as a Shared Drive (Team Drive)?\f[R]. -.PP +(previously known as Team Drives) then answer \[ga]y\[ga] to the question +\[ga]Configure this as a Shared Drive (Team Drive)?\[ga]. + This will fetch the list of Shared Drives from google and allow you to -configure which one you want to use. -You can also type in a Shared Drive ID if you prefer. -.PP +configure which one you want to use. You can also type in a Shared +Drive ID if you prefer. + For example: -.IP -.nf -\f[C] +\f[R] +.fi +.PP Configure this as a Shared Drive (Team Drive)? -y) Yes -n) No -y/n> y -Fetching Shared Drive list... -Choose a number from below, or type in your own value - 1 / Rclone Test - \[rs] \[dq]xxxxxxxxxxxxxxxxxxxx\[dq] - 2 / Rclone Test 2 - \[rs] \[dq]yyyyyyyyyyyyyyyyyyyy\[dq] - 3 / Rclone Test 3 - \[rs] \[dq]zzzzzzzzzzzzzzzzzzzz\[dq] -Enter a Shared Drive ID> 1 --------------------- -[remote] -client_id = -client_secret = -token = {\[dq]AccessToken\[dq]:\[dq]xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\[dq],\[dq]RefreshToken\[dq]:\[dq]1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx\[dq],\[dq]Expiry\[dq]:\[dq]2014-03-16T13:57:58.955387075Z\[dq],\[dq]Extra\[dq]:null} -team_drive = xxxxxxxxxxxxxxxxxxxx --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -\f[R] -.fi -.SS --fast-list -.PP -This remote supports \f[C]--fast-list\f[R] which allows you to use fewer -transactions in exchange for more memory. -See the rclone docs (https://rclone.org/docs/#fast-list) for more -details. -.PP -It does this by combining multiple \f[C]list\f[R] calls into a single -API request. -.PP -This works by combining many \f[C]\[aq]%s\[aq] in parents\f[R] filters -into one expression. -To list the contents of directories a, b and c, the following requests -will be send by the regular \f[C]List\f[R] function: +y) Yes n) No y/n> y Fetching Shared Drive list... +Choose a number from below, or type in your own value 1 / Rclone Test +\ \[dq]xxxxxxxxxxxxxxxxxxxx\[dq] 2 / Rclone Test 2 +\ \[dq]yyyyyyyyyyyyyyyyyyyy\[dq] 3 / Rclone Test 3 +\ \[dq]zzzzzzzzzzzzzzzzzzzz\[dq] Enter a Shared Drive ID> 1 +-------------------- [remote] client_id = client_secret = token = +{\[dq]AccessToken\[dq]:\[dq]xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\[dq],\[dq]RefreshToken\[dq]:\[dq]1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx\[dq],\[dq]Expiry\[dq]:\[dq]2014-03-16T13:57:58.955387075Z\[dq],\[dq]Extra\[dq]:null} +team_drive = xxxxxxxxxxxxxxxxxxxx -------------------- y) Yes this is OK +e) Edit this remote d) Delete this remote y/e/d> y .IP .nf \f[C] -trashed=false and \[aq]a\[aq] in parents -trashed=false and \[aq]b\[aq] in parents -trashed=false and \[aq]c\[aq] in parents +### --fast-list + +This remote supports \[ga]--fast-list\[ga] which allows you to use fewer +transactions in exchange for more memory. 
See the [rclone
+docs](https://rclone.org/docs/#fast-list) for more details.
+
+It does this by combining multiple \[ga]list\[ga] calls into a single API request.
+
+This works by combining many \[ga]\[aq]%s\[aq] in parents\[ga] filters into one expression.
+To list the contents of directories a, b and c, the following requests will be sent by the regular \[ga]List\[ga] function:
\f[R]
.fi
.PP
trashed=false and \[aq]a\[aq] in parents
trashed=false and \[aq]b\[aq] in parents
trashed=false and \[aq]c\[aq] in parents
.IP
.nf
\f[C]
These can now be combined into a single request:
\f[R]
.fi
.PP
trashed=false and (\[aq]a\[aq] in parents or \[aq]b\[aq] in parents or \[aq]c\[aq] in parents)
.IP
.nf
\f[C]
The implementation of \[ga]ListR\[ga] will put up to 50 \[ga]parents\[ga] filters into one request.
It will use the \[ga]--checkers\[ga] value to specify the number of requests to run in parallel.
+
+In tests, these batch requests were up to 20x faster than the regular method.
Running the following command against different sized folders gives:
\f[R]
.fi
.PP
rclone lsjson -vv -R --checkers=6 gdrive:folder
.IP
.nf
\f[C]
small folder (220 directories, 700 files):
+
+- without \[ga]--fast-list\[ga]: 38s
+- with \[ga]--fast-list\[ga]: 10s
+
large folder (10600 directories, 39000 files):
+
+- without \[ga]--fast-list\[ga]: 22:05 min
+- with \[ga]--fast-list\[ga]: 58s
+
+### Modified time
+
Google drive stores modification times accurate to 1 ms.
+
+### Restricted filename characters
+
+Only Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8),
+as they can\[aq]t be used in JSON strings.
+
+In contrast to other backends, \[ga]/\[ga] can also be used in names and \[ga].\[ga]
+or \[ga]..\[ga] are valid names.
+
+### Revisions
+
+Google drive stores revisions of files.
When you upload a change to
+an existing file to google drive using rclone it will create a new
+revision of that file.
+
+Revisions follow the standard google policy which at the time of writing
+was
+
+  * They are deleted after 30 days or 100 revisions (whatever comes first).
+  * They do not count towards a user storage quota.
+
+### Deleting files
+
+By default rclone will send all files to the trash when deleting
+files. If deleting them permanently is required then use the
+\[ga]--drive-use-trash=false\[ga] flag, or set the equivalent environment
+variable.
+
+### Shortcuts
+
 In March 2020 Google introduced a new feature in Google Drive called
+[drive shortcuts](https://support.google.com/drive/answer/9700156)
+([API](https://developers.google.com/drive/api/v3/shortcuts)). These
+will (by September 2020) [replace the ability for files or folders to
+be in multiple folders at once](https://cloud.google.com/blog/products/g-suite/simplifying-google-drives-folder-structure-and-sharing-models).
+
 Shortcuts are files that link to other files on Google Drive somewhat
 like a symlink in unix, except they point to the underlying file data
+(e.g. the inode in unix terms) so they don\[aq]t break if the source is
 renamed or moved about.
+
 By default rclone treats these as follows.
+
 For shortcuts pointing to files:
+
+- When listing, a file shortcut appears as the destination file.
+- When downloading, the contents of the destination file are downloaded.
+- When updating a shortcut file with a non-shortcut file, the shortcut is removed then a new file is uploaded in place of the shortcut.
+- When server-side moving (renaming), the shortcut is renamed, not the destination file.
+- When server-side copying, the shortcut is copied, not the contents of the shortcut (unless \[ga]--drive-copy-shortcut-content\[ga] is in use, in which case the contents of the shortcut get copied).
+- When deleting, the shortcut is deleted, not the linked file.
+- When setting the modification time, the modification time of the linked file will be set.
+
 For shortcuts pointing to folders:
+
+- When listing, the shortcut appears as a folder and that folder will contain the contents of the linked folder (including any sub folders)
+- When downloading, the contents of the linked folder and sub contents are downloaded
+- When uploading to a shortcut folder, the file will be placed in the linked folder
+- When server-side moving (renaming), the shortcut is renamed, not the destination folder
+- When server-side copying, the contents of the linked folder are copied, not the shortcut.
+- When deleting with \[ga]rclone rmdir\[ga] or \[ga]rclone purge\[ga], the shortcut is deleted, not the linked folder.
+- **NB** When deleting with \[ga]rclone remove\[ga] or \[ga]rclone mount\[ga], the contents of the linked folder will be deleted.
+
+The [rclone backend](https://rclone.org/commands/rclone_backend/) command can be used to create shortcuts.
+
+Shortcuts can be completely ignored with the \[ga]--drive-skip-shortcuts\[ga] flag
+or the corresponding \[ga]skip_shortcuts\[ga] configuration setting.
+
+### Emptying trash
+
+If you wish to empty your trash you can use the \[ga]rclone cleanup remote:\[ga]
+command which will permanently delete all your trashed files. This command
+does not take any path arguments.
+
 Note that Google Drive takes some time (minutes to days) to empty the
+trash even though the command returns within a few seconds. No output
+is echoed, so there will be no confirmation even using -v or
+-vv.
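+
+Tying the shortcut discussion above to a concrete command (a hedged
+sketch - the item names are illustrative, and the [rclone
+backend](https://rclone.org/commands/rclone_backend/) docs are the
+authoritative reference), a shortcut can be created like this:
+
+    rclone backend shortcut gdrive: source_item destination_shortcut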
+
+### Quota information
+
+To view your current quota you can use the \[ga]rclone about remote:\[ga]
+command which will display your usage limit (quota), the usage in Google
+Drive, the size of all files in the Trash and the space used by other
+Google services such as Gmail. This command does not take any path
+arguments.
+
+#### Import/Export of google documents
+
Google documents can be exported from and uploaded to Google Drive.
+
When rclone downloads a Google doc, it chooses a format to download
+depending upon the \[ga]--drive-export-formats\[ga] setting.
+By default the export formats are \[ga]docx,xlsx,pptx,svg\[ga], which are a
+sensible default for an editable document.
+
+When choosing a format, rclone runs down the list provided in order
+and chooses the first file format the doc can be exported as from the
+list. If the file can\[aq]t be exported to a format on the formats list,
+then rclone will choose a format from the default list.
+
+If you prefer an archive copy then you might use \[ga]--drive-export-formats
+pdf\[ga], or if you prefer openoffice/libreoffice formats you might use
+\[ga]--drive-export-formats ods,odt,odp\[ga].
+
+Note that rclone adds the extension to the google doc, so if it is
+called \[ga]My Spreadsheet\[ga] on google docs, it will be exported as \[ga]My
+Spreadsheet.xlsx\[ga] or \[ga]My Spreadsheet.pdf\[ga] etc.
+
+When importing files into Google Drive, rclone will convert all
+files with an extension in \[ga]--drive-import-formats\[ga] to their
+associated document type.
+rclone will not convert any files by default, since the conversion
+is a lossy process.
+
+The conversion must result in a file with the same extension when
+the \[ga]--drive-export-formats\[ga] rules are applied to the uploaded document.
+
+Here are some examples for allowed and prohibited conversions.
-T{
-export-formats
-T}@T{
-import-formats
-T}@T{
-Upload Ext
-T}@T{
-Document Ext
-T}@T{
-Allowed
-T}
-_
-T{
-odt
-T}@T{
-odt
-T}@T{
-odt
-T}@T{
-odt
-T}@T{
-Yes
-T}
-T{
-odt
-T}@T{
-docx,odt
-T}@T{
-odt
-T}@T{
-odt
-T}@T{
-Yes
-T}
-T{
-T}@T{
-docx
-T}@T{
-docx
-T}@T{
-docx
-T}@T{
-Yes
-T}
-T{
-T}@T{
-odt
-T}@T{
-odt
-T}@T{
-docx
-T}@T{
-No
-T}
-T{
-odt,docx
-T}@T{
-docx,odt
-T}@T{
-docx
-T}@T{
-odt
-T}@T{
-No
-T}
-T{
-docx,odt
-T}@T{
-docx,odt
-T}@T{
-docx
-T}@T{
-docx
-T}@T{
-Yes
-T}
-T{
-docx,odt
-T}@T{
-docx,odt
-T}@T{
-odt
-T}@T{
-docx
-T}@T{
-No
-T}
-.TE
-.PP
-This limitation can be disabled by specifying
-\f[C]--drive-allow-import-name-change\f[R].
+
+| export-formats | import-formats | Upload Ext | Document Ext | Allowed |
+| -------------- | -------------- | ---------- | ------------ | ------- |
+| odt | odt | odt | odt | Yes |
+| odt | docx,odt | odt | odt | Yes |
+| | docx | docx | docx | Yes |
+| | odt | odt | docx | No |
+| odt,docx | docx,odt | docx | odt | No |
+| docx,odt | docx,odt | docx | docx | Yes |
+| docx,odt | docx,odt | odt | docx | No |
+
+This limitation can be disabled by specifying \[ga]--drive-allow-import-name-change\[ga].
+When using this flag, rclone can convert multiple file types resulting
+in the same document type at once, e.g. with \[ga]--drive-import-formats docx,odt,txt\[ga],
+all files having these extensions would result in a document represented as a docx file.
+This brings the additional risk of overwriting a document if multiple files
+have the same stem. Many rclone operations will not handle this name change
+in any way. They assume an equal name when copying files and might copy the
+file again or delete them when the name changes.
+
+Here are the possible export extensions with their corresponding mime types.
+Most of these can also be used for importing, but there are more that are not
+listed here. Some of these additional ones might only be available when
+the operating system provides the correct MIME type entries.
+
+This list can be changed by Google Drive at any time and might not
+represent the currently available conversions.
-T{ -Extension -T}@T{ -Mime Type -T}@T{ -Description -T} -_ -T{ -bmp -T}@T{ -image/bmp -T}@T{ -Windows Bitmap format -T} -T{ -csv -T}@T{ -text/csv -T}@T{ -Standard CSV format for Spreadsheets -T} -T{ -doc -T}@T{ -application/msword -T}@T{ -Classic Word file -T} -T{ -docx -T}@T{ -application/vnd.openxmlformats-officedocument.wordprocessingml.document -T}@T{ -Microsoft Office Document -T} -T{ -epub -T}@T{ -application/epub+zip -T}@T{ -E-book format -T} -T{ -html -T}@T{ -text/html -T}@T{ -An HTML Document -T} -T{ -jpg -T}@T{ -image/jpeg -T}@T{ -A JPEG Image File -T} -T{ -json -T}@T{ -application/vnd.google-apps.script+json -T}@T{ -JSON Text Format for Google Apps scripts -T} -T{ -odp -T}@T{ -application/vnd.oasis.opendocument.presentation -T}@T{ -Openoffice Presentation -T} -T{ -ods -T}@T{ -application/vnd.oasis.opendocument.spreadsheet -T}@T{ -Openoffice Spreadsheet -T} -T{ -ods -T}@T{ -application/x-vnd.oasis.opendocument.spreadsheet -T}@T{ -Openoffice Spreadsheet -T} -T{ -odt -T}@T{ -application/vnd.oasis.opendocument.text -T}@T{ -Openoffice Document -T} -T{ -pdf -T}@T{ -application/pdf -T}@T{ -Adobe PDF Format -T} -T{ -pjpeg -T}@T{ -image/pjpeg -T}@T{ -Progressive JPEG Image -T} -T{ -png -T}@T{ -image/png -T}@T{ -PNG Image Format -T} -T{ -pptx -T}@T{ -application/vnd.openxmlformats-officedocument.presentationml.presentation -T}@T{ -Microsoft Office Powerpoint -T} -T{ -rtf -T}@T{ -application/rtf -T}@T{ -Rich Text Format -T} -T{ -svg -T}@T{ -image/svg+xml -T}@T{ -Scalable Vector Graphics Format -T} -T{ -tsv -T}@T{ -text/tab-separated-values -T}@T{ -Standard TSV format for spreadsheets -T} -T{ -txt -T}@T{ -text/plain -T}@T{ -Plain Text -T} -T{ -wmf -T}@T{ -application/x-msmetafile -T}@T{ -Windows Meta File -T} -T{ -xls -T}@T{ -application/vnd.ms-excel -T}@T{ -Classic Excel file -T} -T{ -xlsx -T}@T{ -application/vnd.openxmlformats-officedocument.spreadsheetml.sheet -T}@T{ -Microsoft Office Spreadsheet -T} -T{ -zip -T}@T{ -application/zip -T}@T{ -A ZIP file of HTML, Images CSS -T} -.TE -.PP -Google documents can also be exported as link files. -These files will open a browser window for the Google Docs website of -that document when opened. -The link file extension has to be specified as a -\f[C]--drive-export-formats\f[R] parameter. -They will match all available Google Documents. -.PP -.TS -tab(@); -l l l. 
-T{ -Extension -T}@T{ -Description -T}@T{ -OS Support -T} -_ -T{ -desktop -T}@T{ -freedesktop.org specified desktop entry -T}@T{ -Linux -T} -T{ -link.html -T}@T{ -An HTML Document with a redirect -T}@T{ -All -T} -T{ -url -T}@T{ -INI style link file -T}@T{ -macOS, Windows -T} -T{ -webloc -T}@T{ -macOS specific XML format -T}@T{ -macOS -T} -.TE -.SS Standard options -.PP + +| Extension | Mime Type | Description | +| --------- |-----------| ------------| +| bmp | image/bmp | Windows Bitmap format | +| csv | text/csv | Standard CSV format for Spreadsheets | +| doc | application/msword | Classic Word file | +| docx | application/vnd.openxmlformats-officedocument.wordprocessingml.document | Microsoft Office Document | +| epub | application/epub+zip | E-book format | +| html | text/html | An HTML Document | +| jpg | image/jpeg | A JPEG Image File | +| json | application/vnd.google-apps.script+json | JSON Text Format for Google Apps scripts | +| odp | application/vnd.oasis.opendocument.presentation | Openoffice Presentation | +| ods | application/vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet | +| ods | application/x-vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet | +| odt | application/vnd.oasis.opendocument.text | Openoffice Document | +| pdf | application/pdf | Adobe PDF Format | +| pjpeg | image/pjpeg | Progressive JPEG Image | +| png | image/png | PNG Image Format| +| pptx | application/vnd.openxmlformats-officedocument.presentationml.presentation | Microsoft Office Powerpoint | +| rtf | application/rtf | Rich Text Format | +| svg | image/svg+xml | Scalable Vector Graphics Format | +| tsv | text/tab-separated-values | Standard TSV format for spreadsheets | +| txt | text/plain | Plain Text | +| wmf | application/x-msmetafile | Windows Meta File | +| xls | application/vnd.ms-excel | Classic Excel file | +| xlsx | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet | Microsoft Office Spreadsheet | +| zip | application/zip | A ZIP file of HTML, Images CSS | + +Google documents can also be exported as link files. These files will +open a browser window for the Google Docs website of that document +when opened. The link file extension has to be specified as a +\[ga]--drive-export-formats\[ga] parameter. They will match all available +Google Documents. + +| Extension | Description | OS Support | +| --------- | ----------- | ---------- | +| desktop | freedesktop.org specified desktop entry | Linux | +| link.html | An HTML Document with a redirect | All | +| url | INI style link file | macOS, Windows | +| webloc | macOS specific XML format | macOS | + + +### Standard options + Here are the Standard options specific to drive (Google Drive). -.SS --drive-client-id -.PP -Google Application Client Id Setting your own is recommended. -See https://rclone.org/drive/#making-your-own-client-id for how to -create your own. -If you leave this blank, it will use an internal key which is low -performance. -.PP + +#### --drive-client-id + +Google Application Client Id +Setting your own is recommended. +See https://rclone.org/drive/#making-your-own-client-id for how to create your own. +If you leave this blank, it will use an internal key which is low performance. 
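+
+For instance (a sketch - the placeholder ID and secret are obviously not
+real values), a newly created client ID can be tried out on the command
+line before being saved to the config:
+
+    rclone lsd --drive-client-id YOUR_CLIENT_ID --drive-client-secret YOUR_CLIENT_SECRET remote: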
+
 Properties:
-.IP \[bu] 2
-Config: client_id
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_CLIENT_ID
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --drive-client-secret
-.PP
+
+- Config: client_id
+- Env Var: RCLONE_DRIVE_CLIENT_ID
+- Type: string
+- Required: false
+
+#### --drive-client-secret
+
 OAuth Client Secret.
-.PP
+
 Leave blank normally.
-.PP
+
 Properties:
-.IP \[bu] 2
-Config: client_secret
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_CLIENT_SECRET
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --drive-scope
-.PP
+
+- Config: client_secret
+- Env Var: RCLONE_DRIVE_CLIENT_SECRET
+- Type: string
+- Required: false
+
+#### --drive-scope
+
 Scope that rclone should use when requesting access from drive.
-.PP
+
 Properties:
-.IP \[bu] 2
-Config: scope
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_SCOPE
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]drive\[dq]
-.RS 2
-.IP \[bu] 2
-Full access all files, excluding Application Data Folder.
-.RE
-.IP \[bu] 2
-\[dq]drive.readonly\[dq]
-.RS 2
-.IP \[bu] 2
-Read-only access to file metadata and file contents.
-.RE
-.IP \[bu] 2
-\[dq]drive.file\[dq]
-.RS 2
-.IP \[bu] 2
-Access to files created by rclone only.
-.IP \[bu] 2
-These are visible in the drive website.
-.IP \[bu] 2
-File authorization is revoked when the user deauthorizes the app.
-.RE
-.IP \[bu] 2
-\[dq]drive.appfolder\[dq]
-.RS 2
-.IP \[bu] 2
-Allows read and write access to the Application Data folder.
-.IP \[bu] 2
-This is not visible in the drive website.
-.RE
-.IP \[bu] 2
-\[dq]drive.metadata.readonly\[dq]
-.RS 2
-.IP \[bu] 2
-Allows read-only access to file metadata but
-.IP \[bu] 2
-does not allow any access to read or download file content.
-.RE
-.RE
-.SS --drive-service-account-file
-.PP
+
+- Config: scope
+- Env Var: RCLONE_DRIVE_SCOPE
+- Type: string
+- Required: false
+- Examples:
+    - \[dq]drive\[dq]
+        - Full access all files, excluding Application Data Folder.
+    - \[dq]drive.readonly\[dq]
+        - Read-only access to file metadata and file contents.
+    - \[dq]drive.file\[dq]
+        - Access to files created by rclone only.
+        - These are visible in the drive website.
+        - File authorization is revoked when the user deauthorizes the app.
+    - \[dq]drive.appfolder\[dq]
+        - Allows read and write access to the Application Data folder.
+        - This is not visible in the drive website.
+    - \[dq]drive.metadata.readonly\[dq]
+        - Allows read-only access to file metadata but
+        - does not allow any access to read or download file content.
+
+#### --drive-service-account-file
+
 Service Account Credentials JSON file path.
-.PP
+
 Leave blank normally.
 Needed only if you want to use SA instead of interactive login.
-.PP
-Leading \f[C]\[ti]\f[R] will be expanded in the file name as will
-environment variables such as \f[C]${RCLONE_CONFIG_DIR}\f[R].
-.PP
+
+Leading \[ga]\[ti]\[ga] will be expanded in the file name as will environment variables such as \[ga]${RCLONE_CONFIG_DIR}\[ga].
+
 Properties:
-.IP \[bu] 2
-Config: service_account_file
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_FILE
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --drive-alternate-export
-.PP
+
+- Config: service_account_file
+- Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_FILE
+- Type: string
+- Required: false
+
+#### --drive-alternate-export
+
 Deprecated: No longer needed.
-.PP
+
Properties:
-.IP \[bu] 2
-Config: alternate_export
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_ALTERNATE_EXPORT
-.IP \[bu] 2
-Type: bool
-.IP \[bu] 2
-Default: false
-.SS Advanced options
-.PP
+
+- Config: alternate_export
+- Env Var: RCLONE_DRIVE_ALTERNATE_EXPORT
+- Type: bool
+- Default: false
+
+### Advanced options
+
Here are the Advanced options specific to drive (Google Drive).
-.SS --drive-token
-.PP
+
+#### --drive-token
+
OAuth Access Token as a JSON blob.
-.PP
+
Properties:
-.IP \[bu] 2
-Config: token
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_TOKEN
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --drive-auth-url
-.PP
+
+- Config: token
+- Env Var: RCLONE_DRIVE_TOKEN
+- Type: string
+- Required: false
+
+#### --drive-auth-url
+
Auth server URL.
-.PP
+
Leave blank to use the provider defaults.
-.PP
+
Properties:
-.IP \[bu] 2
-Config: auth_url
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_AUTH_URL
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --drive-token-url
-.PP
+
+- Config: auth_url
+- Env Var: RCLONE_DRIVE_AUTH_URL
+- Type: string
+- Required: false
+
+#### --drive-token-url
+
Token server url.
-.PP
+
Leave blank to use the provider defaults.
-.PP
+
Properties:
-.IP \[bu] 2
-Config: token_url
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_TOKEN_URL
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --drive-root-folder-id
-.PP
+
+- Config: token_url
+- Env Var: RCLONE_DRIVE_TOKEN_URL
+- Type: string
+- Required: false
+
+#### --drive-root-folder-id
+
ID of the root folder.
Leave blank normally.
-.PP
-Fill in to access \[dq]Computers\[dq] folders (see docs), or for rclone
-to use a non root folder as its starting point.
-.PP
+
+Fill in to access \[dq]Computers\[dq] folders (see docs), or for rclone to use
+a non-root folder as its starting point.
+
+
Properties:
-.IP \[bu] 2
-Config: root_folder_id
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --drive-service-account-credentials
-.PP
+
+- Config: root_folder_id
+- Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID
+- Type: string
+- Required: false
+
+#### --drive-service-account-credentials
+
Service Account Credentials JSON blob.
-.PP
+
Leave blank normally.
Needed only if you want to use SA instead of interactive login.
-.PP
+
Properties:
-.IP \[bu] 2
-Config: service_account_credentials
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_CREDENTIALS
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --drive-team-drive
-.PP
+
+- Config: service_account_credentials
+- Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_CREDENTIALS
+- Type: string
+- Required: false
+
+#### --drive-team-drive
+
ID of the Shared Drive (Team Drive).
-.PP
+
Properties:
-.IP \[bu] 2
-Config: team_drive
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_TEAM_DRIVE
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --drive-auth-owner-only
-.PP
+
+- Config: team_drive
+- Env Var: RCLONE_DRIVE_TEAM_DRIVE
+- Type: string
+- Required: false
+
+#### --drive-auth-owner-only
+
Only consider files owned by the authenticated user.
-.PP
+
Properties:
-.IP \[bu] 2
-Config: auth_owner_only
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_AUTH_OWNER_ONLY
-.IP \[bu] 2
-Type: bool
-.IP \[bu] 2
-Default: false
-.SS --drive-use-trash
-.PP
+
+- Config: auth_owner_only
+- Env Var: RCLONE_DRIVE_AUTH_OWNER_ONLY
+- Type: bool
+- Default: false
+
+#### --drive-use-trash
+
Send files to the trash instead of deleting permanently.
-.PP
+
Defaults to true, namely sending files to the trash.
-Use \f[C]--drive-use-trash=false\f[R] to delete files permanently -instead. -.PP +Use \[ga]--drive-use-trash=false\[ga] to delete files permanently instead. + Properties: -.IP \[bu] 2 -Config: use_trash -.IP \[bu] 2 -Env Var: RCLONE_DRIVE_USE_TRASH -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: true -.SS --drive-copy-shortcut-content -.PP + +- Config: use_trash +- Env Var: RCLONE_DRIVE_USE_TRASH +- Type: bool +- Default: true + +#### --drive-copy-shortcut-content + Server side copy contents of shortcuts instead of the shortcut. -.PP + When doing server side copies, normally rclone will copy shortcuts as shortcuts. -.PP + If this flag is used then rclone will copy the contents of shortcuts rather than shortcuts themselves when doing server side copies. -.PP + Properties: -.IP \[bu] 2 -Config: copy_shortcut_content -.IP \[bu] 2 -Env Var: RCLONE_DRIVE_COPY_SHORTCUT_CONTENT -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --drive-skip-gdocs -.PP + +- Config: copy_shortcut_content +- Env Var: RCLONE_DRIVE_COPY_SHORTCUT_CONTENT +- Type: bool +- Default: false + +#### --drive-skip-gdocs + Skip google documents in all listings. -.PP + If given, gdocs practically become invisible to rclone. -.PP + Properties: -.IP \[bu] 2 -Config: skip_gdocs -.IP \[bu] 2 -Env Var: RCLONE_DRIVE_SKIP_GDOCS -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --drive-skip-checksum-gphotos -.PP + +- Config: skip_gdocs +- Env Var: RCLONE_DRIVE_SKIP_GDOCS +- Type: bool +- Default: false + +#### --drive-skip-checksum-gphotos + Skip MD5 checksum on Google photos and videos only. -.PP + Use this if you get checksum errors when transferring Google photos or videos. -.PP -Setting this flag will cause Google photos and videos to return a blank -MD5 checksum. -.PP + +Setting this flag will cause Google photos and videos to return a +blank MD5 checksum. + Google photos are identified by being in the \[dq]photos\[dq] space. -.PP + Corrupted checksums are caused by Google modifying the image/video but not updating the checksum. -.PP + Properties: -.IP \[bu] 2 -Config: skip_checksum_gphotos -.IP \[bu] 2 -Env Var: RCLONE_DRIVE_SKIP_CHECKSUM_GPHOTOS -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --drive-shared-with-me -.PP + +- Config: skip_checksum_gphotos +- Env Var: RCLONE_DRIVE_SKIP_CHECKSUM_GPHOTOS +- Type: bool +- Default: false + +#### --drive-shared-with-me + Only show files that are shared with me. -.PP -Instructs rclone to operate on your \[dq]Shared with me\[dq] folder -(where Google Drive lets you access the files and folders others have -shared with you). -.PP -This works both with the \[dq]list\[dq] (lsd, lsl, etc.) and the -\[dq]copy\[dq] commands (copy, sync, etc.), and with all other commands -too. -.PP + +Instructs rclone to operate on your \[dq]Shared with me\[dq] folder (where +Google Drive lets you access the files and folders others have shared +with you). + +This works both with the \[dq]list\[dq] (lsd, lsl, etc.) and the \[dq]copy\[dq] +commands (copy, sync, etc.), and with all other commands too. + Properties: -.IP \[bu] 2 -Config: shared_with_me -.IP \[bu] 2 -Env Var: RCLONE_DRIVE_SHARED_WITH_ME -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --drive-trashed-only -.PP + +- Config: shared_with_me +- Env Var: RCLONE_DRIVE_SHARED_WITH_ME +- Type: bool +- Default: false + +#### --drive-trashed-only + Only show files that are in the trash. -.PP + This will show trashed files in their original directory structure. 
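+
+For example, to list what is in the trash under its original paths (an
+illustrative sketch - \[ga]remote:\[ga] is a placeholder remote name):
+
+    rclone lsl remote: --drive-trashed-only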
-.PP + Properties: -.IP \[bu] 2 -Config: trashed_only -.IP \[bu] 2 -Env Var: RCLONE_DRIVE_TRASHED_ONLY -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --drive-starred-only -.PP + +- Config: trashed_only +- Env Var: RCLONE_DRIVE_TRASHED_ONLY +- Type: bool +- Default: false + +#### --drive-starred-only + Only show files that are starred. -.PP + Properties: -.IP \[bu] 2 -Config: starred_only -.IP \[bu] 2 -Env Var: RCLONE_DRIVE_STARRED_ONLY -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --drive-formats -.PP + +- Config: starred_only +- Env Var: RCLONE_DRIVE_STARRED_ONLY +- Type: bool +- Default: false + +#### --drive-formats + Deprecated: See export_formats. -.PP + Properties: -.IP \[bu] 2 -Config: formats -.IP \[bu] 2 -Env Var: RCLONE_DRIVE_FORMATS -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --drive-export-formats -.PP + +- Config: formats +- Env Var: RCLONE_DRIVE_FORMATS +- Type: string +- Required: false + +#### --drive-export-formats + Comma separated list of preferred formats for downloading Google docs. -.PP + Properties: -.IP \[bu] 2 -Config: export_formats -.IP \[bu] 2 -Env Var: RCLONE_DRIVE_EXPORT_FORMATS -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]docx,xlsx,pptx,svg\[dq] -.SS --drive-import-formats -.PP + +- Config: export_formats +- Env Var: RCLONE_DRIVE_EXPORT_FORMATS +- Type: string +- Default: \[dq]docx,xlsx,pptx,svg\[dq] + +#### --drive-import-formats + Comma separated list of preferred formats for uploading Google docs. -.PP + Properties: -.IP \[bu] 2 -Config: import_formats -.IP \[bu] 2 -Env Var: RCLONE_DRIVE_IMPORT_FORMATS -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --drive-allow-import-name-change -.PP + +- Config: import_formats +- Env Var: RCLONE_DRIVE_IMPORT_FORMATS +- Type: string +- Required: false + +#### --drive-allow-import-name-change + Allow the filetype to change when uploading Google docs. -.PP -E.g. -file.doc to file.docx. -This will confuse sync and reupload every time. -.PP + +E.g. file.doc to file.docx. This will confuse sync and reupload every time. + Properties: -.IP \[bu] 2 -Config: allow_import_name_change -.IP \[bu] 2 -Env Var: RCLONE_DRIVE_ALLOW_IMPORT_NAME_CHANGE -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --drive-use-created-date -.PP + +- Config: allow_import_name_change +- Env Var: RCLONE_DRIVE_ALLOW_IMPORT_NAME_CHANGE +- Type: bool +- Default: false + +#### --drive-use-created-date + Use file created date instead of modified date. -.PP + Useful when downloading data and you want the creation date used in place of the last modified date. -.PP -\f[B]WARNING\f[R]: This flag may have some unexpected consequences. -.PP + +**WARNING**: This flag may have some unexpected consequences. + When uploading to your drive all files will be overwritten unless they -haven\[aq]t been modified since their creation. -And the inverse will occur while downloading. -This side effect can be avoided by using the \[dq]--checksum\[dq] flag. -.PP +haven\[aq]t been modified since their creation. And the inverse will occur +while downloading. This side effect can be avoided by using the +\[dq]--checksum\[dq] flag. + This feature was implemented to retain photos capture date as recorded -by google photos. -You will first need to check the \[dq]Create a Google Photos folder\[dq] -option in your google drive settings. -You can then copy or move the photos locally and use the date the image -was taken (created) set as the modification date. -.PP +by google photos. 
You will first need to check the \[dq]Create a Google
+Photos folder\[dq] option in your google drive settings. You can then copy
+or move the photos locally and use the date the image was taken
+(created) set as the modification date.
+
Properties:
-.IP \[bu] 2
-Config: use_created_date
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_USE_CREATED_DATE
-.IP \[bu] 2
-Type: bool
-.IP \[bu] 2
-Default: false
-.SS --drive-use-shared-date
-.PP
+
+- Config: use_created_date
+- Env Var: RCLONE_DRIVE_USE_CREATED_DATE
+- Type: bool
+- Default: false
+
+#### --drive-use-shared-date
+
Use date file was shared instead of modified date.
-.PP
-Note that, as with \[dq]--drive-use-created-date\[dq], this flag may
-have unexpected consequences when uploading/downloading files.
-.PP
-If both this flag and \[dq]--drive-use-created-date\[dq] are set, the
-created date is used.
-.PP
+
+Note that, as with \[dq]--drive-use-created-date\[dq], this flag may have
+unexpected consequences when uploading/downloading files.
+
+If both this flag and \[dq]--drive-use-created-date\[dq] are set, the created
+date is used.
+
Properties:
-.IP \[bu] 2
-Config: use_shared_date
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_USE_SHARED_DATE
-.IP \[bu] 2
-Type: bool
-.IP \[bu] 2
-Default: false
-.SS --drive-list-chunk
-.PP
+
+- Config: use_shared_date
+- Env Var: RCLONE_DRIVE_USE_SHARED_DATE
+- Type: bool
+- Default: false
+
+#### --drive-list-chunk
+
Size of listing chunk 100-1000, 0 to disable.
-.PP
+
Properties:
-.IP \[bu] 2
-Config: list_chunk
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_LIST_CHUNK
-.IP \[bu] 2
-Type: int
-.IP \[bu] 2
-Default: 1000
-.SS --drive-impersonate
-.PP
+
+- Config: list_chunk
+- Env Var: RCLONE_DRIVE_LIST_CHUNK
+- Type: int
+- Default: 1000
+
+#### --drive-impersonate
+
Impersonate this user when using a service account.
-.PP
+
Properties:
-.IP \[bu] 2
-Config: impersonate
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_IMPERSONATE
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --drive-upload-cutoff
-.PP
+
+- Config: impersonate
+- Env Var: RCLONE_DRIVE_IMPERSONATE
+- Type: string
+- Required: false
+
+#### --drive-upload-cutoff
+
Cutoff for switching to chunked upload.
-.PP
+
Properties:
-.IP \[bu] 2
-Config: upload_cutoff
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF
-.IP \[bu] 2
-Type: SizeSuffix
-.IP \[bu] 2
-Default: 8Mi
-.SS --drive-chunk-size
-.PP
+
+- Config: upload_cutoff
+- Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 8Mi
+
+#### --drive-chunk-size
+
Upload chunk size.
-.PP
+
Must be a power of 2 >= 256k.
-.PP
-Making this larger will improve performance, but note that each chunk is
-buffered in memory one per transfer.
-.PP
+
+Making this larger will improve performance, but note that one chunk
+is buffered in memory per transfer.
+
Reducing this will reduce memory usage but decrease performance.
-.PP
+
Properties:
-.IP \[bu] 2
-Config: chunk_size
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_CHUNK_SIZE
-.IP \[bu] 2
-Type: SizeSuffix
-.IP \[bu] 2
-Default: 8Mi
-.SS --drive-acknowledge-abuse
-.PP
-Set to allow files which return cannotDownloadAbusiveFile to be
-downloaded.
-.PP
-If downloading a file returns the error \[dq]This file has been
-identified as malware or spam and cannot be downloaded\[dq] with the
-error code \[dq]cannotDownloadAbusiveFile\[dq] then supply this flag to
-rclone to indicate you acknowledge the risks of downloading the file and
-rclone will download it anyway.
-.PP
+
+- Config: chunk_size
+- Env Var: RCLONE_DRIVE_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 8Mi
+
+#### --drive-acknowledge-abuse
+
+Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+
+If downloading a file returns the error \[dq]This file has been identified
+as malware or spam and cannot be downloaded\[dq] with the error code
+\[dq]cannotDownloadAbusiveFile\[dq] then supply this flag to rclone to
+indicate you acknowledge the risks of downloading the file and rclone
+will download it anyway.
+
Note that if you are using a service account it will need Manager
-permission (not Content Manager) to for this flag to work.
-If the SA does not have the right permission, Google will just ignore
-the flag.
-.PP
+permission (not Content Manager) for this flag to work. If the SA
+does not have the right permission, Google will just ignore the flag.
+
Properties:
-.IP \[bu] 2
-Config: acknowledge_abuse
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_ACKNOWLEDGE_ABUSE
-.IP \[bu] 2
-Type: bool
-.IP \[bu] 2
-Default: false
-.SS --drive-keep-revision-forever
-.PP
+
+- Config: acknowledge_abuse
+- Env Var: RCLONE_DRIVE_ACKNOWLEDGE_ABUSE
+- Type: bool
+- Default: false
+
+#### --drive-keep-revision-forever
+
Keep new head revision of each file forever.
-.PP
+
Properties:
-.IP \[bu] 2
-Config: keep_revision_forever
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_KEEP_REVISION_FOREVER
-.IP \[bu] 2
-Type: bool
-.IP \[bu] 2
-Default: false
-.SS --drive-size-as-quota
-.PP
+
+- Config: keep_revision_forever
+- Env Var: RCLONE_DRIVE_KEEP_REVISION_FOREVER
+- Type: bool
+- Default: false
+
+#### --drive-size-as-quota
+
Show sizes as storage quota usage, not actual size.
-.PP
-Show the size of a file as the storage quota used.
-This is the current version plus any older versions that have been set
-to keep forever.
-.PP
-\f[B]WARNING\f[R]: This flag may have some unexpected consequences.
-.PP
-It is not recommended to set this flag in your config - the recommended
-usage is using the flag form --drive-size-as-quota when doing rclone
-ls/lsl/lsf/lsjson/etc only.
-.PP
-If you do use this flag for syncing (not recommended) then you will need
-to use --ignore size also.
-.PP
+
+Show the size of a file as the storage quota used. This is the
+current version plus any older versions that have been set to keep
+forever.
+
+**WARNING**: This flag may have some unexpected consequences.
+
+It is not recommended to set this flag in your config - the
+recommended usage is using the flag form --drive-size-as-quota when
+doing rclone ls/lsl/lsf/lsjson/etc only.
+
+If you do use this flag for syncing (not recommended) then you will
+need to use --ignore-size also.
+
Properties:
-.IP \[bu] 2
-Config: size_as_quota
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_SIZE_AS_QUOTA
-.IP \[bu] 2
-Type: bool
-.IP \[bu] 2
-Default: false
-.SS --drive-v2-download-min-size
-.PP
+
+- Config: size_as_quota
+- Env Var: RCLONE_DRIVE_SIZE_AS_QUOTA
+- Type: bool
+- Default: false
+
+#### --drive-v2-download-min-size
+
If Objects are greater, use drive v2 API to download.
-.PP
+
Properties:
-.IP \[bu] 2
-Config: v2_download_min_size
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_V2_DOWNLOAD_MIN_SIZE
-.IP \[bu] 2
-Type: SizeSuffix
-.IP \[bu] 2
-Default: off
-.SS --drive-pacer-min-sleep
-.PP
+
+- Config: v2_download_min_size
+- Env Var: RCLONE_DRIVE_V2_DOWNLOAD_MIN_SIZE
+- Type: SizeSuffix
+- Default: off
+
+#### --drive-pacer-min-sleep
+
Minimum time to sleep between API calls.
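+
+For example, if you are hitting rate limits you could slow rclone down
+with something like the following (an illustrative sketch - the value
+shown is not a tested recommendation):
+
+    rclone sync /path/to/src remote:dst --drive-pacer-min-sleep 200ms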
-.PP + Properties: -.IP \[bu] 2 -Config: pacer_min_sleep -.IP \[bu] 2 -Env Var: RCLONE_DRIVE_PACER_MIN_SLEEP -.IP \[bu] 2 -Type: Duration -.IP \[bu] 2 -Default: 100ms -.SS --drive-pacer-burst -.PP + +- Config: pacer_min_sleep +- Env Var: RCLONE_DRIVE_PACER_MIN_SLEEP +- Type: Duration +- Default: 100ms + +#### --drive-pacer-burst + Number of API calls to allow without sleeping. -.PP + Properties: -.IP \[bu] 2 -Config: pacer_burst -.IP \[bu] 2 -Env Var: RCLONE_DRIVE_PACER_BURST -.IP \[bu] 2 -Type: int -.IP \[bu] 2 -Default: 100 -.SS --drive-server-side-across-configs -.PP + +- Config: pacer_burst +- Env Var: RCLONE_DRIVE_PACER_BURST +- Type: int +- Default: 100 + +#### --drive-server-side-across-configs + Deprecated: use --server-side-across-configs instead. -.PP -Allow server-side operations (e.g. -copy) to work across different drive configs. -.PP + +Allow server-side operations (e.g. copy) to work across different drive configs. + This can be useful if you wish to do a server-side copy between two -different Google drives. -Note that this isn\[aq]t enabled by default because it isn\[aq]t easy to -tell if it will work between any two configurations. -.PP +different Google drives. Note that this isn\[aq]t enabled by default +because it isn\[aq]t easy to tell if it will work between any two +configurations. + Properties: -.IP \[bu] 2 -Config: server_side_across_configs -.IP \[bu] 2 -Env Var: RCLONE_DRIVE_SERVER_SIDE_ACROSS_CONFIGS -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --drive-disable-http2 -.PP + +- Config: server_side_across_configs +- Env Var: RCLONE_DRIVE_SERVER_SIDE_ACROSS_CONFIGS +- Type: bool +- Default: false + +#### --drive-disable-http2 + Disable drive using http2. -.PP + There is currently an unsolved issue with the google drive backend and -HTTP/2. -HTTP/2 is therefore disabled by default for the drive backend but can be -re-enabled here. -When the issue is solved this flag will be removed. -.PP +HTTP/2. HTTP/2 is therefore disabled by default for the drive backend +but can be re-enabled here. When the issue is solved this flag will +be removed. + See: https://github.com/rclone/rclone/issues/3631 -.PP + + + Properties: -.IP \[bu] 2 -Config: disable_http2 -.IP \[bu] 2 -Env Var: RCLONE_DRIVE_DISABLE_HTTP2 -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: true -.SS --drive-stop-on-upload-limit -.PP + +- Config: disable_http2 +- Env Var: RCLONE_DRIVE_DISABLE_HTTP2 +- Type: bool +- Default: true + +#### --drive-stop-on-upload-limit + Make upload limit errors be fatal. -.PP + At the time of writing it is only possible to upload 750 GiB of data to -Google Drive a day (this is an undocumented limit). -When this limit is reached Google Drive produces a slightly different -error message. -When this flag is set it causes these errors to be fatal. -These will stop the in-progress sync. -.PP +Google Drive a day (this is an undocumented limit). When this limit is +reached Google Drive produces a slightly different error message. When +this flag is set it causes these errors to be fatal. These will stop +the in-progress sync. + Note that this detection is relying on error message strings which Google don\[aq]t document so it may break in the future. 
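+
+For example (an illustrative sketch), a nightly sync that should abort
+rather than keep retrying once the daily quota is hit could use:
+
+    rclone sync /path/to/src remote:backup --drive-stop-on-upload-limit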
-.PP + See: https://github.com/rclone/rclone/issues/3857 -.PP + + Properties: -.IP \[bu] 2 -Config: stop_on_upload_limit -.IP \[bu] 2 -Env Var: RCLONE_DRIVE_STOP_ON_UPLOAD_LIMIT -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --drive-stop-on-download-limit -.PP + +- Config: stop_on_upload_limit +- Env Var: RCLONE_DRIVE_STOP_ON_UPLOAD_LIMIT +- Type: bool +- Default: false + +#### --drive-stop-on-download-limit + Make download limit errors be fatal. -.PP -At the time of writing it is only possible to download 10 TiB of data -from Google Drive a day (this is an undocumented limit). -When this limit is reached Google Drive produces a slightly different -error message. -When this flag is set it causes these errors to be fatal. -These will stop the in-progress sync. -.PP + +At the time of writing it is only possible to download 10 TiB of data from +Google Drive a day (this is an undocumented limit). When this limit is +reached Google Drive produces a slightly different error message. When +this flag is set it causes these errors to be fatal. These will stop +the in-progress sync. + Note that this detection is relying on error message strings which Google don\[aq]t document so it may break in the future. -.PP + + Properties: -.IP \[bu] 2 -Config: stop_on_download_limit -.IP \[bu] 2 -Env Var: RCLONE_DRIVE_STOP_ON_DOWNLOAD_LIMIT -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --drive-skip-shortcuts -.PP + +- Config: stop_on_download_limit +- Env Var: RCLONE_DRIVE_STOP_ON_DOWNLOAD_LIMIT +- Type: bool +- Default: false + +#### --drive-skip-shortcuts + If set skip shortcut files. -.PP + Normally rclone dereferences shortcut files making them appear as if -they are the original file (see the shortcuts section). +they are the original file (see [the shortcuts section](#shortcuts)). If this flag is set then rclone will ignore shortcut files completely. -.PP + + Properties: -.IP \[bu] 2 -Config: skip_shortcuts -.IP \[bu] 2 -Env Var: RCLONE_DRIVE_SKIP_SHORTCUTS -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --drive-skip-dangling-shortcuts -.PP + +- Config: skip_shortcuts +- Env Var: RCLONE_DRIVE_SKIP_SHORTCUTS +- Type: bool +- Default: false + +#### --drive-skip-dangling-shortcuts + If set skip dangling shortcut files. -.PP -If this is set then rclone will not show any dangling shortcuts in -listings. -.PP + +If this is set then rclone will not show any dangling shortcuts in listings. + + Properties: -.IP \[bu] 2 -Config: skip_dangling_shortcuts -.IP \[bu] 2 -Env Var: RCLONE_DRIVE_SKIP_DANGLING_SHORTCUTS -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --drive-resource-key -.PP + +- Config: skip_dangling_shortcuts +- Env Var: RCLONE_DRIVE_SKIP_DANGLING_SHORTCUTS +- Type: bool +- Default: false + +#### --drive-resource-key + Resource key for accessing a link-shared file. -.PP + If you need to access files shared with a link like this -.IP -.nf -\f[C] -https://drive.google.com/drive/folders/XXX?resourcekey=YYY&usp=sharing -\f[R] -.fi -.PP -Then you will need to use the first part \[dq]XXX\[dq] as the -\[dq]root_folder_id\[dq] and the second part \[dq]YYY\[dq] as the -\[dq]resource_key\[dq] otherwise you will get 404 not found errors when -trying to access the directory. 
-.PP
+
+    https://drive.google.com/drive/folders/XXX?resourcekey=YYY&usp=sharing
+
+Then you will need to use the first part \[dq]XXX\[dq] as the \[dq]root_folder_id\[dq]
+and the second part \[dq]YYY\[dq] as the \[dq]resource_key\[dq] otherwise you will get
+404 not found errors when trying to access the directory.
+
See: https://developers.google.com/drive/api/guides/resource-keys
-.PP
+
This resource key requirement only applies to a subset of old files.
-.PP
+
Note also that opening the folder once in the web interface (with the
-user you\[aq]ve authenticated rclone with) seems to be enough so that
-the resource key is no needed.
-.PP
+user you\[aq]ve authenticated rclone with) seems to be enough so that the
+resource key is not needed.
+
+
Properties:
-.IP \[bu] 2
-Config: resource_key
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_RESOURCE_KEY
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --drive-encoding
-.PP
+
+- Config: resource_key
+- Env Var: RCLONE_DRIVE_RESOURCE_KEY
+- Type: string
+- Required: false
+
+#### --drive-fast-list-bug-fix
+
+Work around a bug in Google Drive listing.
+
+Normally rclone will work around a bug in Google Drive when using
+--fast-list (ListR) where the search \[dq](A in parents) or (B in
+parents)\[dq] returns nothing sometimes. See #3114, #4289 and
+https://issuetracker.google.com/issues/149522397
+
+Rclone detects this by finding no items in more than one directory
+when listing and retries them as lists of individual directories.
+
+This means that if you have a lot of empty directories rclone will end
+up listing them all individually and this can take many more API
+calls.
+
+This flag allows the work-around to be disabled. This is **not**
+recommended in normal use - only if you have a particular case you are
+having trouble with like many empty directories.
+
+
+Properties:
+
+- Config: fast_list_bug_fix
+- Env Var: RCLONE_DRIVE_FAST_LIST_BUG_FIX
+- Type: bool
+- Default: true
+
+#### --drive-encoding
+
The encoding for the backend.
-.PP
-See the encoding section in the
-overview (https://rclone.org/overview/#encoding) for more info.
-.PP
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
Properties:
-.IP \[bu] 2
-Config: encoding
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_ENCODING
-.IP \[bu] 2
-Type: MultiEncoder
-.IP \[bu] 2
-Default: InvalidUtf8
-.SS --drive-env-auth
-.PP
-Get IAM credentials from runtime (environment variables or instance meta
-data if no env vars).
-.PP
-Only applies if service_account_file and service_account_credentials is
-blank.
-.PP
+
+- Config: encoding
+- Env Var: RCLONE_DRIVE_ENCODING
+- Type: MultiEncoder
+- Default: InvalidUtf8
+
+#### --drive-env-auth
+
+Get IAM credentials from runtime (environment variables or instance metadata if no env vars).
+
+Only applies if service_account_file and service_account_credentials are blank.
+
Properties:
-.IP \[bu] 2
-Config: env_auth
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_ENV_AUTH
-.IP \[bu] 2
-Type: bool
-.IP \[bu] 2
-Default: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]false\[dq]
-.RS 2
-.IP \[bu] 2
-Enter credentials in the next step.
-.RE
-.IP \[bu] 2
-\[dq]true\[dq]
-.RS 2
-.IP \[bu] 2
-Get GCP IAM credentials from the environment (env vars or IAM).
-.RE
-.RE
-.SS Backend commands
-.PP
+
+- Config: env_auth
+- Env Var: RCLONE_DRIVE_ENV_AUTH
+- Type: bool
+- Default: false
+- Examples:
+    - \[dq]false\[dq]
+        - Enter credentials in the next step.
+ - \[dq]true\[dq] + - Get GCP IAM credentials from the environment (env vars or IAM). + +## Backend commands + Here are the commands specific to the drive backend. -.PP + Run them with -.IP -.nf -\f[C] -rclone backend COMMAND remote: -\f[R] -.fi -.PP + + rclone backend COMMAND remote: + The help below will explain what arguments each command takes. -.PP -See the backend (https://rclone.org/commands/rclone_backend/) command -for more info on how to pass options and arguments. -.PP + +See the [backend](https://rclone.org/commands/rclone_backend/) command for more +info on how to pass options and arguments. + These can be run on a running backend using the rc command -backend/command (https://rclone.org/rc/#backend-command). -.SS get -.PP +[backend/command](https://rclone.org/rc/#backend-command). + +### get + Get command for fetching the drive config parameters -.IP -.nf -\f[C] -rclone backend get remote: [options] [+] -\f[R] -.fi -.PP -This is a get command which will be used to fetch the various drive -config parameters -.PP + + rclone backend get remote: [options] [+] + +This is a get command which will be used to fetch the various drive config parameters + Usage Examples: -.IP -.nf -\f[C] -rclone backend get drive: [-o service_account_file] [-o chunk_size] -rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o chunk_size] -\f[R] -.fi -.PP + + rclone backend get drive: [-o service_account_file] [-o chunk_size] + rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o chunk_size] + + Options: -.IP \[bu] 2 -\[dq]chunk_size\[dq]: show the current upload chunk size -.IP \[bu] 2 -\[dq]service_account_file\[dq]: show the current service account file -.SS set -.PP + +- \[dq]chunk_size\[dq]: show the current upload chunk size +- \[dq]service_account_file\[dq]: show the current service account file + +### set + Set command for updating the drive config parameters -.IP -.nf -\f[C] -rclone backend set remote: [options] [+] -\f[R] -.fi -.PP -This is a set command which will be used to update the various drive -config parameters -.PP + + rclone backend set remote: [options] [+] + +This is a set command which will be used to update the various drive config parameters + Usage Examples: -.IP -.nf -\f[C] -rclone backend set drive: [-o service_account_file=sa.json] [-o chunk_size=67108864] -rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json] [-o chunk_size=67108864] -\f[R] -.fi -.PP + + rclone backend set drive: [-o service_account_file=sa.json] [-o chunk_size=67108864] + rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json] [-o chunk_size=67108864] + + Options: -.IP \[bu] 2 -\[dq]chunk_size\[dq]: update the current upload chunk size -.IP \[bu] 2 -\[dq]service_account_file\[dq]: update the current service account file -.SS shortcut -.PP + +- \[dq]chunk_size\[dq]: update the current upload chunk size +- \[dq]service_account_file\[dq]: update the current service account file + +### shortcut + Create shortcuts from files or directories -.IP -.nf -\f[C] -rclone backend shortcut remote: [options] [+] -\f[R] -.fi -.PP + + rclone backend shortcut remote: [options] [+] + This command creates shortcuts from files or directories. 
-.PP + Usage: -.IP -.nf -\f[C] -rclone backend shortcut drive: source_item destination_shortcut -rclone backend shortcut drive: source_item -o target=drive2: destination_shortcut -\f[R] -.fi -.PP -In the first example this creates a shortcut from the -\[dq]source_item\[dq] which can be a file or a directory to the -\[dq]destination_shortcut\[dq]. -The \[dq]source_item\[dq] and the \[dq]destination_shortcut\[dq] should -be relative paths from \[dq]drive:\[dq] -.PP -In the second example this creates a shortcut from the -\[dq]source_item\[dq] relative to \[dq]drive:\[dq] to the -\[dq]destination_shortcut\[dq] relative to \[dq]drive2:\[dq]. -This may fail with a permission error if the user authenticated with -\[dq]drive2:\[dq] can\[aq]t read files from \[dq]drive:\[dq]. -.PP + + rclone backend shortcut drive: source_item destination_shortcut + rclone backend shortcut drive: source_item -o target=drive2: destination_shortcut + +In the first example this creates a shortcut from the \[dq]source_item\[dq] +which can be a file or a directory to the \[dq]destination_shortcut\[dq]. The +\[dq]source_item\[dq] and the \[dq]destination_shortcut\[dq] should be relative paths +from \[dq]drive:\[dq] + +In the second example this creates a shortcut from the \[dq]source_item\[dq] +relative to \[dq]drive:\[dq] to the \[dq]destination_shortcut\[dq] relative to +\[dq]drive2:\[dq]. This may fail with a permission error if the user +authenticated with \[dq]drive2:\[dq] can\[aq]t read files from \[dq]drive:\[dq]. + + Options: -.IP \[bu] 2 -\[dq]target\[dq]: optional target remote for the shortcut destination -.SS drives -.PP + +- \[dq]target\[dq]: optional target remote for the shortcut destination + +### drives + List the Shared Drives available to this account -.IP -.nf -\f[C] -rclone backend drives remote: [options] [+] -\f[R] -.fi -.PP + + rclone backend drives remote: [options] [+] + This command lists the Shared Drives (Team Drives) available to this account. -.PP + Usage: -.IP -.nf -\f[C] -rclone backend [-o config] drives drive: -\f[R] -.fi -.PP + + rclone backend [-o config] drives drive: + This will return a JSON list of objects like this -.IP -.nf -\f[C] -[ - { - \[dq]id\[dq]: \[dq]0ABCDEF-01234567890\[dq], - \[dq]kind\[dq]: \[dq]drive#teamDrive\[dq], - \[dq]name\[dq]: \[dq]My Drive\[dq] - }, - { - \[dq]id\[dq]: \[dq]0ABCDEFabcdefghijkl\[dq], - \[dq]kind\[dq]: \[dq]drive#teamDrive\[dq], - \[dq]name\[dq]: \[dq]Test Drive\[dq] - } -] -\f[R] -.fi -.PP + + [ + { + \[dq]id\[dq]: \[dq]0ABCDEF-01234567890\[dq], + \[dq]kind\[dq]: \[dq]drive#teamDrive\[dq], + \[dq]name\[dq]: \[dq]My Drive\[dq] + }, + { + \[dq]id\[dq]: \[dq]0ABCDEFabcdefghijkl\[dq], + \[dq]kind\[dq]: \[dq]drive#teamDrive\[dq], + \[dq]name\[dq]: \[dq]Test Drive\[dq] + } + ] + With the -o config parameter it will output the list in a format -suitable for adding to a config file to make aliases for all the drives -found and a combined drive. -.IP -.nf -\f[C] -[My Drive] -type = alias -remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=: +suitable for adding to a config file to make aliases for all the +drives found and a combined drive. 
-[Test Drive]
-type = alias
-remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
+
+    [My Drive]
+    type = alias
+    remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
+
-[AllDrives]
-type = combine
-upstreams = \[dq]My Drive=My Drive:\[dq] \[dq]Test Drive=Test Drive:\[dq]
-\f[R]
-.fi
-.PP
+    [Test Drive]
+    type = alias
+    remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
+
+    [AllDrives]
+    type = combine
+    upstreams = \[dq]My Drive=My Drive:\[dq] \[dq]Test Drive=Test Drive:\[dq]
+
-Adding this to the rclone config file will cause those team drives to be
-accessible with the aliases shown.
-Any illegal characters will be substituted with \[dq]_\[dq] and
-duplicate names will have numbers suffixed.
+Adding this to the rclone config file will cause those team drives to
+be accessible with the aliases shown. Any illegal characters will be
+substituted with \[dq]_\[dq] and duplicate names will have numbers suffixed.
It will also add a remote called AllDrives which shows all the shared
drives combined into one directory tree.
-.SS untrash
-.PP
+
+
+### untrash
+
Untrash files and directories
-.IP
-.nf
-\f[C]
-rclone backend untrash remote: [options] [+]
-\f[R]
-.fi
-.PP
+
+    rclone backend untrash remote: [options] [+]
+
This command untrashes all the files and directories in the directory
passed in recursively.
-.PP
+
Usage:
-.PP
-This takes an optional directory to trash which make this easier to use
-via the API.
-.IP
-.nf
-\f[C]
-rclone backend untrash drive:directory
-rclone backend --interactive untrash drive:directory subdir
-\f[R]
-.fi
-.PP
-Use the --interactive/-i or --dry-run flag to see what would be restored
-before restoring it.
-.PP
+
+This takes an optional directory to untrash, which makes it easier to
+use via the API.
+
+    rclone backend untrash drive:directory
+    rclone backend --interactive untrash drive:directory subdir
+
+Use the --interactive/-i or --dry-run flag to see what would be restored before restoring it.
+
Result:
-.IP
-.nf
-\f[C]
-{
-    \[dq]Untrashed\[dq]: 17,
-    \[dq]Errors\[dq]: 0
-}
-\f[R]
-.fi
-.SS copyid
-.PP
+
+    {
+        \[dq]Untrashed\[dq]: 17,
+        \[dq]Errors\[dq]: 0
+    }
+
+
+### copyid
+
Copy files by ID
-.IP
-.nf
-\f[C]
-rclone backend copyid remote: [options] [+]
-\f[R]
-.fi
-.PP
+
+    rclone backend copyid remote: [options] [+]
+
This command copies files by ID
-.PP
+
Usage:
-.IP
-.nf
-\f[C]
-rclone backend copyid drive: ID path
-rclone backend copyid drive: ID1 path1 ID2 path2
-\f[R]
-.fi
-.PP
+
+    rclone backend copyid drive: ID path
+    rclone backend copyid drive: ID1 path1 ID2 path2
+
It copies the drive file with ID given to the path (an rclone path which
will be passed internally to rclone copyto). The ID and path pairs can be
repeated.
-.PP
-The path should end with a / to indicate copy the file as named to this
-directory.
-If it doesn\[aq]t end with a / then the last path component will be used
-as the file name.
-.PP
+
+The path should end with a / to indicate copy the file as named to
+this directory. If it doesn\[aq]t end with a / then the last path
+component will be used as the file name.
+
If the destination is a drive backend then server-side copying will be
attempted if possible.
-.PP
-Use the --interactive/-i or --dry-run flag to see what would be copied
-before copying.
+
+Use the --interactive/-i or --dry-run flag to see what would be copied before copying.
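+
+For example (the drive file ID here is a made-up placeholder):
+
+    rclone backend copyid drive: 0B1234567890abcdefghijkl backup/
+
+As the path ends with a /, this copies the file, keeping its original
+name, into the \[ga]backup\[ga] directory.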
+
+
+### exportformats
+
Dump the export formats for debug purposes
-.IP
-.nf
-\f[C]
-rclone backend exportformats remote: [options] [+]
-\f[R]
-.fi
-.SS importformats
-.PP
+
+    rclone backend exportformats remote: [options] [+]
+
+### importformats
+
Dump the import formats for debug purposes
-.IP
-.nf
-\f[C]
-rclone backend importformats remote: [options] [+]
-\f[R]
-.fi
-.SS Limitations
-.PP
-Drive has quite a lot of rate limiting.
-This causes rclone to be limited to transferring about 2 files per
-second only.
-Individual files may be transferred much faster at 100s of MiB/s but
-lots of small files can take a long time.
-.PP
-Server side copies are also subject to a separate rate limit.
-If you see User rate limit exceeded errors, wait at least 24 hours and
-retry.
-You can disable server-side copies with \f[C]--disable copy\f[R] to
-download and upload the files if you prefer.
-.SS Limitations of Google Docs
-.PP
-Google docs will appear as size -1 in \f[C]rclone ls\f[R],
-\f[C]rclone ncdu\f[R] etc, and as size 0 in anything which uses the VFS
-layer, e.g.
-\f[C]rclone mount\f[R] and \f[C]rclone serve\f[R].
-When calculating directory totals, e.g.
-in \f[C]rclone size\f[R] and \f[C]rclone ncdu\f[R], they will be counted
-in as empty files.
-.PP
+
+    rclone backend importformats remote: [options] [+]
+
+
+
+## Limitations
+
+Drive has quite a lot of rate limiting. This causes rclone to be
+limited to transferring about 2 files per second only. Individual
+files may be transferred much faster at 100s of MiB/s but lots of
+small files can take a long time.
+
+Server side copies are also subject to a separate rate limit. If you
+see User rate limit exceeded errors, wait at least 24 hours and retry.
+You can disable server-side copies with \[ga]--disable copy\[ga] to download
+and upload the files if you prefer.
+
+### Limitations of Google Docs
+
+Google docs will appear as size -1 in \[ga]rclone ls\[ga], \[ga]rclone ncdu\[ga] etc,
+and as size 0 in anything which uses the VFS layer, e.g. \[ga]rclone mount\[ga]
+and \[ga]rclone serve\[ga]. When calculating directory totals, e.g. in
+\[ga]rclone size\[ga] and \[ga]rclone ncdu\[ga], they will be counted in as empty
+files.
+
This is because rclone can\[aq]t find out the size of the Google docs
without downloading them.
-.PP
-Google docs will transfer correctly with \f[C]rclone sync\f[R],
-\f[C]rclone copy\f[R] etc as rclone knows to ignore the size when doing
-the transfer.
-.PP
+
+Google docs will transfer correctly with \[ga]rclone sync\[ga], \[ga]rclone copy\[ga]
+etc as rclone knows to ignore the size when doing the transfer.
+
However an unfortunate consequence of this is that you may not be able
-to download Google docs using \f[C]rclone mount\f[R].
-If it doesn\[aq]t work you will get a 0 sized file.
-If you try again the doc may gain its correct size and be downloadable.
-Whether it will work on not depends on the application accessing the
-mount and the OS you are running - experiment to find out if it does
-work for you!
-.SS Duplicated files
-.PP
+to download Google docs using \[ga]rclone mount\[ga]. If it doesn\[aq]t work you
+will get a 0 sized file. If you try again the doc may gain its
+correct size and be downloadable. Whether it will work or not depends
+on the application accessing the mount and the OS you are running -
+experiment to find out if it does work for you!
+
+### Duplicated files
+
Sometimes, for no reason I\[aq]ve been able to track down, drive will
-duplicate a file that rclone uploads.
-Drive unlike all the other remotes can have duplicated files.
-.PP
+duplicate a file that rclone uploads. Drive unlike all the other
+remotes can have duplicated files.
+
Duplicated files cause problems with the syncing and you will see
messages in the log about duplicates.
-.PP
-Use \f[C]rclone dedupe\f[R] to fix duplicated files.
-.PP
-Note that this isn\[aq]t just a problem with rclone, even Google Photos
-on Android duplicates files on drive sometimes.
-.SS Rclone appears to be re-copying files it shouldn\[aq]t
-.PP
+
+Use \[ga]rclone dedupe\[ga] to fix duplicated files.
+
+Note that this isn\[aq]t just a problem with rclone, even Google Photos on
+Android duplicates files on drive sometimes.
+
+### Rclone appears to be re-copying files it shouldn\[aq]t
+
The most likely cause of this is the duplicated file issue above - run
-\f[C]rclone dedupe\f[R] and check your logs for duplicate object or
-directory messages.
-.PP
-This can also be caused by a delay/caching on google drive\[aq]s end
-when comparing directory listings.
-Specifically with team drives used in combination with --fast-list.
-Files that were uploaded recently may not appear on the directory list
-sent to rclone when using --fast-list.
-.PP
+\[ga]rclone dedupe\[ga] and check your logs for duplicate object or directory
+messages.
+
+This can also be caused by a delay/caching on google drive\[aq]s end when
+comparing directory listings. Specifically with team drives used in
+combination with --fast-list. Files that were uploaded recently may
+not appear on the directory list sent to rclone when using --fast-list.
+
Waiting a moderate period of time between attempts (estimated to be
approximately 1 hour) and/or not using --fast-list both seem to be
effective in preventing the problem.
-.SS Making your own client_id
-.PP
+
+## Making your own client_id
+
When you use rclone with Google drive in its default configuration you
-are using rclone\[aq]s client_id.
-This is shared between all the rclone users.
-There is a global rate limit on the number of queries per second that
-each client_id can do set by Google.
-rclone already has a high quota and I will continue to make sure it is
-high enough by contacting Google.
-.PP
-It is strongly recommended to use your own client ID as the default
-rclone ID is heavily used.
-If you have multiple services running, it is recommended to use an API
-key for each service.
-The default Google quota is 10 transactions per second so it is
-recommended to stay under that number as if you use more than that, it
-will cause rclone to rate limit and make things slower.
-.PP
+are using rclone\[aq]s client_id. This is shared between all the rclone
+users. There is a global rate limit on the number of queries per
+second that each client_id can do, set by Google. rclone already has a
+high quota and I will continue to make sure it is high enough by
+contacting Google.
+
+It is strongly recommended to use your own client ID as the default
+rclone ID is heavily used. If you have multiple services running, it is
+recommended to use an API key for each service. The default Google
+quota is 10 transactions per second so it is recommended to stay under
+that number as, if you use more than that, it will cause rclone to rate
+limit and make things slower.
+
Here is how to create your own Google Drive client ID for rclone:
-(It need not be the same account as the Google Drive you want to access) -.IP " 2." 4 -Select a project or create a new project. -.IP " 3." 4 -Under \[dq]ENABLE APIS AND SERVICES\[dq] search for \[dq]Drive\[dq], and -enable the \[dq]Google Drive API\[dq]. -.IP " 4." 4 -Click \[dq]Credentials\[dq] in the left-side panel (not \[dq]Create -credentials\[dq], which opens the wizard), then \[dq]Create -credentials\[dq] -.IP " 5." 4 -If you already configured an \[dq]Oauth Consent Screen\[dq], then skip -to the next step; if not, click on \[dq]CONFIGURE CONSENT SCREEN\[dq] -button (near the top right corner of the right panel), then select -\[dq]External\[dq] and click on \[dq]CREATE\[dq]; on the next screen, -enter an \[dq]Application name\[dq] (\[dq]rclone\[dq] is OK); enter -\[dq]User Support Email\[dq] (your own email is OK); enter -\[dq]Developer Contact Email\[dq] (your own email is OK); then click on -\[dq]Save\[dq] (all other data is optional). -You will also have to add some scopes, including \f[C].../auth/docs\f[R] -and \f[C].../auth/drive\f[R] in order to be able to edit, create and -delete files with RClone. -You may also want to include the -\f[C]../auth/drive.metadata.readonly\f[R] scope. -After adding scopes, click \[dq]Save and continue\[dq] to add test -users. -Be sure to add your own account to the test users. -Once you\[aq]ve added yourself as a test user and saved the changes, -click again on \[dq]Credentials\[dq] on the left panel to go back to the -\[dq]Credentials\[dq] screen. -.RS 4 -.PP -(PS: if you are a GSuite user, you could also select \[dq]Internal\[dq] -instead of \[dq]External\[dq] above, but this will restrict API use to -Google Workspace users in your organisation). -.RE -.IP " 6." 4 -Click on the \[dq]+ CREATE CREDENTIALS\[dq] button at the top of the -screen, then select \[dq]OAuth client ID\[dq]. -.IP " 7." 4 -Choose an application type of \[dq]Desktop app\[dq] and click -\[dq]Create\[dq]. -(the default name is fine) -.IP " 8." 4 -It will show you a client ID and client secret. -Make a note of these. -.RS 4 -.PP -(If you selected \[dq]External\[dq] at Step 5 continue to Step 9. -If you chose \[dq]Internal\[dq] you don\[aq]t need to publish and can -skip straight to Step 10 but your destination drive must be part of the -same Google Workspace.) -.RE -.IP " 9." 4 -Go to \[dq]Oauth consent screen\[dq] and then click \[dq]PUBLISH -APP\[dq] button and confirm. -You will also want to add yourself as a test user. -.IP "10." 4 -Provide the noted client ID and client secret to rclone. -.PP -Be aware that, due to the \[dq]enhanced security\[dq] recently -introduced by Google, you are theoretically expected to \[dq]submit your -app for verification\[dq] and then wait a few weeks(!) for their -response; in practice, you can go right ahead and use the client ID and -client secret with rclone, the only issue will be a very scary -confirmation screen shown when you connect via your browser for rclone -to be able to get its token-id (but as this only happens during the -remote configuration, it\[aq]s not such a big deal). -Keeping the application in \[dq]Testing\[dq] will work as well, but the -limitation is that any grants will expire after a week, which can be -annoying to refresh constantly. -If, for whatever reason, a short grant time is not a problem, then -keeping the application in testing mode would also be sufficient. -.PP + +1. Log into the [Google API +Console](https://console.developers.google.com/) with your Google +account. 
It doesn\[aq]t matter what Google account you use. (It need not +be the same account as the Google Drive you want to access) + +2. Select a project or create a new project. + +3. Under \[dq]ENABLE APIS AND SERVICES\[dq] search for \[dq]Drive\[dq], and enable the +\[dq]Google Drive API\[dq]. + +4. Click \[dq]Credentials\[dq] in the left-side panel (not \[dq]Create +credentials\[dq], which opens the wizard). + +5. If you already configured an \[dq]Oauth Consent Screen\[dq], then skip +to the next step; if not, click on \[dq]CONFIGURE CONSENT SCREEN\[dq] button +(near the top right corner of the right panel), then select \[dq]External\[dq] +and click on \[dq]CREATE\[dq]; on the next screen, enter an \[dq]Application name\[dq] +(\[dq]rclone\[dq] is OK); enter \[dq]User Support Email\[dq] (your own email is OK); +enter \[dq]Developer Contact Email\[dq] (your own email is OK); then click on +\[dq]Save\[dq] (all other data is optional). You will also have to add some scopes, +including \[ga].../auth/docs\[ga] and \[ga].../auth/drive\[ga] in order to be able to edit, +create and delete files with RClone. You may also want to include the +\[ga]../auth/drive.metadata.readonly\[ga] scope. After adding scopes, click +\[dq]Save and continue\[dq] to add test users. Be sure to add your own account to +the test users. Once you\[aq]ve added yourself as a test user and saved the +changes, click again on \[dq]Credentials\[dq] on the left panel to go back to +the \[dq]Credentials\[dq] screen. + + (PS: if you are a GSuite user, you could also select \[dq]Internal\[dq] instead +of \[dq]External\[dq] above, but this will restrict API use to Google Workspace +users in your organisation). + +6. Click on the \[dq]+ CREATE CREDENTIALS\[dq] button at the top of the screen, +then select \[dq]OAuth client ID\[dq]. + +7. Choose an application type of \[dq]Desktop app\[dq] and click \[dq]Create\[dq]. (the default name is fine) + +8. It will show you a client ID and client secret. Make a note of these. + + (If you selected \[dq]External\[dq] at Step 5 continue to Step 9. + If you chose \[dq]Internal\[dq] you don\[aq]t need to publish and can skip straight to + Step 10 but your destination drive must be part of the same Google Workspace.) + +9. Go to \[dq]Oauth consent screen\[dq] and then click \[dq]PUBLISH APP\[dq] button and confirm. + You will also want to add yourself as a test user. + +10. Provide the noted client ID and client secret to rclone. + +Be aware that, due to the \[dq]enhanced security\[dq] recently introduced by +Google, you are theoretically expected to \[dq]submit your app for verification\[dq] +and then wait a few weeks(!) for their response; in practice, you can go right +ahead and use the client ID and client secret with rclone, the only issue will +be a very scary confirmation screen shown when you connect via your browser +for rclone to be able to get its token-id (but as this only happens during +the remote configuration, it\[aq]s not such a big deal). Keeping the application in +\[dq]Testing\[dq] will work as well, but the limitation is that any grants will expire +after a week, which can be annoying to refresh constantly. If, for whatever +reason, a short grant time is not a problem, then keeping the application in +testing mode would also be sufficient. + (Thanks to \[at]balazer on github for these instructions.) 
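+
+As a minimal sketch, the resulting remote section in your rclone.conf
+would then look something like this (the client ID and secret are
+placeholders, and \[ga]scope\[ga] is optional):
+
+    [gdrive]
+    type = drive
+    client_id = 123456789012-abcdefgh.apps.googleusercontent.com
+    client_secret = YOUR_CLIENT_SECRET
+    scope = drive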
-.PP -Sometimes, creation of an OAuth consent in Google API Console fails due -to an error message \[lq]The request failed because changes to one of -the field of the resource is not supported\[rq]. -As a convenient workaround, the necessary Google Drive API key can be -created on the Python -Quickstart (https://developers.google.com/drive/api/v3/quickstart/python) -page. -Just push the Enable the Drive API button to receive the Client ID and -Secret. + +Sometimes, creation of an OAuth consent in Google API Console fails due to an error message +\[lq]The request failed because changes to one of the field of the resource is not supported\[rq]. +As a convenient workaround, the necessary Google Drive API key can be created on the +[Python Quickstart](https://developers.google.com/drive/api/v3/quickstart/python) page. +Just push the Enable the Drive API button to receive the Client ID and Secret. Note that it will automatically create a new project in the API Console. -.SH Google Photos -.PP -The rclone backend for Google -Photos (https://www.google.com/photos/about/) is a specialized backend -for transferring photos and videos to and from Google Photos. -.PP -\f[B]NB\f[R] The Google Photos API which rclone uses has quite a few -limitations, so please read the limitations section carefully to make -sure it is suitable for your use. -.SS Configuration -.PP -The initial setup for google cloud storage involves getting a token from -Google Photos which you need to do in your browser. -\f[C]rclone config\f[R] walks you through it. -.PP -Here is an example of how to make a remote called \f[C]remote\f[R]. -First run: -.IP -.nf -\f[C] - rclone config + +# Google Photos + +The rclone backend for [Google Photos](https://www.google.com/photos/about/) is +a specialized backend for transferring photos and videos to and from +Google Photos. + +**NB** The Google Photos API which rclone uses has quite a few +limitations, so please read the [limitations section](#limitations) +carefully to make sure it is suitable for your use. + +## Configuration + +The initial setup for google cloud storage involves getting a token from Google Photos +which you need to do in your browser. \[ga]rclone config\[ga] walks you +through it. + +Here is an example of how to make a remote called \[ga]remote\[ga]. First run: + + rclone config + +This will guide you through an interactive setup process: \f[R] .fi .PP -This will guide you through an interactive setup process: -.IP -.nf -\f[C] No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -name> remote -Type of storage to configure. -Enter a string value. Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value -[snip] -XX / Google Photos - \[rs] \[dq]google photos\[dq] -[snip] -Storage> google photos -** See help for google photos backend at: https://rclone.org/googlephotos/ ** - -Google Application Client Id -Leave blank normally. -Enter a string value. Press Enter for the default (\[dq]\[dq]). -client_id> -Google Application Client Secret -Leave blank normally. -Enter a string value. Press Enter for the default (\[dq]\[dq]). -client_secret> -Set to make the Google Photos backend read only. - +n) New remote s) Set configuration password q) Quit config n/s/q> n +name> remote Type of storage to configure. +Enter a string value. +Press Enter for the default (\[dq]\[dq]). 
+Choose a number from below, or type in your own value [snip] XX / Google +Photos \ \[dq]google photos\[dq] [snip] Storage> google photos ** See +help for google photos backend at: https://rclone.org/googlephotos/ ** +.PP +Google Application Client Id Leave blank normally. +Enter a string value. +Press Enter for the default (\[dq]\[dq]). +client_id> Google Application Client Secret Leave blank normally. +Enter a string value. +Press Enter for the default (\[dq]\[dq]). +client_secret> Set to make the Google Photos backend read only. +.PP If you choose read only then rclone will only request read only access to your photos, otherwise rclone will request full access. -Enter a boolean value (true or false). Press Enter for the default (\[dq]false\[dq]). -read_only> -Edit advanced config? (y/n) -y) Yes -n) No -y/n> n -Remote config -Use web browser to automatically authenticate rclone with remote? - * Say Y if the machine running rclone has a web browser you can use - * Say N if running rclone on a (remote) machine without web browser access -If not sure try Y. If Y failed, try N. -y) Yes -n) No -y/n> y -If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth -Log in and authorize rclone for access -Waiting for code... +Enter a boolean value (true or false). +Press Enter for the default (\[dq]false\[dq]). +read_only> Edit advanced config? +(y/n) y) Yes n) No y/n> n Remote config Use web browser to automatically +authenticate rclone with remote? +* Say Y if the machine running rclone has a web browser you can use * +Say N if running rclone on a (remote) machine without web browser access +If not sure try Y. +If Y failed, try N. +y) Yes n) No y/n> y If your browser doesn\[aq]t open automatically go to +the following link: http://127.0.0.1:53682/auth Log in and authorize +rclone for access Waiting for code... Got code - -*** IMPORTANT: All media items uploaded to Google Photos with rclone -*** are stored in full resolution at original quality. These uploads -*** will count towards storage in your Google Account. - --------------------- -[remote] -type = google photos -token = {\[dq]access_token\[dq]:\[dq]XXX\[dq],\[dq]token_type\[dq]:\[dq]Bearer\[dq],\[dq]refresh_token\[dq]:\[dq]XXX\[dq],\[dq]expiry\[dq]:\[dq]2019-06-28T17:38:04.644930156+01:00\[dq]} --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -\f[R] -.fi .PP +*** IMPORTANT: All media items uploaded to Google Photos with rclone *** +are stored in full resolution at original quality. +These uploads *** will count towards storage in your Google Account. +.PP +.TS +tab(@); +lw(20.4n). +T{ +[remote] type = google photos token = +{\[dq]access_token\[dq]:\[dq]XXX\[dq],\[dq]token_type\[dq]:\[dq]Bearer\[dq],\[dq]refresh_token\[dq]:\[dq]XXX\[dq],\[dq]expiry\[dq]:\[dq]2019-06-28T17:38:04.644930156+01:00\[dq]} +T} +_ +T{ +y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y +\[ga]\[ga]\[ga] +T} +T{ See the remote setup docs (https://rclone.org/remote_setup/) for how to set it up on a machine with no Internet browser available. -.PP +T} +T{ Note that rclone runs a webserver on your local machine to collect the token as returned from Google if using web browser to automatically authenticate. @@ -42632,46 +43597,43 @@ get back the verification code. This is on \f[C]http://127.0.0.1:53682/\f[R] and this may require you to unblock it temporarily if you are running a host firewall, or use manual mode. 
.SS Layout
.PP
As Google Photos is not a general purpose cloud storage system, the
backend is laid out to help you navigate it.
.PP
The directories under \f[C]media\f[R] show different ways of
categorizing the media.
Each file will appear multiple times.
So if you want to make a backup of your google photos you might choose
to backup \f[C]remote:media/by-month\f[R].
(\f[B]NB\f[R] \f[C]remote:media/by-day\f[R] is rather slow at the moment
so avoid for syncing.)
.PP
Note that all your photos and videos will appear somewhere under
\f[C]media\f[R], but they may not appear under \f[C]album\f[R] unless
you\[aq]ve put them into albums.
.IP
.nf
\f[C]
/
- upload
    - file1.jpg
    - file2.jpg
    - ...
- media
    - all
        - file1.jpg
        - file2.jpg
        - ...
    - by-year
        - 2000
            - file1.jpg
            - ...
        - 2001
            - file2.jpg
            - ...
        - ...
    - by-month
        - 2000
            - 2000-01
                - file1.jpg
                - ...
            - 2000-02
                - file2.jpg
                - ...
        - ...
    - by-day
        - 2000
            - 2000-01-01
                - file1.jpg
                - ...
            - 2000-01-02
                - file2.jpg
                - ...
        - ...
- album
    - album name
    - album name/sub
- shared-album
    - album name
    - album name/sub
- feature
    - favorites
        - file1.jpg
        - file2.jpg
\f[R]
.fi
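.PP
Following the advice above, a backup of your library keyed by month
might look like this (the local destination path is illustrative):
.IP
.nf
\f[C]
# --interactive asks before deleting anything in the destination
rclone sync --interactive remote:media/by-month /path/to/gphotos-backup
\f[R]
.fi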
.PP
There are two writable parts of the tree, the \f[C]upload\f[R] directory
and sub directories of the \f[C]album\f[R] directory.
.PP
The \f[C]upload\f[R] directory is for uploading files you don\[aq]t want
to put into albums.
This will be empty to start with and will contain the files you\[aq]ve
uploaded for one rclone session only, becoming empty again when you
restart rclone.
The use case for this would be if you have a load of files you just want
to once off dump into Google Photos.
For repeated syncing, uploading to \f[C]album\f[R] will work better.
.PP
Directories within the \f[C]album\f[R] directory are also writeable and
you may create new directories (albums) under \f[C]album\f[R].
If you copy files with a directory hierarchy in there then rclone will
create albums with the \f[C]/\f[R] character in them.
For example if you do
.IP
.nf
\f[C]
rclone copy /path/to/images remote:album/images
\f[R]
.fi
.PP
and the images directory contains
.IP
.nf
\f[C]
images
    - file1.jpg
    dir
        file2.jpg
    dir2
        dir3
            file3.jpg
\f[R]
.fi
.PP
Then rclone will create the following albums with the following files
in them:
.IP \[bu] 2
images
.RS 2
.IP \[bu] 2
file1.jpg
.RE
.IP \[bu] 2
images/dir
.RS 2
.IP \[bu] 2
file2.jpg
.RE
.IP \[bu] 2
images/dir2/dir3
.RS 2
.IP \[bu] 2
file3.jpg
.RE
.PP
This means that you can use the \f[C]album\f[R] path pretty much like a
normal filesystem and it is a good target for repeated syncing.
.PP
The \f[C]shared-album\f[R] directory shows albums shared with you or by
you.
This is similar to the Sharing tab in the Google Photos web interface.
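.PP
For a one-off dump of files into Google Photos as described above, a
hypothetical invocation targeting the \f[C]upload\f[R] directory would
be (local path illustrative):
.IP
.nf
\f[C]
rclone copy /path/to/misc-photos remote:upload
\f[R]
.fi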
.SS Standard options
.PP
Here are the Standard options specific to google photos (Google Photos).
.SS --gphotos-client-id
.PP
OAuth Client Id.
.PP
Leave blank normally.
.PP
Properties:
.IP \[bu] 2
Config: client_id
.IP \[bu] 2
Env Var: RCLONE_GPHOTOS_CLIENT_ID
.IP \[bu] 2
Type: string
.IP \[bu] 2
Required: false
.SS --gphotos-client-secret
.PP
OAuth Client Secret.
.PP
Leave blank normally.
.PP
Properties:
.IP \[bu] 2
Config: client_secret
.IP \[bu] 2
Env Var: RCLONE_GPHOTOS_CLIENT_SECRET
.IP \[bu] 2
Type: string
.IP \[bu] 2
Required: false
.SS --gphotos-read-only
.PP
Set to make the Google Photos backend read only.
.PP
If you choose read only then rclone will only request read only access
to your photos, otherwise rclone will request full access.
.PP
Properties:
.IP \[bu] 2
Config: read_only
.IP \[bu] 2
Env Var: RCLONE_GPHOTOS_READ_ONLY
.IP \[bu] 2
Type: bool
.IP \[bu] 2
Default: false
.SS Advanced options
.PP
Here are the Advanced options specific to google photos (Google Photos).
.SS --gphotos-token
.PP
OAuth Access Token as a JSON blob.
.PP
Properties:
.IP \[bu] 2
Config: token
.IP \[bu] 2
Env Var: RCLONE_GPHOTOS_TOKEN
.IP \[bu] 2
Type: string
.IP \[bu] 2
Required: false
.SS --gphotos-auth-url
.PP
Auth server URL.
.PP
Leave blank to use the provider defaults.
.PP
Properties:
.IP \[bu] 2
Config: auth_url
.IP \[bu] 2
Env Var: RCLONE_GPHOTOS_AUTH_URL
.IP \[bu] 2
Type: string
.IP \[bu] 2
Required: false
.SS --gphotos-token-url
.PP
Token server url.
.PP
Leave blank to use the provider defaults.
.PP
Properties:
.IP \[bu] 2
Config: token_url
.IP \[bu] 2
Env Var: RCLONE_GPHOTOS_TOKEN_URL
.IP \[bu] 2
Type: string
.IP \[bu] 2
Required: false
.SS --gphotos-read-size
.PP
Set to read the size of media items.
.PP
Normally rclone does not read the size of media items since this takes
another transaction.
This isn\[aq]t necessary for syncing.
However rclone mount needs to know the size of files in advance of
reading them, so setting this flag when using rclone mount is
recommended if you want to read the media.
.PP
Properties:
.IP \[bu] 2
Config: read_size
.IP \[bu] 2
Env Var: RCLONE_GPHOTOS_READ_SIZE
.IP \[bu] 2
Type: bool
.IP \[bu] 2
Default: false
.SS --gphotos-start-year
.PP
Year limits the photos to be downloaded to those which are uploaded
after the given year.
.PP
Properties:
.IP \[bu] 2
Config: start_year
.IP \[bu] 2
Env Var: RCLONE_GPHOTOS_START_YEAR
.IP \[bu] 2
Type: int
.IP \[bu] 2
Default: 2000
.SS --gphotos-include-archived
.PP
Also view and download archived media.
.PP
By default, rclone does not request archived media.
Thus, when syncing, archived media is not visible in directory listings
or transferred.
.PP
Note that media in albums is always visible and synced, no matter their
archive status.
.PP
With this flag, archived media are always visible in directory listings
and transferred.
.PP
Without this flag, archived media will not be visible in directory
listings and won\[aq]t be transferred.
.PP
Properties:
.IP \[bu] 2
Config: include_archived
.IP \[bu] 2
Env Var: RCLONE_GPHOTOS_INCLUDE_ARCHIVED
.IP \[bu] 2
Type: bool
.IP \[bu] 2
Default: false
.SS --gphotos-encoding
.PP
The encoding for the backend.
.PP
See the encoding section in the
overview (https://rclone.org/overview/#encoding) for more info.
.PP
Properties:
.IP \[bu] 2
Config: encoding
.IP \[bu] 2
Env Var: RCLONE_GPHOTOS_ENCODING
.IP \[bu] 2
Type: MultiEncoder
.IP \[bu] 2
Default: Slash,CrLf,InvalidUtf8,Dot
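.PP
Any of these options can also be supplied through the environment
variable named in its Properties; for example, a hypothetical read-only
listing:
.IP
.nf
\f[C]
RCLONE_GPHOTOS_READ_ONLY=true rclone lsd remote:album
\f[R]
.fi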
.SS Limitations
.PP
Only images and videos can be uploaded.
If you attempt to upload non videos or images or formats that Google
Photos doesn\[aq]t understand, rclone will upload the file, then Google
Photos will give an error when it is turned into a media item.
.PP
Note that all media items uploaded to Google Photos through the API are
stored in full resolution at \[dq]original quality\[dq] and
\f[B]will\f[R] count towards your storage quota in your Google Account.
The API does \f[B]not\f[R] offer a way to upload in \[dq]high
quality\[dq] mode.
.PP
\f[C]rclone about\f[R] is not supported by the Google Photos backend.
Backends without this capability cannot determine free space for an
rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member
of an rclone union remote.
.PP
See List of backends that do not support rclone
about (https://rclone.org/overview/#optional-features) and rclone
about (https://rclone.org/commands/rclone_about/).
.SS Downloading Images
.PP
When Images are downloaded this strips EXIF location (according to the
docs and my tests).
This is a limitation of the Google Photos API and is covered by bug
#112096115 (https://issuetracker.google.com/issues/112096115).
.PP
\f[B]The current google API does not allow photos to be downloaded at
original resolution.
This is very important if you are, for example, relying on \[dq]Google
Photos\[dq] as a backup of your photos.
You will not be able to use rclone to redownload original images.
You could use \[aq]google takeout\[aq] to recover the original photos
as a last resort\f[R]
.SS Downloading Videos
.PP
When videos are downloaded they are downloaded in a really compressed
version of the video compared to downloading it via the Google Photos
web interface.
This is covered by bug
#113672044 (https://issuetracker.google.com/issues/113672044).
.SS Duplicates
.PP
If a file name is duplicated in a directory then rclone will add the
file ID into its name.
So two files called \f[C]file.jpg\f[R] would then appear as
\f[C]file {123456}.jpg\f[R] and \f[C]file {ABCDEF}.jpg\f[R] (the actual
IDs are a lot longer alas!).
.PP
If you upload the same image (with the same binary data) twice then
Google Photos will deduplicate it.
However it will retain the filename from the first upload which may
confuse rclone.
For example if you uploaded an image to \f[C]upload\f[R] then uploaded
the same image to \f[C]album/my_album\f[R] the filename of the image in
\f[C]album/my_album\f[R] will be what it was uploaded with initially,
not what you uploaded it with to \f[C]album\f[R].
In practise this shouldn\[aq]t cause too many problems.
.SS Modified time
.PP
The date shown of media in Google Photos is the creation date as
determined by the EXIF information, or the upload date if that is not
known.
.PP
This is not changeable by rclone and is not the modification date of
the media on local disk.
This means that rclone cannot use the dates from Google Photos for
syncing purposes.
.SS Size
.PP
The Google Photos API does not return the size of media.
This means that when syncing to Google Photos, rclone can only do a
file existence check.
.PP
It is possible to read the size of the media, but this needs an extra
HTTP HEAD request per media item so is \f[B]very slow\f[R] and uses up
a lot of transactions.
This can be enabled with the \f[C]--gphotos-read-size\f[R] option or
the \f[C]read_size = true\f[R] config parameter.
.PP
If you want to use the backend with \f[C]rclone mount\f[R] you may need
to enable this flag (depending on your OS and application using the
photos) otherwise you may not be able to read media off the mount.
You\[aq]ll need to experiment to see if it works for you without the
flag.
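.PP
For instance, a mount that reads media sizes up front might look like
this (mount point illustrative):
.IP
.nf
\f[C]
# sizes are fetched up front: listings are slower, but reads work
rclone mount --gphotos-read-size remote:media /mnt/gphotos
\f[R]
.fi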
.SS Albums
.PP
Rclone can only upload files to albums it created.
This is a limitation of the Google Photos
API (https://developers.google.com/photos/library/guides/manage-albums).
.PP
Rclone can remove files it uploaded from albums it created only.
.SS Deleting files
.PP
Rclone can remove files from albums it created, but note that the
Google Photos API does not allow media to be deleted permanently so
this media will still remain.
See bug #109759781 (https://issuetracker.google.com/issues/109759781).
.PP
Rclone cannot delete files anywhere except under \f[C]album\f[R].
.SS Deleting albums
.PP
The Google Photos API does not support deleting albums - see bug
#135714733 (https://issuetracker.google.com/issues/135714733).
.SH Hasher
.PP
Hasher is a special overlay backend to create remotes which handle
checksums for other remotes.
Its main functions include:
.IP \[bu] 2
Emulate hash types unimplemented by backends
.IP \[bu] 2
Cache checksums to help with slow hashing of large local or (S)FTP
files
.IP \[bu] 2
Warm up checksum cache from external SUM files
.SS Getting started
.PP
To use Hasher, first set up the underlying remote following the
configuration instructions for that remote.
You can also use a local pathname instead of a remote.
Check that your base remote is working.
.PP
Let\[aq]s call the base remote \f[C]myRemote:path\f[R] here.
Note that anything inside \f[C]myRemote:path\f[R] will be handled by
hasher and anything outside won\[aq]t.
This means that if you are using a bucket based remote (S3, B2, Swift)
then you should put the bucket in the remote \f[C]s3:bucket\f[R].
.PP
Now proceed to interactive or manual configuration.
.SS Interactive configuration
.PP
Run \f[C]rclone config\f[R]:
.IP
.nf
\f[C]
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> Hasher1
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Handle checksums for other remotes
   \[rs] \[dq]hasher\[dq]
[snip]
Storage> hasher
Remote to cache checksums for, like myremote:mypath.
Enter a string value. Press Enter for the default (\[dq]\[dq]).
remote> myRemote:path
Comma separated list of supported checksum types.
Enter a string value. Press Enter for the default (\[dq]md5,sha1\[dq]).
hashsums> md5
Maximum time to keep checksums in cache. 0 = no cache, off = cache forever.
max_age> off
Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
--------------------
[Hasher1]
type = hasher
remote = myRemote:path
hashsums = md5
max_age = off
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
\f[R]
.fi
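.PP
Once created, a quick way to check the overlay is working is to run any
simple command against the new remote, for example:
.IP
.nf
\f[C]
rclone lsf Hasher1:
\f[R]
.fi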
.IP
.nf
\f[C]
### Manual configuration

Run \[ga]rclone config path\[ga] to see the path of current active config file,
usually \[ga]YOURHOME/.config/rclone/rclone.conf\[ga].
Open it in your favorite text editor, find section for the base remote
and create new section for hasher like in the following examples:

    [Hasher1]
    type = hasher
    remote = myRemote:path
    hashes = md5
    max_age = off

    [Hasher2]
    type = hasher
    remote = /local/path
    hashes = dropbox,sha1
    max_age = 24h

Hasher takes basically the following parameters:
- \[ga]remote\[ga] is required,
- \[ga]hashes\[ga] is a comma separated list of supported checksums
  (by default \[ga]md5,sha1\[ga]),
- \[ga]max_age\[ga] - maximum time to keep a checksum value in the cache,
  \[ga]0\[ga] will disable caching completely,
  \[ga]off\[ga] will cache \[dq]forever\[dq] (that is until the files get changed).

Make sure the \[ga]remote\[ga] has \[ga]:\[ga] (colon) in it. If you specify the remote without
a colon then rclone will use a local directory of that name. So if you use
a remote of \[ga]/local/path\[ga] then rclone will handle hashes for that directory.
If you use \[ga]remote = name\[ga] literally then rclone will put files
**in a directory called \[ga]name\[ga] located under current directory**.

## Usage

### Basic operations

Now you can use it as \[ga]Hasher2:subdir/file\[ga] instead of base remote.
Hasher will transparently update cache with new checksums when a file
is fully read or overwritten, like:

    rclone copy External:path/file Hasher:dest/path

    rclone cat Hasher:path/to/file > /dev/null
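
For example, to print checksums for a subtree, serving them from the
cache where possible (remote name as configured above):

    # served from the checksum cache when fingerprints still match
    rclone hashsum MD5 Hasher2:subdir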

The way to refresh **all** cached checksums (even unsupported by the base backend)
for a subtree is to **re-download** all files in the subtree. For example,
use \[ga]hashsum --download\[ga] using **any** supported hashsum on the command line
(we just care to re-read):

    rclone hashsum MD5 --download Hasher:path/to/subtree > /dev/null

    rclone backend dump Hasher:path/to/subtree

You can print or drop hashsum cache using custom backend commands:

    rclone backend dump Hasher:dir/subdir

    rclone backend drop Hasher:

### Pre-Seed from a SUM File

Hasher supports two backend commands: generic SUM file \[ga]import\[ga] and faster
but less consistent \[ga]stickyimport\[ga].

    rclone backend import Hasher:dir/subdir SHA1 /path/to/SHA1SUM [--checkers 4]

Instead of SHA1 it can be any hash supported by the remote. The last argument
can point to either a local or an \[ga]other-remote:path\[ga] text file in SUM format.
The command will parse the SUM file, then walk down the path given by the
first argument, snapshot current fingerprints and fill in the cache entries
correspondingly.
- Paths in the SUM file are treated as relative to \[ga]hasher:dir/subdir\[ga].
- The command will **not** check that supplied values are correct.
  You **must know** what you are doing.
- This is a one-time action. The SUM file will not get \[dq]attached\[dq] to the
  remote. Cache entries can still be overwritten later, should the object\[aq]s
  fingerprint change.
+- The tree walk can take long depending on the tree size. You can increase + \[ga]--checkers\[ga] to make it faster. Or use \[ga]stickyimport\[ga] if you don\[aq]t care + about fingerprints and consistency. +\f[R] +.fi +.PP +rclone backend stickyimport hasher:path/to/data sha1 +remote:/path/to/sum.sha1 +.IP +.nf +\f[C] +\[ga]stickyimport\[ga] is similar to \[ga]import\[ga] but works much faster because it +does not need to stat existing files and skips initial tree walk. +Instead of binding cache entries to file fingerprints it creates _sticky_ +entries bound to the file name alone ignoring size, modification time etc. +Such hash entries can be replaced only by \[ga]purge\[ga], \[ga]delete\[ga], \[ga]backend drop\[ga] +or by full re-read/re-write of the files. + +## Configuration reference + + +### Standard options + +Here are the Standard options specific to hasher (Better checksums for other remotes). + +#### --hasher-remote + +Remote to cache checksums for (e.g. myRemote:path). + +Properties: + +- Config: remote +- Env Var: RCLONE_HASHER_REMOTE +- Type: string +- Required: true + +#### --hasher-hashes + +Comma separated list of supported checksum types. + +Properties: + +- Config: hashes +- Env Var: RCLONE_HASHER_HASHES +- Type: CommaSepList +- Default: md5,sha1 + +#### --hasher-max-age + +Maximum time to keep checksums in cache (0 = no cache, off = cache forever). + +Properties: + +- Config: max_age +- Env Var: RCLONE_HASHER_MAX_AGE +- Type: Duration +- Default: off + +### Advanced options + +Here are the Advanced options specific to hasher (Better checksums for other remotes). + +#### --hasher-auto-size + +Auto-update checksum for files smaller than this size (disabled by default). + +Properties: + +- Config: auto_size +- Env Var: RCLONE_HASHER_AUTO_SIZE +- Type: SizeSuffix +- Default: 0 + +### Metadata + +Any metadata supported by the underlying remote is read and written. + +See the [metadata](https://rclone.org/docs/#metadata) docs for more info. + +## Backend commands + +Here are the commands specific to the hasher backend. + +Run them with + + rclone backend COMMAND remote: + +The help below will explain what arguments each command takes. + +See the [backend](https://rclone.org/commands/rclone_backend/) command for more +info on how to pass options and arguments. + +These can be run on a running backend using the rc command +[backend/command](https://rclone.org/rc/#backend-command). + +### drop + +Drop cache + + rclone backend drop remote: [options] [+] + +Completely drop checksum cache. +Usage Example: + rclone backend drop hasher: + + +### dump + +Dump the database + + rclone backend dump remote: [options] [+] + +Dump cache records covered by the current remote + +### fulldump + +Full dump of the database + + rclone backend fulldump remote: [options] [+] + +Dump all cache records in the database + +### import + +Import a SUM file + + rclone backend import remote: [options] [+] + +Amend hash cache from a SUM file and bind checksums to files by size/time. +Usage Example: + rclone backend import hasher:subdir md5 /path/to/sum.md5 + + +### stickyimport + +Perform fast import of a SUM file + + rclone backend stickyimport remote: [options] [+] + +Fill hash cache from a SUM file without verifying file fingerprints. +Usage Example: + rclone backend stickyimport hasher:subdir md5 remote:path/to/sum.md5 + + + + +## Implementation details (advanced) + +This section explains how various rclone operations work on a hasher remote. + +**Disclaimer. 
This section describes current implementation which can
change in future rclone versions!**

### Hashsum command

The \[ga]rclone hashsum\[ga] (or \[ga]md5sum\[ga] or \[ga]sha1sum\[ga]) command will:

1. if requested hash is supported by lower level, just pass it.
2. if object size is below \[ga]auto_size\[ga] then download object and calculate
   _requested_ hashes on the fly.
3. if unsupported and the size is big enough, build object \[ga]fingerprint\[ga]
   (including size, modtime if supported, first-found _other_ hash if any).
4. if the strict match is found in cache for the requested remote, return
   the stored hash.
5. if remote found but fingerprint mismatched, then purge the entry and
   proceed to step 6.
6. if remote not found or had no requested hash type or after step 5:
   download object, calculate all _supported_ hashes on the fly and store
   in cache; return requested hash.

### Other operations

- whenever a file is uploaded or downloaded **in full**, capture the stream
  to calculate all supported hashes on the fly and update database
- server-side \[ga]move\[ga] will update keys of existing cache entries
- \[ga]deletefile\[ga] will remove a single cache entry
- \[ga]purge\[ga] will remove all cache entries under the purged path

Note that setting \[ga]max_age = 0\[ga] will disable checksum caching completely.

If you set \[ga]max_age = off\[ga], checksums in cache will never age, unless you
fully rewrite or delete the file.

### Cache storage

Cached checksums are stored as \[ga]bolt\[ga] database files under rclone cache
directory, usually \[ga]\[ti]/.cache/rclone/kv/\[ga]. Databases are maintained
one per _base_ backend, named like \[ga]BaseRemote\[ti]hasher.bolt\[ga].
Checksums for multiple \[ga]alias\[ga]-es into a single base backend
will be stored in the single database. All local paths are treated as
aliases into the \[ga]local\[ga] backend (unless encrypted or chunked) and stored
in \[ga]\[ti]/.cache/rclone/kv/local\[ti]hasher.bolt\[ga].
Databases can be shared between multiple rclone processes.

# HDFS

[HDFS](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) is a
distributed file-system, part of the [Apache Hadoop](https://hadoop.apache.org/) framework.

Paths are specified as \[ga]remote:\[ga] or \[ga]remote:path/to/dir\[ga].

## Configuration

Here is an example of how to make a remote called \[ga]remote\[ga]. First run:

    rclone config

This will guide you through an interactive setup process:

    No remotes found, make a new one?
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
    name> remote
    Type of storage to configure.
    Enter a string value. Press Enter for the default (\[dq]\[dq]).
    Choose a number from below, or type in your own value
    [skip]
    XX / Hadoop distributed file system
       \[rs] \[dq]hdfs\[dq]
    [skip]
    Storage> hdfs
    ** See help for hdfs backend at: https://rclone.org/hdfs/ **

    hadoop name node and port
    Enter a string value. Press Enter for the default (\[dq]\[dq]).
    Choose a number from below, or type in your own value
     1 / Connect to host namenode at port 8020
       \[rs] \[dq]namenode:8020\[dq]
    namenode> namenode.hadoop:8020
    hadoop user name
    Enter a string value. Press Enter for the default (\[dq]\[dq]).
    Choose a number from below, or type in your own value
     1 / Connect to hdfs as root
       \[rs] \[dq]root\[dq]
    username> root
    Edit advanced config? (y/n)
    y) Yes
    n) No (default)
    y/n> n
    Remote config
    --------------------
    [remote]
    type = hdfs
    namenode = namenode.hadoop:8020
    username = root
    --------------------
    y) Yes this is OK (default)
    e) Edit this remote
    d) Delete this remote
    y/e/d> y
    Current remotes:

    Name                 Type
    ====                 ====
    hadoop               hdfs

    e) Edit existing remote
    n) New remote
    d) Delete remote
    r) Rename remote
    c) Copy remote
    s) Set configuration password
    q) Quit config
    e/n/d/r/c/s/q> q

This remote is called \[ga]remote\[ga] and can now be used like this

See all the top level directories

    rclone lsd remote:

List the contents of a directory

    rclone ls remote:directory

Sync the remote \[ga]directory\[ga] to \[ga]/home/local/directory\[ga], deleting any excess files.

    rclone sync --interactive remote:directory /home/local/directory

### Setting up your own HDFS instance for testing

You may start with a [manual setup](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html)
or use the docker image from the tests:

If you want to build the docker image

    git clone https://github.com/rclone/rclone.git
    cd rclone/fstest/testserver/images/test-hdfs
    docker build --rm -t rclone/test-hdfs .

Or you can just use the latest one pushed

    docker run --rm --name \[dq]rclone-hdfs\[dq] -p 127.0.0.1:9866:9866 -p 127.0.0.1:8020:8020 --hostname \[dq]rclone-hdfs\[dq] rclone/test-hdfs

**NB** it needs a few seconds to start up.

For this docker image the remote needs to be configured like this:

    [remote]
    type = hdfs
    namenode = 127.0.0.1:8020
    username = root

You can stop this image with \[ga]docker kill rclone-hdfs\[ga] (**NB** it does not use volumes, so all data
uploaded will be lost.)

### Modified time

Time accurate to 1 second is stored.

### Checksum

No checksums are implemented.

### Usage information

You can use the \[ga]rclone about remote:\[ga] command which will display filesystem size and current usage.

### Restricted filename characters

In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
the following characters are also replaced:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| : | 0x3A | \[uFF1A] |

Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8).


### Standard options

Here are the Standard options specific to hdfs (Hadoop distributed file system).

#### --hdfs-namenode

Hadoop name node and port.

E.g. \[dq]namenode:8020\[dq] to connect to host namenode at port 8020.

Properties:

- Config: namenode
- Env Var: RCLONE_HDFS_NAMENODE
- Type: string
- Required: true

#### --hdfs-username

Hadoop user name.

Properties:

- Config: username
- Env Var: RCLONE_HDFS_USERNAME
- Type: string
- Required: false
- Examples:
    - \[dq]root\[dq]
        - Connect to hdfs as root.

### Advanced options

Here are the Advanced options specific to hdfs (Hadoop distributed file system).

#### --hdfs-service-principal-name

Kerberos service principal name for the namenode.

Enables KERBEROS authentication. Specifies the Service Principal Name
(SERVICE/FQDN) for the namenode. E.g. \[dq]hdfs/namenode.hadoop.docker\[dq]
for namenode running as service \[aq]hdfs\[aq] with FQDN \[aq]namenode.hadoop.docker\[aq].

Properties:

- Config: service_principal_name
- Env Var: RCLONE_HDFS_SERVICE_PRINCIPAL_NAME
- Type: string
- Required: false

#### --hdfs-data-transfer-protection

Kerberos data transfer protection: authentication|integrity|privacy.

Specifies whether or not authentication, data signature integrity
checks, and wire encryption are required when communicating with
the datanodes. Possible values are \[aq]authentication\[aq], \[aq]integrity\[aq]
and \[aq]privacy\[aq]. Used only with KERBEROS enabled.

Properties:

- Config: data_transfer_protection
- Env Var: RCLONE_HDFS_DATA_TRANSFER_PROTECTION
- Type: string
- Required: false
- Examples:
    - \[dq]privacy\[dq]
        - Ensure authentication, integrity and encryption enabled.

#### --hdfs-encoding

The encoding for the backend.

See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.

Properties:

- Config: encoding
- Env Var: RCLONE_HDFS_ENCODING
- Type: MultiEncoder
- Default: Slash,Colon,Del,Ctl,InvalidUtf8,Dot



## Limitations

- No server-side \[ga]Move\[ga] or \[ga]DirMove\[ga].
- Checksums not implemented.

# HiDrive

Paths are specified as \[ga]remote:path\[ga]

Paths may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga].
+ The initial setup for hidrive involves getting a token from HiDrive which you need to do in your browser. -\f[C]rclone config\f[R] walks you through it. -.SS Configuration -.PP -Here is an example of how to make a remote called \f[C]remote\f[R]. -First run: -.IP -.nf -\f[C] - rclone config -\f[R] -.fi -.PP -This will guide you through an interactive setup process: -.IP -.nf -\f[C] -No remotes found - make a new one -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -name> remote -Type of storage to configure. -Choose a number from below, or type in your own value -[snip] -XX / HiDrive - \[rs] \[dq]hidrive\[dq] -[snip] -Storage> hidrive -OAuth Client Id - Leave blank normally. -client_id> -OAuth Client Secret - Leave blank normally. -client_secret> -Access permissions that rclone should use when requesting access from HiDrive. -Leave blank normally. -scope_access> -Edit advanced config? -y/n> n -Use web browser to automatically authenticate rclone with remote? - * Say Y if the machine running rclone has a web browser you can use - * Say N if running rclone on a (remote) machine without web browser access -If not sure try Y. If Y failed, try N. -y/n> y -If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth?state=xxxxxxxxxxxxxxxxxxxxxx -Log in and authorize rclone for access -Waiting for code... -Got code --------------------- -[remote] -type = hidrive -token = {\[dq]access_token\[dq]:\[dq]xxxxxxxxxxxxxxxxxxxx\[dq],\[dq]token_type\[dq]:\[dq]Bearer\[dq],\[dq]refresh_token\[dq]:\[dq]xxxxxxxxxxxxxxxxxxxxxxx\[dq],\[dq]expiry\[dq]:\[dq]xxxxxxxxxxxxxxxxxxxxxxx\[dq]} --------------------- -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y -\f[R] -.fi -.PP -\f[B]You should be aware that OAuth-tokens can be used to access your -account and hence should not be shared with other persons.\f[R] See the -below section for more information. -.PP -See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. -.PP -Note that rclone runs a webserver on your local machine to collect the -token as returned from HiDrive. -This only runs from the moment it opens your browser to the moment you -get back the verification code. -The webserver runs on \f[C]http://127.0.0.1:53682/\f[R]. -If local port \f[C]53682\f[R] is protected by a firewall you may need to -temporarily unblock the firewall to complete authorization. -.PP -Once configured you can then use \f[C]rclone\f[R] like this, -.PP -List directories in top level of your HiDrive root folder -.IP -.nf -\f[C] -rclone lsd remote: -\f[R] -.fi -.PP -List all the files in your HiDrive filesystem -.IP -.nf -\f[C] -rclone ls remote: -\f[R] -.fi -.PP -To copy a local directory to a HiDrive directory called backup -.IP -.nf -\f[C] -rclone copy /home/source remote:backup -\f[R] -.fi -.SS Keeping your tokens safe -.PP -Any OAuth-tokens will be stored by rclone in the remote\[aq]s -configuration file as unencrypted text. -Anyone can use a valid refresh-token to access your HiDrive filesystem -without knowing your password. -Therefore you should make sure no one else can access your -configuration. -.PP -It is possible to encrypt rclone\[aq]s configuration file. -You can find information on securing your configuration file by viewing -the configuration encryption -docs (https://rclone.org/docs/#configuration-encryption). 
-.SS Invalid refresh token -.PP -As can be verified here (https://developer.hidrive.com/basics-flows/), -each \f[C]refresh_token\f[R] (for Native Applications) is valid for 60 -days. -If used to access HiDrivei, its validity will be automatically extended. -.PP -This means that if you -.IP \[bu] 2 -Don\[aq]t use the HiDrive remote for 60 days -.PP -then rclone will return an error which includes a text that implies the -refresh token is \f[I]invalid\f[R] or \f[I]expired\f[R]. -.PP -To fix this you will need to authorize rclone to access your HiDrive -account again. -.PP -Using -.IP -.nf -\f[C] -rclone config reconnect remote: -\f[R] -.fi -.PP -the process is very similar to the process of initial setup exemplified -before. -.SS Modified time and hashes -.PP -HiDrive allows modification times to be set on objects accurate to 1 -second. -.PP -HiDrive supports its own hash type (https://static.hidrive.com/dev/0001) -which is used to verify the integrity of file contents after successful -transfers. -.SS Restricted filename characters -.PP -HiDrive cannot store files or folders that include \f[C]/\f[R] (0x2F) or -null-bytes (0x00) in their name. -Any other characters can be used in the names of files or folders. -Additionally, files or folders cannot be named either of the following: -\f[C].\f[R] or \f[C]..\f[R] -.PP -Therefore rclone will automatically replace these characters, if files -or folders are stored or accessed with such names. -.PP -You can read about how this filename encoding works in general here. -.PP -Keep in mind that HiDrive only supports file or folder names with a -length of 255 characters or less. -.SS Transfers -.PP -HiDrive limits file sizes per single request to a maximum of 2 GiB. -To allow storage of larger files and allow for better upload -performance, the hidrive backend will use a chunked transfer for files -larger than 96 MiB. -Rclone will upload multiple parts/chunks of the file at the same time. -Chunks in the process of being uploaded are buffered in memory, so you -may want to restrict this behaviour on systems with limited resources. -.PP -You can customize this behaviour using the following options: -.IP \[bu] 2 -\f[C]chunk_size\f[R]: size of file parts -.IP \[bu] 2 -\f[C]upload_cutoff\f[R]: files larger or equal to this in size will use -a chunked transfer -.IP \[bu] 2 -\f[C]upload_concurrency\f[R]: number of file-parts to upload at the same -time -.PP -See the below section about configuration options for more details. -.SS Root folder -.PP -You can set the root folder for rclone. -This is the directory that rclone considers to be the root of your -HiDrive. -.PP -Usually, you will leave this blank, and rclone will use the root of the -account. -.PP -However, you can set this to restrict rclone to a specific folder -hierarchy. -.PP -This works by prepending the contents of the \f[C]root_prefix\f[R] -option to any paths accessed by rclone. -For example, the following two ways to access the home directory are -equivalent: -.IP -.nf -\f[C] -rclone lsd --hidrive-root-prefix=\[dq]/users/test/\[dq] remote:path +\[ga]rclone config\[ga] walks you through it. -rclone lsd remote:/users/test/path +## Configuration + +Here is an example of how to make a remote called \[ga]remote\[ga]. First run: + + rclone config + +This will guide you through an interactive setup process: \f[R] .fi .PP +No remotes found - make a new one n) New remote s) Set configuration +password q) Quit config n/s/q> n name> remote Type of storage to +configure. 
+Choose a number from below, or type in your own value [snip] XX / +HiDrive \ \[dq]hidrive\[dq] [snip] Storage> hidrive OAuth Client Id - +Leave blank normally. +client_id> OAuth Client Secret - Leave blank normally. +client_secret> Access permissions that rclone should use when requesting +access from HiDrive. +Leave blank normally. +scope_access> Edit advanced config? +y/n> n Use web browser to automatically authenticate rclone with remote? +* Say Y if the machine running rclone has a web browser you can use * +Say N if running rclone on a (remote) machine without web browser access +If not sure try Y. +If Y failed, try N. +y/n> y If your browser doesn\[aq]t open automatically go to the +following link: http://127.0.0.1:53682/auth?state=xxxxxxxxxxxxxxxxxxxxxx +Log in and authorize rclone for access Waiting for code... +Got code -------------------- [remote] type = hidrive token = +{\[dq]access_token\[dq]:\[dq]xxxxxxxxxxxxxxxxxxxx\[dq],\[dq]token_type\[dq]:\[dq]Bearer\[dq],\[dq]refresh_token\[dq]:\[dq]xxxxxxxxxxxxxxxxxxxxxxx\[dq],\[dq]expiry\[dq]:\[dq]xxxxxxxxxxxxxxxxxxxxxxx\[dq]} +-------------------- y) Yes this is OK (default) e) Edit this remote d) +Delete this remote y/e/d> y +.IP +.nf +\f[C] +**You should be aware that OAuth-tokens can be used to access your account +and hence should not be shared with other persons.** +See the [below section](#keeping-your-tokens-safe) for more information. + +See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a +machine with no Internet browser available. + +Note that rclone runs a webserver on your local machine to collect the +token as returned from HiDrive. This only runs from the moment it opens +your browser to the moment you get back the verification code. +The webserver runs on \[ga]http://127.0.0.1:53682/\[ga]. +If local port \[ga]53682\[ga] is protected by a firewall you may need to temporarily +unblock the firewall to complete authorization. + +Once configured you can then use \[ga]rclone\[ga] like this, + +List directories in top level of your HiDrive root folder + + rclone lsd remote: + +List all the files in your HiDrive filesystem + + rclone ls remote: + +To copy a local directory to a HiDrive directory called backup + + rclone copy /home/source remote:backup + +### Keeping your tokens safe + +Any OAuth-tokens will be stored by rclone in the remote\[aq]s configuration file as unencrypted text. +Anyone can use a valid refresh-token to access your HiDrive filesystem without knowing your password. +Therefore you should make sure no one else can access your configuration. + +It is possible to encrypt rclone\[aq]s configuration file. +You can find information on securing your configuration file by viewing the [configuration encryption docs](https://rclone.org/docs/#configuration-encryption). + +### Invalid refresh token + +As can be verified [here](https://developer.hidrive.com/basics-flows/), +each \[ga]refresh_token\[ga] (for Native Applications) is valid for 60 days. +If used to access HiDrivei, its validity will be automatically extended. + +This means that if you + + * Don\[aq]t use the HiDrive remote for 60 days + +then rclone will return an error which includes a text +that implies the refresh token is *invalid* or *expired*. + +To fix this you will need to authorize rclone to access your HiDrive account again. + +Using + + rclone config reconnect remote: + +the process is very similar to the process of initial setup exemplified before. 
+ +### Modified time and hashes + +HiDrive allows modification times to be set on objects accurate to 1 second. + +HiDrive supports [its own hash type](https://static.hidrive.com/dev/0001) +which is used to verify the integrity of file contents after successful transfers. + +### Restricted filename characters + +HiDrive cannot store files or folders that include +\[ga]/\[ga] (0x2F) or null-bytes (0x00) in their name. +Any other characters can be used in the names of files or folders. +Additionally, files or folders cannot be named either of the following: \[ga].\[ga] or \[ga]..\[ga] + +Therefore rclone will automatically replace these characters, +if files or folders are stored or accessed with such names. + +You can read about how this filename encoding works in general +[here](overview/#restricted-filenames). + +Keep in mind that HiDrive only supports file or folder names +with a length of 255 characters or less. + +### Transfers + +HiDrive limits file sizes per single request to a maximum of 2 GiB. +To allow storage of larger files and allow for better upload performance, +the hidrive backend will use a chunked transfer for files larger than 96 MiB. +Rclone will upload multiple parts/chunks of the file at the same time. +Chunks in the process of being uploaded are buffered in memory, +so you may want to restrict this behaviour on systems with limited resources. + +You can customize this behaviour using the following options: + +* \[ga]chunk_size\[ga]: size of file parts +* \[ga]upload_cutoff\[ga]: files larger or equal to this in size will use a chunked transfer +* \[ga]upload_concurrency\[ga]: number of file-parts to upload at the same time + See the below section about configuration options for more details. -.SS Directory member count -.PP -By default, rclone will know the number of directory members contained -in a directory. -For example, \f[C]rclone lsd\f[R] uses this information. -.PP -The acquisition of this information will result in additional time costs -for HiDrive\[aq]s API. -When dealing with large directory structures, it may be desirable to -circumvent this time cost, especially when this information is not -explicitly needed. -For this, the \f[C]disable_fetching_member_count\f[R] option can be -used. -.PP + +### Root folder + +You can set the root folder for rclone. +This is the directory that rclone considers to be the root of your HiDrive. + +Usually, you will leave this blank, and rclone will use the root of the account. + +However, you can set this to restrict rclone to a specific folder hierarchy. + +This works by prepending the contents of the \[ga]root_prefix\[ga] option +to any paths accessed by rclone. +For example, the following two ways to access the home directory are equivalent: + + rclone lsd --hidrive-root-prefix=\[dq]/users/test/\[dq] remote:path + + rclone lsd remote:/users/test/path + See the below section about configuration options for more details. -.SS Standard options -.PP + +### Directory member count + +By default, rclone will know the number of directory members contained in a directory. +For example, \[ga]rclone lsd\[ga] uses this information. + +The acquisition of this information will result in additional time costs for HiDrive\[aq]s API. +When dealing with large directory structures, it may be desirable to circumvent this time cost, +especially when this information is not explicitly needed. +For this, the \[ga]disable_fetching_member_count\[ga] option can be used. + +See the below section about configuration options for more details. 
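+
+For example, to speed up the upload of a few large files over a fast
+connection, and to skip fetching member counts while copying, you could try
+something along these lines (the values are only illustrative, not
+recommendations):
+
+    rclone copy /home/source remote:backup \
+        --hidrive-upload-cutoff 128M \
+        --hidrive-chunk-size 64M \
+        --hidrive-upload-concurrency 8 \
+        --hidrive-disable-fetching-member-count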
+ + +### Standard options + Here are the Standard options specific to hidrive (HiDrive). -.SS --hidrive-client-id -.PP + +#### --hidrive-client-id + OAuth Client Id. -.PP + Leave blank normally. -.PP + Properties: -.IP \[bu] 2 -Config: client_id -.IP \[bu] 2 -Env Var: RCLONE_HIDRIVE_CLIENT_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --hidrive-client-secret -.PP + +- Config: client_id +- Env Var: RCLONE_HIDRIVE_CLIENT_ID +- Type: string +- Required: false + +#### --hidrive-client-secret + OAuth Client Secret. -.PP + Leave blank normally. -.PP + Properties: -.IP \[bu] 2 -Config: client_secret -.IP \[bu] 2 -Env Var: RCLONE_HIDRIVE_CLIENT_SECRET -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --hidrive-scope-access -.PP -Access permissions that rclone should use when requesting access from -HiDrive. -.PP + +- Config: client_secret +- Env Var: RCLONE_HIDRIVE_CLIENT_SECRET +- Type: string +- Required: false + +#### --hidrive-scope-access + +Access permissions that rclone should use when requesting access from HiDrive. + Properties: -.IP \[bu] 2 -Config: scope_access -.IP \[bu] 2 -Env Var: RCLONE_HIDRIVE_SCOPE_ACCESS -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]rw\[dq] -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]rw\[dq] -.RS 2 -.IP \[bu] 2 -Read and write access to resources. -.RE -.IP \[bu] 2 -\[dq]ro\[dq] -.RS 2 -.IP \[bu] 2 -Read-only access to resources. -.RE -.RE -.SS Advanced options -.PP + +- Config: scope_access +- Env Var: RCLONE_HIDRIVE_SCOPE_ACCESS +- Type: string +- Default: \[dq]rw\[dq] +- Examples: + - \[dq]rw\[dq] + - Read and write access to resources. + - \[dq]ro\[dq] + - Read-only access to resources. + +### Advanced options + Here are the Advanced options specific to hidrive (HiDrive). -.SS --hidrive-token -.PP + +#### --hidrive-token + OAuth Access Token as a JSON blob. -.PP + Properties: -.IP \[bu] 2 -Config: token -.IP \[bu] 2 -Env Var: RCLONE_HIDRIVE_TOKEN -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --hidrive-auth-url -.PP + +- Config: token +- Env Var: RCLONE_HIDRIVE_TOKEN +- Type: string +- Required: false + +#### --hidrive-auth-url + Auth server URL. -.PP + Leave blank to use the provider defaults. -.PP + Properties: -.IP \[bu] 2 -Config: auth_url -.IP \[bu] 2 -Env Var: RCLONE_HIDRIVE_AUTH_URL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --hidrive-token-url -.PP + +- Config: auth_url +- Env Var: RCLONE_HIDRIVE_AUTH_URL +- Type: string +- Required: false + +#### --hidrive-token-url + Token server url. -.PP + Leave blank to use the provider defaults. -.PP + Properties: -.IP \[bu] 2 -Config: token_url -.IP \[bu] 2 -Env Var: RCLONE_HIDRIVE_TOKEN_URL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --hidrive-scope-role -.PP + +- Config: token_url +- Env Var: RCLONE_HIDRIVE_TOKEN_URL +- Type: string +- Required: false + +#### --hidrive-scope-role + User-level that rclone should use when requesting access from HiDrive. -.PP + Properties: -.IP \[bu] 2 -Config: scope_role -.IP \[bu] 2 -Env Var: RCLONE_HIDRIVE_SCOPE_ROLE -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]user\[dq] -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]user\[dq] -.RS 2 -.IP \[bu] 2 -User-level access to management permissions. -.IP \[bu] 2 -This will be sufficient in most cases. -.RE -.IP \[bu] 2 -\[dq]admin\[dq] -.RS 2 -.IP \[bu] 2 -Extensive access to management permissions. -.RE -.IP \[bu] 2 -\[dq]owner\[dq] -.RS 2 -.IP \[bu] 2 -Full access to management permissions. 
-.RE -.RE -.SS --hidrive-root-prefix -.PP + +- Config: scope_role +- Env Var: RCLONE_HIDRIVE_SCOPE_ROLE +- Type: string +- Default: \[dq]user\[dq] +- Examples: + - \[dq]user\[dq] + - User-level access to management permissions. + - This will be sufficient in most cases. + - \[dq]admin\[dq] + - Extensive access to management permissions. + - \[dq]owner\[dq] + - Full access to management permissions. + +#### --hidrive-root-prefix + The root/parent folder for all paths. -.PP -Fill in to use the specified folder as the parent for all paths given to -the remote. + +Fill in to use the specified folder as the parent for all paths given to the remote. This way rclone can use any folder as its starting point. -.PP + Properties: -.IP \[bu] 2 -Config: root_prefix -.IP \[bu] 2 -Env Var: RCLONE_HIDRIVE_ROOT_PREFIX -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]/\[dq] -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]/\[dq] -.RS 2 -.IP \[bu] 2 -The topmost directory accessible by rclone. -.IP \[bu] 2 -This will be equivalent with \[dq]root\[dq] if rclone uses a regular -HiDrive user account. -.RE -.IP \[bu] 2 -\[dq]root\[dq] -.RS 2 -.IP \[bu] 2 -The topmost directory of the HiDrive user account -.RE -.IP \[bu] 2 -\[dq]\[dq] -.RS 2 -.IP \[bu] 2 -This specifies that there is no root-prefix for your paths. -.IP \[bu] 2 -When using this you will always need to specify paths to this remote -with a valid parent e.g. -\[dq]remote:/path/to/dir\[dq] or \[dq]remote:root/path/to/dir\[dq]. -.RE -.RE -.SS --hidrive-endpoint -.PP + +- Config: root_prefix +- Env Var: RCLONE_HIDRIVE_ROOT_PREFIX +- Type: string +- Default: \[dq]/\[dq] +- Examples: + - \[dq]/\[dq] + - The topmost directory accessible by rclone. + - This will be equivalent with \[dq]root\[dq] if rclone uses a regular HiDrive user account. + - \[dq]root\[dq] + - The topmost directory of the HiDrive user account + - \[dq]\[dq] + - This specifies that there is no root-prefix for your paths. + - When using this you will always need to specify paths to this remote with a valid parent e.g. \[dq]remote:/path/to/dir\[dq] or \[dq]remote:root/path/to/dir\[dq]. + +#### --hidrive-endpoint + Endpoint for the service. -.PP + This is the URL that API-calls will be made to. -.PP + Properties: -.IP \[bu] 2 -Config: endpoint -.IP \[bu] 2 -Env Var: RCLONE_HIDRIVE_ENDPOINT -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]https://api.hidrive.strato.com/2.1\[dq] -.SS --hidrive-disable-fetching-member-count -.PP -Do not fetch number of objects in directories unless it is absolutely -necessary. -.PP -Requests may be faster if the number of objects in subdirectories is not -fetched. -.PP + +- Config: endpoint +- Env Var: RCLONE_HIDRIVE_ENDPOINT +- Type: string +- Default: \[dq]https://api.hidrive.strato.com/2.1\[dq] + +#### --hidrive-disable-fetching-member-count + +Do not fetch number of objects in directories unless it is absolutely necessary. + +Requests may be faster if the number of objects in subdirectories is not fetched. + Properties: -.IP \[bu] 2 -Config: disable_fetching_member_count -.IP \[bu] 2 -Env Var: RCLONE_HIDRIVE_DISABLE_FETCHING_MEMBER_COUNT -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --hidrive-chunk-size -.PP + +- Config: disable_fetching_member_count +- Env Var: RCLONE_HIDRIVE_DISABLE_FETCHING_MEMBER_COUNT +- Type: bool +- Default: false + +#### --hidrive-chunk-size + Chunksize for chunked uploads. -.PP -Any files larger than the configured cutoff (or files of unknown size) -will be uploaded in chunks of this size. 
-.PP
+
The upper limit for this is 2147483647 bytes (about 2.000Gi).
-That is the maximum amount of bytes a single upload-operation will
-support.
-Setting this above the upper limit or to a negative value will cause
-uploads to fail.
-.PP
-Setting this to larger values may increase the upload speed at the cost
-of using more memory.
+That is the maximum number of bytes a single upload-operation will support.
+Setting this above the upper limit or to a negative value will cause uploads to fail.
+
+Setting this to larger values may increase the upload speed at the cost of using more memory.
It can be set to smaller values to save on memory.
-.PP
+
Properties:
-.IP \[bu] 2
-Config: chunk_size
-.IP \[bu] 2
-Env Var: RCLONE_HIDRIVE_CHUNK_SIZE
-.IP \[bu] 2
-Type: SizeSuffix
-.IP \[bu] 2
-Default: 48Mi
-.SS --hidrive-upload-cutoff
-.PP
+
+- Config: chunk_size
+- Env Var: RCLONE_HIDRIVE_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 48Mi
+
+#### --hidrive-upload-cutoff
+
Cutoff/Threshold for chunked uploads.
-.PP
-Any files larger than this will be uploaded in chunks of the configured
-chunksize.
-.PP
+
+Any files larger than this will be uploaded in chunks of the configured chunksize.
+
The upper limit for this is 2147483647 bytes (about 2.000Gi).
-That is the maximum amount of bytes a single upload-operation will
-support.
+That is the maximum number of bytes a single upload-operation will support.
Setting this above the upper limit will cause uploads to fail.
-.PP
+
Properties:
-.IP \[bu] 2
-Config: upload_cutoff
-.IP \[bu] 2
-Env Var: RCLONE_HIDRIVE_UPLOAD_CUTOFF
-.IP \[bu] 2
-Type: SizeSuffix
-.IP \[bu] 2
-Default: 96Mi
-.SS --hidrive-upload-concurrency
-.PP
+
+- Config: upload_cutoff
+- Env Var: RCLONE_HIDRIVE_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 96Mi
+
+#### --hidrive-upload-concurrency
+
Concurrency for chunked uploads.
-.PP
-This is the upper limit for how many transfers for the same file are
-running concurrently.
-Setting this above to a value smaller than 1 will cause uploads to
-deadlock.
-.PP
+
+This is the upper limit for how many transfers for the same file are running concurrently.
+Setting this to a value smaller than 1 will cause uploads to deadlock.
+
If you are uploading small numbers of large files over high-speed links
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.
-.PP
+
Properties:
-.IP \[bu] 2
-Config: upload_concurrency
-.IP \[bu] 2
-Env Var: RCLONE_HIDRIVE_UPLOAD_CONCURRENCY
-.IP \[bu] 2
-Type: int
-.IP \[bu] 2
-Default: 4
-.SS --hidrive-encoding
-.PP
+
+- Config: upload_concurrency
+- Env Var: RCLONE_HIDRIVE_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 4
+
+#### --hidrive-encoding
+
The encoding for the backend.
-.PP
-See the encoding section in the
-overview (https://rclone.org/overview/#encoding) for more info.
-.PP
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+ Properties: -.IP \[bu] 2 -Config: encoding -.IP \[bu] 2 -Env Var: RCLONE_HIDRIVE_ENCODING -.IP \[bu] 2 -Type: MultiEncoder -.IP \[bu] 2 -Default: Slash,Dot -.SS Limitations -.SS Symbolic links -.PP -HiDrive is able to store symbolic links (\f[I]symlinks\f[R]) by design, + +- Config: encoding +- Env Var: RCLONE_HIDRIVE_ENCODING +- Type: MultiEncoder +- Default: Slash,Dot + + + +## Limitations + +### Symbolic links + +HiDrive is able to store symbolic links (*symlinks*) by design, for example, when unpacked from a zip archive. -.PP + There exists no direct mechanism to manage native symlinks in remotes. -As such this implementation has chosen to ignore any native symlinks -present in the remote. -rclone will not be able to access or show any symlinks stored in the -hidrive-remote. +As such this implementation has chosen to ignore any native symlinks present in the remote. +rclone will not be able to access or show any symlinks stored in the hidrive-remote. This means symlinks cannot be individually removed, copied, or moved, except when removing, copying, or moving the parent folder. -.PP -\f[I]This does not affect the \f[CI].rclonelink\f[I]-files that rclone -uses to encode and store symbolic links.\f[R] -.SS Sparse files -.PP + +*This does not affect the \[ga].rclonelink\[ga]-files +that rclone uses to encode and store symbolic links.* + +### Sparse files + It is possible to store sparse files in HiDrive. -.PP -Note that copying a sparse file will expand the holes into null-byte -(0x00) regions that will then consume disk space. -Likewise, when downloading a sparse file, the resulting file will have -null-byte regions in the place of file holes. -.SH HTTP -.PP -The HTTP remote is a read only remote for reading files of a webserver. -The webserver should provide file listings which rclone will read and -turn into a remote. -This has been tested with common webservers such as Apache/Nginx/Caddy -and will likely work with file listings from most web servers. -(If it doesn\[aq]t then please file an issue, or send a pull request!) -.PP -Paths are specified as \f[C]remote:\f[R] or \f[C]remote:path\f[R]. -.PP -The \f[C]remote:\f[R] represents the configured url, and any path -following it will be resolved relative to this url, according to the URL -standard. -This means with remote url \f[C]https://beta.rclone.org/branch\f[R] and -path \f[C]fix\f[R], the resolved URL will be -\f[C]https://beta.rclone.org/branch/fix\f[R], while with path -\f[C]/fix\f[R] the resolved URL will be -\f[C]https://beta.rclone.org/fix\f[R] as the absolute path is resolved -from the root of the domain. -.PP -If the path following the \f[C]remote:\f[R] ends with \f[C]/\f[R] it -will be assumed to point to a directory. -If the path does not end with \f[C]/\f[R], then a HEAD request is sent -and the response used to decide if it it is treated as a file or a -directory (run with \f[C]-vv\f[R] to see details). -When --http-no-head is specified, a path without ending \f[C]/\f[R] is -always assumed to be a file. -If rclone incorrectly assumes the path is a file, the solution is to -specify the path with ending \f[C]/\f[R]. -When you know the path is a directory, ending it with \f[C]/\f[R] is -always better as it avoids the initial HEAD request. -.PP + +Note that copying a sparse file will expand the holes +into null-byte (0x00) regions that will then consume disk space. +Likewise, when downloading a sparse file, +the resulting file will have null-byte regions in the place of file holes. 
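+
+As a quick way to observe this (assuming a sparse file named \[ga]sparse.img\[ga]
+stored in the remote), compare the apparent size of the downloaded copy with
+its actual disk usage:
+
+    rclone copyto remote:sparse.img /tmp/sparse.img
+    du -h --apparent-size /tmp/sparse.img   # logical size, holes included
+    du -h /tmp/sparse.img                   # disk usage, with holes expanded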
+
+# HTTP
+
+The HTTP remote is a read only remote for reading files of a
+webserver. The webserver should provide file listings which rclone
+will read and turn into a remote. This has been tested with common
+webservers such as Apache/Nginx/Caddy and will likely work with file
+listings from most web servers. (If it doesn\[aq]t then please file an
+issue, or send a pull request!)
+
+Paths are specified as \[ga]remote:\[ga] or \[ga]remote:path\[ga].
+
+The \[ga]remote:\[ga] represents the configured [url](#http-url), and any path following
+it will be resolved relative to this url, according to the URL standard. This
+means with remote url \[ga]https://beta.rclone.org/branch\[ga] and path \[ga]fix\[ga], the
+resolved URL will be \[ga]https://beta.rclone.org/branch/fix\[ga], while with path
+\[ga]/fix\[ga] the resolved URL will be \[ga]https://beta.rclone.org/fix\[ga] as the absolute
+path is resolved from the root of the domain.
+
+If the path following the \[ga]remote:\[ga] ends with \[ga]/\[ga] it will be assumed to point
+to a directory. If the path does not end with \[ga]/\[ga], then a HEAD request is sent
+and the response used to decide if it is treated as a file or a directory
+(run with \[ga]-vv\[ga] to see details). When [--http-no-head](#http-no-head) is
+specified, a path without ending \[ga]/\[ga] is always assumed to be a file. If rclone
+incorrectly assumes the path is a file, the solution is to specify the path with
+ending \[ga]/\[ga]. When you know the path is a directory, ending it with \[ga]/\[ga] is always
+better as it avoids the initial HEAD request.
+
To just download a single file it is easier to use
-copyurl (https://rclone.org/commands/rclone_copyurl/).
-.SS Configuration
-.PP
-Here is an example of how to make a remote called \f[C]remote\f[R].
-First run:
-.IP
-.nf
-\f[C]
- rclone config
-\f[R]
-.fi
-.PP
+[copyurl](https://rclone.org/commands/rclone_copyurl/).
+
+## Configuration
+
+Here is an example of how to make a remote called \[ga]remote\[ga]. First
+run:
+
+    rclone config
+
This will guide you through an interactive setup process:
-.IP
-.nf
-\f[C]
+\f[R]
+.fi
+.PP
No remotes found, make a new one?
-n) New remote
-s) Set configuration password
-q) Quit config
-n/s/q> n
-name> remote
-Type of storage to configure.
+n) New remote s) Set configuration password q) Quit config n/s/q> n
+name> remote Type of storage to configure.
+Choose a number from below, or type in your own value [snip] XX / HTTP +\ \[dq]http\[dq] [snip] Storage> http URL of http host to connect to +Choose a number from below, or type in your own value 1 / Connect to +example.com \ \[dq]https://example.com\[dq] url> https://beta.rclone.org +Remote config -------------------- [remote] url = +https://beta.rclone.org -------------------- y) Yes this is OK e) Edit +this remote d) Delete this remote y/e/d> y Current remotes: .PP +Name Type ==== ==== remote http +.IP "e)" 3 +Edit existing remote +.IP "f)" 3 +New remote +.IP "g)" 3 +Delete remote +.IP "h)" 3 +Rename remote +.IP "i)" 3 +Copy remote +.IP "j)" 3 +Set configuration password +.IP "k)" 3 +Quit config e/n/d/r/c/s/q> q +.IP +.nf +\f[C] +This remote is called \[ga]remote\[ga] and can now be used like this + See all the top level directories -.IP -.nf -\f[C] -rclone lsd remote: -\f[R] -.fi -.PP + + rclone lsd remote: + List the contents of a directory -.IP -.nf -\f[C] -rclone ls remote:directory -\f[R] -.fi -.PP -Sync the remote \f[C]directory\f[R] to \f[C]/home/local/directory\f[R], -deleting any excess files. -.IP -.nf -\f[C] -rclone sync --interactive remote:directory /home/local/directory -\f[R] -.fi -.SS Read only -.PP + + rclone ls remote:directory + +Sync the remote \[ga]directory\[ga] to \[ga]/home/local/directory\[ga], deleting any excess files. + + rclone sync --interactive remote:directory /home/local/directory + +### Read only + This remote is read only - you can\[aq]t upload files to an HTTP server. -.SS Modified time -.PP + +### Modified time + Most HTTP servers store time accurate to 1 second. -.SS Checksum -.PP + +### Checksum + No checksums are stored. -.SS Usage without a config file -.PP + +### Usage without a config file + Since the http remote only has one config parameter it is easy to use without a config file: -.IP -.nf -\f[C] -rclone lsd --http-url https://beta.rclone.org :http: -\f[R] -.fi -.PP + + rclone lsd --http-url https://beta.rclone.org :http: + or: -.IP -.nf -\f[C] -rclone lsd :http,url=\[aq]https://beta.rclone.org\[aq]: -\f[R] -.fi -.SS Standard options -.PP + + rclone lsd :http,url=\[aq]https://beta.rclone.org\[aq]: + + +### Standard options + Here are the Standard options specific to http (HTTP). -.SS --http-url -.PP + +#### --http-url + URL of HTTP host to connect to. -.PP -E.g. -\[dq]https://example.com\[dq], or -\[dq]https://user:pass\[at]example.com\[dq] to use a username and -password. -.PP + +E.g. \[dq]https://example.com\[dq], or \[dq]https://user:pass\[at]example.com\[dq] to use a username and password. + Properties: -.IP \[bu] 2 -Config: url -.IP \[bu] 2 -Env Var: RCLONE_HTTP_URL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.SS Advanced options -.PP + +- Config: url +- Env Var: RCLONE_HTTP_URL +- Type: string +- Required: true + +### Advanced options + Here are the Advanced options specific to http (HTTP). -.SS --http-headers -.PP + +#### --http-headers + Set HTTP headers for all transactions. -.PP + Use this to set additional HTTP headers for all transactions. -.PP -The input format is comma separated list of key,value pairs. -Standard CSV encoding (https://godoc.org/encoding/csv) may be used. -.PP -For example, to set a Cookie use \[aq]Cookie,name=value\[aq], or -\[aq]\[dq]Cookie\[dq],\[dq]name=value\[dq]\[aq]. -.PP -You can set multiple headers, e.g. -\[aq]\[dq]Cookie\[dq],\[dq]name=value\[dq],\[dq]Authorization\[dq],\[dq]xxx\[dq]\[aq]. -.PP + +The input format is comma separated list of key,value pairs. 
Standard +[CSV encoding](https://godoc.org/encoding/csv) may be used. + +For example, to set a Cookie use \[aq]Cookie,name=value\[aq], or \[aq]\[dq]Cookie\[dq],\[dq]name=value\[dq]\[aq]. + +You can set multiple headers, e.g. \[aq]\[dq]Cookie\[dq],\[dq]name=value\[dq],\[dq]Authorization\[dq],\[dq]xxx\[dq]\[aq]. + Properties: -.IP \[bu] 2 -Config: headers -.IP \[bu] 2 -Env Var: RCLONE_HTTP_HEADERS -.IP \[bu] 2 -Type: CommaSepList -.IP \[bu] 2 -Default: -.SS --http-no-slash -.PP + +- Config: headers +- Env Var: RCLONE_HTTP_HEADERS +- Type: CommaSepList +- Default: + +#### --http-no-slash + Set this if the site doesn\[aq]t end directories with /. -.PP + Use this if your target website does not use / on the end of directories. -.PP + A / on the end of a path is how rclone normally tells the difference -between files and directories. -If this flag is set, then rclone will treat all files with Content-Type: -text/html as directories and read URLs from them rather than downloading -them. -.PP +between files and directories. If this flag is set, then rclone will +treat all files with Content-Type: text/html as directories and read +URLs from them rather than downloading them. + Note that this may cause rclone to confuse genuine HTML files with directories. -.PP + Properties: -.IP \[bu] 2 -Config: no_slash -.IP \[bu] 2 -Env Var: RCLONE_HTTP_NO_SLASH -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --http-no-head -.PP + +- Config: no_slash +- Env Var: RCLONE_HTTP_NO_SLASH +- Type: bool +- Default: false + +#### --http-no-head + Don\[aq]t use HEAD requests. -.PP + HEAD requests are mainly used to find file sizes in dir listing. If your site is being very slow to load then you can try this option. Normally rclone does a HEAD request for each potential file in a directory listing to: -.IP \[bu] 2 -find its size -.IP \[bu] 2 -check it really exists -.IP \[bu] 2 -check to see if it is a directory -.PP -If you set this option, rclone will not do the HEAD request. -This will mean that directory listings are much quicker, but rclone -won\[aq]t have the times or sizes of any files, and some files that -don\[aq]t exist may be in the listing. -.PP + +- find its size +- check it really exists +- check to see if it is a directory + +If you set this option, rclone will not do the HEAD request. This will mean +that directory listings are much quicker, but rclone won\[aq]t have the times or +sizes of any files, and some files that don\[aq]t exist may be in the listing. + Properties: -.IP \[bu] 2 -Config: no_head -.IP \[bu] 2 -Env Var: RCLONE_HTTP_NO_HEAD -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS Limitations -.PP -\f[C]rclone about\f[R] is not supported by the HTTP backend. -Backends without this capability cannot determine free space for an -rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member -of an rclone union remote. -.PP -See List of backends that do not support rclone -about (https://rclone.org/overview/#optional-features) and rclone -about (https://rclone.org/commands/rclone_about/) -.SH Internet Archive -.PP -The Internet Archive backend utilizes Items on -archive.org (https://archive.org/) -.PP -Refer to IAS3 API -documentation (https://archive.org/services/docs/api/ias3.html) for the -API this backend uses. -.PP -Paths are specified as \f[C]remote:bucket\f[R] (or \f[C]remote:\f[R] for -the \f[C]lsd\f[R] command.) You may put subdirectories in too, e.g. -\f[C]remote:item/path/to/dir\f[R]. 
-.PP
+
Unlike S3, listing all items uploaded by you isn\[aq]t supported.
-.PP
+
Once you have made a remote, you can use it like this:
-.PP
+
Make a new item
-.IP
-.nf
-\f[C]
-rclone mkdir remote:item
-\f[R]
-.fi
-.PP
+
+    rclone mkdir remote:item
+
List the contents of an item
-.IP
-.nf
-\f[C]
-rclone ls remote:item
-\f[R]
-.fi
-.PP
-Sync \f[C]/home/local/directory\f[R] to the remote item, deleting any
-excess files in the item.
-.IP
-.nf
-\f[C]
-rclone sync --interactive /home/local/directory remote:item
-\f[R]
-.fi
-.SS Notes
-.PP
-Because of Internet Archive\[aq]s architecture, it enqueues write
-operations (and extra post-processings) in a per-item queue.
-You can check item\[aq]s queue at
-https://catalogd.archive.org/history/item-name-here .
-Because of that, all uploads/deletes will not show up immediately and
-takes some time to be available.
-The per-item queue is enqueued to an another queue, Item Deriver Queue.
-You can check the status of Item Deriver Queue
-here. (https://catalogd.archive.org/catalog.php?whereami=1) This queue
-has a limit, and it may block you from uploading, or even deleting.
-You should avoid uploading a lot of small files for better behavior.
-.PP
-You can optionally wait for the server\[aq]s processing to finish, by
-setting non-zero value to \f[C]wait_archive\f[R] key.
+
+    rclone ls remote:item
+
+Sync \[ga]/home/local/directory\[ga] to the remote item, deleting any excess
+files in the item.
+
+    rclone sync --interactive /home/local/directory remote:item
+
+## Notes
+Because of Internet Archive\[aq]s architecture, it enqueues write operations (and extra post-processings) in a per-item queue. You can check an item\[aq]s queue at https://catalogd.archive.org/history/item-name-here . Because of that, all uploads/deletes will not show up immediately and take some time to become available.
+The per-item queue is enqueued to another queue, Item Deriver Queue. [You can check the status of Item Deriver Queue here.](https://catalogd.archive.org/catalog.php?whereami=1) This queue has a limit, and it may block you from uploading, or even deleting. You should avoid uploading a lot of small files for better behavior.
+
+You can optionally wait for the server\[aq]s processing to finish, by setting a non-zero value for the \[ga]wait_archive\[ga] key.
By making it wait, rclone can do normal file comparison.
+Make sure to set a large enough value (e.g. \[ga]30m0s\[ga] for smaller files) as it can take a long time depending on the server\[aq]s queue.
+
+## About metadata
+This backend supports setting, updating and reading metadata of each file.
The metadata will appear as file metadata on Internet Archive.
However, some fields are reserved by both Internet Archive and rclone.
-.PP
-The following are reserved by Internet Archive: - \f[C]name\f[R] -
-\f[C]source\f[R] - \f[C]size\f[R] - \f[C]md5\f[R] - \f[C]crc32\f[R] -
-\f[C]sha1\f[R] - \f[C]format\f[R] - \f[C]old_version\f[R] -
-\f[C]viruscheck\f[R] - \f[C]summation\f[R]
-.PP
+
+The following are reserved by Internet Archive:
+- \[ga]name\[ga]
+- \[ga]source\[ga]
+- \[ga]size\[ga]
+- \[ga]md5\[ga]
+- \[ga]crc32\[ga]
+- \[ga]sha1\[ga]
+- \[ga]format\[ga]
+- \[ga]old_version\[ga]
+- \[ga]viruscheck\[ga]
+- \[ga]summation\[ga]
+
Trying to set values for these keys is ignored with a warning.
-Only setting \f[C]mtime\f[R] is an exception.
-Doing so make it the identical behavior as setting ModTime.
-.PP
-rclone reserves all the keys starting with \f[C]rclone-\f[R].
-Setting value for these keys will give you warnings, but values are set
-according to request.
-.PP
+Only setting \[ga]mtime\[ga] is an exception. Doing so makes it behave identically to setting ModTime.
+
+rclone reserves all the keys starting with \[ga]rclone-\[ga]. Setting a value for these keys will give you warnings, but the values are set as requested.
+
If there are multiple values for a key, only the first one is returned.
This is a limitation of rclone, which supports only one value per key.
It can be triggered when you do a server-side copy.
-.PP
-Reading metadata will also provide custom (non-standard nor reserved)
-ones.
-.SS Filtering auto generated files
-.PP
-The Internet Archive automatically creates metadata files after upload.
-These can cause problems when doing an \f[C]rclone sync\f[R] as rclone
-will try, and fail, to delete them.
-These metadata files are not changeable, as they are created by the
-Internet Archive automatically.
-.PP
-These auto-created files can be excluded from the sync using metadata
-filtering (https://rclone.org/filtering/#metadata).
-.IP
-.nf
-\f[C]
-rclone sync ... --metadata-exclude \[dq]source=metadata\[dq] --metadata-exclude \[dq]format=Metadata\[dq]
-\f[R]
-.fi
-.PP
+
+Reading metadata will also provide custom (neither standard nor reserved) ones.
+
+## Filtering auto generated files
+
+The Internet Archive automatically creates metadata files after
+upload. These can cause problems when doing an \[ga]rclone sync\[ga] as rclone
+will try, and fail, to delete them. These metadata files are not
+changeable, as they are created by the Internet Archive automatically.
+
+These auto-created files can be excluded from the sync using [metadata
+filtering](https://rclone.org/filtering/#metadata).
+
+    rclone sync ... --metadata-exclude \[dq]source=metadata\[dq] --metadata-exclude \[dq]format=Metadata\[dq]
+
Which excludes from the sync any files which have the
+\[ga]source=metadata\[ga] or \[ga]format=Metadata\[ga] flags which are added to
+Internet Archive auto-created files.
+
+## Configuration
+
Here is an example of making an internetarchive configuration.
+Most of this applies to the other providers as well; any differences are described [below](#providers).
-.PP +Most applies to the other providers as well, any differences are described [below](#providers). + First run -.IP -.nf -\f[C] -rclone config + + rclone config + +This will guide you through an interactive setup process. \f[R] .fi .PP -This will guide you through an interactive setup process. -.IP -.nf -\f[C] No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -name> remote -Option Storage. +n) New remote s) Set configuration password q) Quit config n/s/q> n +name> remote Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. -XX / InternetArchive Items - \[rs] (internetarchive) -Storage> internetarchive +XX / InternetArchive Items \ (internetarchive) Storage> internetarchive Option access_key_id. IAS3 Access Key. Leave blank for anonymous access. -You can find one here: https://archive.org/account/s3.php -Enter a value. Press Enter to leave empty. -access_key_id> XXXX -Option secret_access_key. +You can find one here: https://archive.org/account/s3.php Enter a value. +Press Enter to leave empty. +access_key_id> XXXX Option secret_access_key. IAS3 Secret Key (password). Leave blank for anonymous access. -Enter a value. Press Enter to leave empty. -secret_access_key> XXXX -Edit advanced config? -y) Yes -n) No (default) -y/n> y -Option endpoint. +Enter a value. +Press Enter to leave empty. +secret_access_key> XXXX Edit advanced config? +y) Yes n) No (default) y/n> y Option endpoint. IAS3 Endpoint. Leave blank for default value. -Enter a string value. Press Enter for the default (https://s3.us.archive.org). -endpoint> -Option front_endpoint. +Enter a string value. +Press Enter for the default (https://s3.us.archive.org). +endpoint> Option front_endpoint. Host of InternetArchive Frontend. Leave blank for default value. -Enter a string value. Press Enter for the default (https://archive.org). -front_endpoint> -Option disable_checksum. +Enter a string value. +Press Enter for the default (https://archive.org). +front_endpoint> Option disable_checksum. Don\[aq]t store MD5 checksum with object metadata. Normally rclone will calculate the MD5 checksum of the input before -uploading it so it can ask the server to check the object against checksum. -This is great for data integrity checking but can cause long delays for -large files to start uploading. -Enter a boolean value (true or false). Press Enter for the default (true). -disable_checksum> true -Option encoding. -The encoding for the backend. -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. -Enter a encoder.MultiEncoder value. Press Enter for the default (Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot). -encoding> -Edit advanced config? -y) Yes -n) No (default) -y/n> n --------------------- -[remote] -type = internetarchive -access_key_id = XXXX -secret_access_key = XXXX --------------------- -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y -\f[R] -.fi -.SS Standard options -.PP -Here are the Standard options specific to internetarchive (Internet -Archive). -.SS --internetarchive-access-key-id -.PP -IAS3 Access Key. -.PP -Leave blank for anonymous access. -You can find one here: https://archive.org/account/s3.php -.PP -Properties: -.IP \[bu] 2 -Config: access_key_id -.IP \[bu] 2 -Env Var: RCLONE_INTERNETARCHIVE_ACCESS_KEY_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --internetarchive-secret-access-key -.PP -IAS3 Secret Key (password). 
-.PP -Leave blank for anonymous access. -.PP -Properties: -.IP \[bu] 2 -Config: secret_access_key -.IP \[bu] 2 -Env Var: RCLONE_INTERNETARCHIVE_SECRET_ACCESS_KEY -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS Advanced options -.PP -Here are the Advanced options specific to internetarchive (Internet -Archive). -.SS --internetarchive-endpoint -.PP -IAS3 Endpoint. -.PP -Leave blank for default value. -.PP -Properties: -.IP \[bu] 2 -Config: endpoint -.IP \[bu] 2 -Env Var: RCLONE_INTERNETARCHIVE_ENDPOINT -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]https://s3.us.archive.org\[dq] -.SS --internetarchive-front-endpoint -.PP -Host of InternetArchive Frontend. -.PP -Leave blank for default value. -.PP -Properties: -.IP \[bu] 2 -Config: front_endpoint -.IP \[bu] 2 -Env Var: RCLONE_INTERNETARCHIVE_FRONT_ENDPOINT -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]https://archive.org\[dq] -.SS --internetarchive-disable-checksum -.PP -Don\[aq]t ask the server to test against MD5 checksum calculated by -rclone. -Normally rclone will calculate the MD5 checksum of the input before uploading it so it can ask the server to check the object against checksum. This is great for data integrity checking but can cause long delays for large files to start uploading. -.PP -Properties: -.IP \[bu] 2 -Config: disable_checksum -.IP \[bu] 2 -Env Var: RCLONE_INTERNETARCHIVE_DISABLE_CHECKSUM -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: true -.SS --internetarchive-wait-archive -.PP -Timeout for waiting the server\[aq]s processing tasks (specifically -archive and book_op) to finish. -Only enable if you need to be guaranteed to be reflected after write -operations. -0 to disable waiting. -No errors to be thrown in case of timeout. -.PP -Properties: -.IP \[bu] 2 -Config: wait_archive -.IP \[bu] 2 -Env Var: RCLONE_INTERNETARCHIVE_WAIT_ARCHIVE -.IP \[bu] 2 -Type: Duration -.IP \[bu] 2 -Default: 0s -.SS --internetarchive-encoding -.PP +Enter a boolean value (true or false). +Press Enter for the default (true). +disable_checksum> true Option encoding. The encoding for the backend. -.PP See the encoding section in the overview (https://rclone.org/overview/#encoding) for more info. -.PP +Enter a encoder.MultiEncoder value. +Press Enter for the default +(Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot). +encoding> Edit advanced config? +y) Yes n) No (default) y/n> n -------------------- [remote] type = +internetarchive access_key_id = XXXX secret_access_key = XXXX +-------------------- y) Yes this is OK (default) e) Edit this remote d) +Delete this remote y/e/d> y +.IP +.nf +\f[C] + +### Standard options + +Here are the Standard options specific to internetarchive (Internet Archive). + +#### --internetarchive-access-key-id + +IAS3 Access Key. + +Leave blank for anonymous access. +You can find one here: https://archive.org/account/s3.php + Properties: -.IP \[bu] 2 -Config: encoding -.IP \[bu] 2 -Env Var: RCLONE_INTERNETARCHIVE_ENCODING -.IP \[bu] 2 -Type: MultiEncoder -.IP \[bu] 2 -Default: Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot -.SS Metadata -.PP + +- Config: access_key_id +- Env Var: RCLONE_INTERNETARCHIVE_ACCESS_KEY_ID +- Type: string +- Required: false + +#### --internetarchive-secret-access-key + +IAS3 Secret Key (password). + +Leave blank for anonymous access. 
+
+Properties:
+
+- Config: secret_access_key
+- Env Var: RCLONE_INTERNETARCHIVE_SECRET_ACCESS_KEY
+- Type: string
+- Required: false
+
+### Advanced options
+
+Here are the Advanced options specific to internetarchive (Internet Archive).
+
+#### --internetarchive-endpoint
+
+IAS3 Endpoint.
+
+Leave blank for default value.
+
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_INTERNETARCHIVE_ENDPOINT
+- Type: string
+- Default: \[dq]https://s3.us.archive.org\[dq]
+
+#### --internetarchive-front-endpoint
+
+Host of InternetArchive Frontend.
+
+Leave blank for default value.
+
+Properties:
+
+- Config: front_endpoint
+- Env Var: RCLONE_INTERNETARCHIVE_FRONT_ENDPOINT
+- Type: string
+- Default: \[dq]https://archive.org\[dq]
+
+#### --internetarchive-disable-checksum
+
+Don\[aq]t ask the server to test against the MD5 checksum calculated by rclone.
+Normally rclone will calculate the MD5 checksum of the input before
+uploading it so it can ask the server to check the object against the checksum.
+This is great for data integrity checking but can cause long delays for
+large files to start uploading.
+
+Properties:
+
+- Config: disable_checksum
+- Env Var: RCLONE_INTERNETARCHIVE_DISABLE_CHECKSUM
+- Type: bool
+- Default: true
+
+#### --internetarchive-wait-archive
+
+Timeout for waiting for the server\[aq]s processing tasks (specifically archive and book_op) to finish.
+Only enable this if you need changes to be guaranteed to be reflected after write operations.
+Set to 0 to disable waiting. No errors are thrown in case of timeout.
+
+Properties:
+
+- Config: wait_archive
+- Env Var: RCLONE_INTERNETARCHIVE_WAIT_ARCHIVE
+- Type: Duration
+- Default: 0s
+
+#### --internetarchive-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_INTERNETARCHIVE_ENCODING
+- Type: MultiEncoder
+- Default: Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot
+
+### Metadata
+
+Metadata fields provided by Internet Archive.
+If there are multiple values for a key, only the first one is returned.
+This is a limitation of Rclone, which supports only one value per key.
+
+The owner is able to add custom keys. The metadata feature grabs all the keys, including custom ones.
+
+Here are the possible system metadata items for the internetarchive backend.
-T{ -Name -T}@T{ -Help -T}@T{ -Type -T}@T{ -Example -T}@T{ -Read Only -T} -_ -T{ -crc32 -T}@T{ -CRC32 calculated by Internet Archive -T}@T{ -string -T}@T{ -01234567 -T}@T{ -\f[B]Y\f[R] -T} -T{ -format -T}@T{ -Name of format identified by Internet Archive -T}@T{ -string -T}@T{ -Comma-Separated Values -T}@T{ -\f[B]Y\f[R] -T} -T{ -md5 -T}@T{ -MD5 hash calculated by Internet Archive -T}@T{ -string -T}@T{ -01234567012345670123456701234567 -T}@T{ -\f[B]Y\f[R] -T} -T{ -mtime -T}@T{ -Time of last modification, managed by Rclone -T}@T{ -RFC 3339 -T}@T{ -2006-01-02T15:04:05.999999999Z -T}@T{ -\f[B]Y\f[R] -T} -T{ -name -T}@T{ -Full file path, without the bucket part -T}@T{ -filename -T}@T{ -backend/internetarchive/internetarchive.go -T}@T{ -\f[B]Y\f[R] -T} -T{ -old_version -T}@T{ -Whether the file was replaced and moved by keep-old-version flag -T}@T{ -boolean -T}@T{ -true -T}@T{ -\f[B]Y\f[R] -T} -T{ -rclone-ia-mtime -T}@T{ -Time of last modification, managed by Internet Archive -T}@T{ -RFC 3339 -T}@T{ -2006-01-02T15:04:05.999999999Z -T}@T{ -N -T} -T{ -rclone-mtime -T}@T{ -Time of last modification, managed by Rclone -T}@T{ -RFC 3339 -T}@T{ -2006-01-02T15:04:05.999999999Z -T}@T{ -N -T} -T{ -rclone-update-track -T}@T{ -Random value used by Rclone for tracking changes inside Internet Archive -T}@T{ -string -T}@T{ -aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa -T}@T{ -N -T} -T{ -sha1 -T}@T{ -SHA1 hash calculated by Internet Archive -T}@T{ -string -T}@T{ -0123456701234567012345670123456701234567 -T}@T{ -\f[B]Y\f[R] -T} -T{ -size -T}@T{ -File size in bytes -T}@T{ -decimal number -T}@T{ -123456 -T}@T{ -\f[B]Y\f[R] -T} -T{ -source -T}@T{ -The source of the file -T}@T{ -string -T}@T{ -original -T}@T{ -\f[B]Y\f[R] -T} -T{ -summation -T}@T{ -Check https://forum.rclone.org/t/31922 for how it is used -T}@T{ -string -T}@T{ -md5 -T}@T{ -\f[B]Y\f[R] -T} -T{ -viruscheck -T}@T{ -The last time viruscheck process was run for the file (?) -T}@T{ -unixtime -T}@T{ -1654191352 -T}@T{ -\f[B]Y\f[R] -T} -.TE -.PP -See the metadata (https://rclone.org/docs/#metadata) docs for more info. -.SH Jottacloud -.PP -Jottacloud is a cloud storage service provider from a Norwegian company, -using its own datacenters in Norway. -In addition to the official service at -jottacloud.com (https://www.jottacloud.com/), it also provides -white-label solutions to different companies, such as: * Telia * Telia -Cloud (cloud.telia.se) * Telia Sky (sky.telia.no) * Tele2 * Tele2 Cloud -(mittcloud.tele2.se) * Elkj\[/o]p (with subsidiaries): * Elkj\[/o]p -Cloud (cloud.elkjop.no) * Elgiganten Sweden (cloud.elgiganten.se) * -Elgiganten Denmark (cloud.elgiganten.dk) * Giganti Cloud -(cloud.gigantti.fi) * ELKO Cloud (cloud.elko.is) -.PP -Most of the white-label versions are supported by this backend, although -may require different authentication setup - described below. -.PP -Paths are specified as \f[C]remote:path\f[R] -.PP -Paths may be as deep as required, e.g. -\f[C]remote:directory/subdirectory\f[R]. -.SS Authentication types -.PP -Some of the whitelabel versions uses a different authentication method -than the official service, and you have to choose the correct one when -setting up the remote. -.SS Standard authentication -.PP -The standard authentication method used by the official service -(jottacloud.com), as well as some of the whitelabel services, requires -you to generate a single-use personal login token from the account -security settings in the service\[aq]s web interface. 
-Log in to your account, go to \[dq]Settings\[dq] and then -\[dq]Security\[dq], or use the direct link presented to you by rclone -when configuring the remote: . -Scroll down to the section \[dq]Personal login token\[dq], and click the -\[dq]Generate\[dq] button. -Note that if you are using a whitelabel service you probably can\[aq]t -use the direct link, you need to find the same page in their dedicated -web interface, and also it may be in a different location than described -above. -.PP -To access your account from multiple instances of rclone, you need to -configure each of them with a separate personal login token. -E.g. -you create a Jottacloud remote with rclone in one location, and copy the -configuration file to a second location where you also want to run -rclone and access the same remote. -Then you need to replace the token for one of them, using the config -reconnect (https://rclone.org/commands/rclone_config_reconnect/) -command, which requires you to generate a new personal login token and -supply as input. -If you do not do this, the token may easily end up being invalidated, -resulting in both instances failing with an error message something -along the lines of: -.IP -.nf -\f[C] -oauth2: cannot fetch token: 400 Bad Request -Response: {\[dq]error\[dq]:\[dq]invalid_grant\[dq],\[dq]error_description\[dq]:\[dq]Stale token\[dq]} -\f[R] -.fi -.PP -When this happens, you need to replace the token as described above to -be able to use your remote again. -.PP -All personal login tokens you have taken into use will be listed in the -web interface under \[dq]My logged in devices\[dq], and from the right -side of that list you can click the \[dq]X\[dq] button to revoke -individual tokens. -.SS Legacy authentication -.PP -If you are using one of the whitelabel versions (e.g. -from Elkj\[/o]p) you may not have the option to generate a CLI token. -In this case you\[aq]ll have to use the legacy authentication. -To do this select yes when the setup asks for legacy authentication and -enter your username and password. + +Owner is able to add custom keys. Metadata feature grabs all the keys including them. + +Here are the possible system metadata items for the internetarchive backend. 
+
+| Name | Help | Type | Example | Read Only |
+|------|------|------|---------|-----------|
+| crc32 | CRC32 calculated by Internet Archive | string | 01234567 | **Y** |
+| format | Name of format identified by Internet Archive | string | Comma-Separated Values | **Y** |
+| md5 | MD5 hash calculated by Internet Archive | string | 01234567012345670123456701234567 | **Y** |
+| mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | **Y** |
+| name | Full file path, without the bucket part | filename | backend/internetarchive/internetarchive.go | **Y** |
+| old_version | Whether the file was replaced and moved by keep-old-version flag | boolean | true | **Y** |
+| rclone-ia-mtime | Time of last modification, managed by Internet Archive | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
+| rclone-mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
+| rclone-update-track | Random value used by Rclone for tracking changes inside Internet Archive | string | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa | N |
+| sha1 | SHA1 hash calculated by Internet Archive | string | 0123456701234567012345670123456701234567 | **Y** |
+| size | File size in bytes | decimal number | 123456 | **Y** |
+| source | The source of the file | string | original | **Y** |
+| summation | Check https://forum.rclone.org/t/31922 for how it is used | string | md5 | **Y** |
+| viruscheck | The last time viruscheck process was run for the file (?) | unixtime | 1654191352 | **Y** |
+
+See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
+
+
+
+# Jottacloud
+
+Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters
+in Norway. In addition to the official service at [jottacloud.com](https://www.jottacloud.com/),
+it also provides white-label solutions to different companies, such as:
+* Telia
+  * Telia Cloud (cloud.telia.se)
+  * Telia Sky (sky.telia.no)
+* Tele2
+  * Tele2 Cloud (mittcloud.tele2.se)
+* Onlime
+  * Onlime Cloud Storage (onlime.dk)
+* Elkj\[/o]p (with subsidiaries):
+  * Elkj\[/o]p Cloud (cloud.elkjop.no)
+  * Elgiganten Sweden (cloud.elgiganten.se)
+  * Elgiganten Denmark (cloud.elgiganten.dk)
+  * Giganti Cloud (cloud.gigantti.fi)
+  * ELKO Cloud (cloud.elko.is)
+
+Most of the white-label versions are supported by this backend, although they may require a different
+authentication setup - described below.
+
+Paths are specified as \[ga]remote:path\[ga].
+
+Paths may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga].
+
+## Authentication types
+
+Some of the whitelabel versions use a different authentication method than the official service,
+and you have to choose the correct one when setting up the remote.
+
+### Standard authentication
+
+The standard authentication method used by the official service (jottacloud.com), as well as
+some of the whitelabel services, requires you to generate a single-use personal login token
+from the account security settings in the service\[aq]s web interface. Log in to your account,
+go to \[dq]Settings\[dq] and then \[dq]Security\[dq], or use the direct link presented to you by rclone when
+configuring the remote: . Scroll down to the section
+\[dq]Personal login token\[dq], and click the \[dq]Generate\[dq] button.
Note that if you are using a
+whitelabel service you probably can\[aq]t use the direct link, you need to find the same page in
+their dedicated web interface, and also it may be in a different location than described above.
+
+To access your account from multiple instances of rclone, you need to configure each of them
+with a separate personal login token. E.g. you create a Jottacloud remote with rclone in one
+location, and copy the configuration file to a second location where you also want to run
+rclone and access the same remote. Then you need to replace the token for one of them, using
+the [config reconnect](https://rclone.org/commands/rclone_config_reconnect/) command, which
+requires you to generate a new personal login token and supply it as input. If you do not
+do this, the token may easily end up being invalidated, resulting in both instances failing
+with an error message something along the lines of:
+
+    oauth2: cannot fetch token: 400 Bad Request
+    Response: {\[dq]error\[dq]:\[dq]invalid_grant\[dq],\[dq]error_description\[dq]:\[dq]Stale token\[dq]}
+
+When this happens, you need to replace the token as described above to be able to use your
+remote again.
+
+All personal login tokens you have taken into use will be listed in the web interface under
+\[dq]My logged in devices\[dq], and from the right side of that list you can click the \[dq]X\[dq] button to
+revoke individual tokens.
+
+### Legacy authentication
+
+If you are using one of the whitelabel versions (e.g. from Elkj\[/o]p) you may not have the option
+to generate a CLI token. In this case you\[aq]ll have to use the legacy authentication. To do this select
+yes when the setup asks for legacy authentication and enter your username and password.
The rest of the setup is identical to the default setup.
+
+### Telia Cloud authentication
+
+Similar to other whitelabel versions, Telia Cloud doesn\[aq]t offer the option of creating a CLI token, and
+additionally uses a separate authentication flow where the username is generated internally. To set up
+rclone to use Telia Cloud, choose Telia Cloud authentication in the setup. The rest of the setup is
+identical to the default setup.
+
+### Tele2 Cloud authentication
+
+As the Tele2-Com Hem merger was completed, this authentication can be used by former Com Hem Cloud and
+Tele2 Cloud customers, as no support for creating a CLI token exists; it additionally uses a separate
+authentication flow where the username is generated internally.
To set up rclone to use Tele2 Cloud,
+choose Tele2 Cloud authentication in the setup. The rest of the setup is identical to the default setup.
+
+### Onlime Cloud Storage authentication
+
+Onlime has sold access to Jottacloud proper, while providing localized support to Danish customers, but
+has recently set up its own hosting, transferring its customers from Jottacloud servers to its
+own ones.
+
+This, of course, necessitates using their servers for authentication, but otherwise functionality and
+architecture seem equivalent to Jottacloud.
+
+To set up rclone to use Onlime Cloud Storage, choose Onlime Cloud authentication in the setup. The rest
+of the setup is identical to the default setup.
+
+## Configuration
+
+Here is an example of how to make a remote called \[ga]remote\[ga] with the default setup. First run:
+
+    rclone config
+
+This will guide you through an interactive setup process:
+\f[R]
+.fi
+.PP
No remotes found, make a new one?
n) New remote s) Set configuration password q) Quit config n/s/q> n
name> remote Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip] XX / Jottacloud \ (jottacloud) [snip] Storage> jottacloud Edit
advanced config?
y) Yes n) No (default) y/n> n Option config_type.
Select authentication type.
Choose a number from below, or type in an existing string value.
Press Enter for the default (standard).
/ Standard authentication.
1 | Use this if you\[aq]re a normal Jottacloud user.
\ (standard) / Legacy authentication.
2 | This is only required for certain whitelabel versions of Jottacloud
and not recommended for normal users.
\ (legacy) / Telia Cloud authentication.
3 | Use this if you are using Telia Cloud.
\ (telia) / Tele2 Cloud authentication.
4 | Use this if you are using Tele2 Cloud.
\ (tele2) / Onlime Cloud authentication.
5 | Use this if you are using Onlime Cloud.
\ (onlime) config_type> 1 Personal login token.
Generate here: https://www.jottacloud.com/web/secure Login Token> Use a
non-standard device/mountpoint?
+Choosing no, the default, will let you access the storage used for the +archive section of the official Jottacloud client. +If you instead want to access the sync or the backup section, for +example, you must choose yes. +y) Yes n) No (default) y/n> y Option config_device. +The device to use. +In standard setup the built-in Jotta device is used, which contains +predefined mountpoints for archive, sync etc. +All other devices are treated as backup devices by the official +Jottacloud client. +You may create a new by entering a unique name. Choose a number from below, or type in your own string value. Press Enter for the default (DESKTOP-3H31129). - 1 > DESKTOP-3H31129 - 2 > Jotta -config_device> 2 -Option config_mountpoint. +1 > DESKTOP-3H31129 2 > Jotta config_device> 2 Option config_mountpoint. The mountpoint to use for the built-in device Jotta. -The standard setup is to use the Archive mountpoint. Most other mountpoints -have very limited support in rclone and should generally be avoided. +The standard setup is to use the Archive mountpoint. +Most other mountpoints have very limited support in rclone and should +generally be avoided. Choose a number from below, or type in an existing string value. Press Enter for the default (Archive). - 1 > Archive - 2 > Shared - 3 > Sync -config_mountpoint> 1 --------------------- -[remote] -type = jottacloud -configVersion = 1 -client_id = jottacli -client_secret = -tokenURL = https://id.jottacloud.com/auth/realms/jottacloud/protocol/openid-connect/token -token = {........} -username = 2940e57271a93d987d6f8a21 -device = Jotta -mountpoint = Archive --------------------- -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y -\f[R] -.fi -.PP -Once configured you can then use \f[C]rclone\f[R] like this, -.PP +1 > Archive 2 > Shared 3 > Sync config_mountpoint> 1 +-------------------- [remote] type = jottacloud configVersion = 1 +client_id = jottacli client_secret = tokenURL = +https://id.jottacloud.com/auth/realms/jottacloud/protocol/openid-connect/token +token = {........} username = 2940e57271a93d987d6f8a21 device = Jotta +mountpoint = Archive -------------------- y) Yes this is OK (default) e) +Edit this remote d) Delete this remote y/e/d> y +.IP +.nf +\f[C] +Once configured you can then use \[ga]rclone\[ga] like this, + List directories in top level of your Jottacloud -.IP -.nf -\f[C] -rclone lsd remote: -\f[R] -.fi -.PP + + rclone lsd remote: + List all the files in your Jottacloud -.IP -.nf -\f[C] -rclone ls remote: -\f[R] -.fi -.PP + + rclone ls remote: + To copy a local directory to an Jottacloud directory called backup -.IP -.nf -\f[C] -rclone copy /home/source remote:backup -\f[R] -.fi -.SS Devices and Mountpoints -.PP -The official Jottacloud client registers a device for each computer you -install it on, and shows them in the backup section of the user -interface. -For each folder you select for backup it will create a mountpoint within -this device. -A built-in device called Jotta is special, and contains mountpoints -Archive, Sync and some others, used for corresponding features in -official clients. -.PP -With rclone you\[aq]ll want to use the standard Jotta/Archive -device/mountpoint in most cases. -However, you may for example want to access files from the sync or -backup functionality provided by the official clients, and rclone -therefore provides the option to select other devices and mountpoints -during config. -.PP -You are allowed to create new devices and mountpoints. 
-All devices except the built-in Jotta device are treated as backup -devices by official Jottacloud clients, and the mountpoints on them are -individual backup sets. -.PP -With the built-in Jotta device, only existing, built-in, mountpoints can -be selected. -In addition to the mentioned Archive and Sync, it may contain several -other mountpoints such as: Latest, Links, Shared and Trash. -All of these are special mountpoints with a different internal -representation than the \[dq]regular\[dq] mountpoints. -Rclone will only to a very limited degree support them. -Generally you should avoid these, unless you know what you are doing. -.SS --fast-list -.PP -This remote supports \f[C]--fast-list\f[R] which allows you to use fewer -transactions in exchange for more memory. -See the rclone docs (https://rclone.org/docs/#fast-list) for more -details. -.PP -Note that the implementation in Jottacloud always uses only a single API -request to get the entire list, so for large folders this could lead to -long wait time before the first results are shown. -.PP -Note also that with rclone version 1.58 and newer information about MIME -types (https://rclone.org/overview/#mime-type) are not available when -using \f[C]--fast-list\f[R]. -.SS Modified time and hashes -.PP + + rclone copy /home/source remote:backup + +### Devices and Mountpoints + +The official Jottacloud client registers a device for each computer you install +it on, and shows them in the backup section of the user interface. For each +folder you select for backup it will create a mountpoint within this device. +A built-in device called Jotta is special, and contains mountpoints Archive, +Sync and some others, used for corresponding features in official clients. + +With rclone you\[aq]ll want to use the standard Jotta/Archive device/mountpoint in +most cases. However, you may for example want to access files from the sync or +backup functionality provided by the official clients, and rclone therefore +provides the option to select other devices and mountpoints during config. + +You are allowed to create new devices and mountpoints. All devices except the +built-in Jotta device are treated as backup devices by official Jottacloud +clients, and the mountpoints on them are individual backup sets. + +With the built-in Jotta device, only existing, built-in, mountpoints can be +selected. In addition to the mentioned Archive and Sync, it may contain +several other mountpoints such as: Latest, Links, Shared and Trash. All of +these are special mountpoints with a different internal representation than +the \[dq]regular\[dq] mountpoints. Rclone will only to a very limited degree support +them. Generally you should avoid these, unless you know what you are doing. + +### --fast-list + +This remote supports \[ga]--fast-list\[ga] which allows you to use fewer +transactions in exchange for more memory. See the [rclone +docs](https://rclone.org/docs/#fast-list) for more details. + +Note that the implementation in Jottacloud always uses only a single +API request to get the entire list, so for large folders this could +lead to long wait time before the first results are shown. + +Note also that with rclone version 1.58 and newer information about +[MIME types](https://rclone.org/overview/#mime-type) are not available when using \[ga]--fast-list\[ga]. + +### Modified time and hashes + Jottacloud allows modification times to be set on objects accurate to 1 -second. -These will be used to detect whether objects need syncing or not. 
-.PP -Jottacloud supports MD5 type hashes, so you can use the -\f[C]--checksum\f[R] flag. -.PP +second. These will be used to detect whether objects need syncing or +not. + +Jottacloud supports MD5 type hashes, so you can use the \[ga]--checksum\[ga] +flag. + Note that Jottacloud requires the MD5 hash before upload so if the source does not have an MD5 checksum then the file will be cached temporarily on disk (in location given by ---temp-dir (https://rclone.org/docs/#temp-dir-dir)) before it is -uploaded. +[--temp-dir](https://rclone.org/docs/#temp-dir-dir)) before it is uploaded. Small files will be cached in memory - see the ---jottacloud-md5-memory-limit flag. +[--jottacloud-md5-memory-limit](#jottacloud-md5-memory-limit) flag. When uploading from local disk the source checksum is always available, -so this does not apply. -Starting with rclone version 1.52 the same is true for encrypted remotes -(in older versions the crypt backend would not calculate hashes for -uploads from local disk, so the Jottacloud backend had to do it as -described above). -.SS Restricted filename characters -.PP -In addition to the default restricted characters -set (https://rclone.org/overview/#restricted-characters) the following -characters are also replaced: -.PP -.TS -tab(@); -l c c. -T{ -Character -T}@T{ -Value -T}@T{ -Replacement -T} -_ -T{ -\[dq] -T}@T{ -0x22 -T}@T{ -\[uFF02] -T} -T{ -* -T}@T{ -0x2A -T}@T{ -\[uFF0A] -T} -T{ -: -T}@T{ -0x3A -T}@T{ -\[uFF1A] -T} -T{ -< -T}@T{ -0x3C -T}@T{ -\[uFF1C] -T} -T{ -> -T}@T{ -0x3E -T}@T{ -\[uFF1E] -T} -T{ -? -T}@T{ -0x3F -T}@T{ -\[uFF1F] -T} -T{ -| -T}@T{ -0x7C -T}@T{ -\[uFF5C] -T} -.TE -.PP -Invalid UTF-8 bytes will also be -replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t -be used in XML strings. -.SS Deleting files -.PP -By default, rclone will send all files to the trash when deleting files. -They will be permanently deleted automatically after 30 days. -You may bypass the trash and permanently delete files immediately by -using the --jottacloud-hard-delete flag, or set the equivalent -environment variable. -Emptying the trash is supported by the -cleanup (https://rclone.org/commands/rclone_cleanup/) command. -.SS Versions -.PP -Jottacloud supports file versioning. -When rclone uploads a new version of a file it creates a new version of -it. -Currently rclone only supports retrieving the current version but older -versions can be accessed via the Jottacloud Website. -.PP -Versioning can be disabled by \f[C]--jottacloud-no-versions\f[R] option. -This is achieved by deleting the remote file prior to uploading a new -version. -If the upload the fails no version of the file will be available in the -remote. -.SS Quota information -.PP -To view your current quota you can use the -\f[C]rclone about remote:\f[R] command which will display your usage -limit (unless it is unlimited) and the current usage. -.SS Advanced options -.PP +so this does not apply. Starting with rclone version 1.52 the same is +true for encrypted remotes (in older versions the crypt backend would not +calculate hashes for uploads from local disk, so the Jottacloud +backend had to do it as described above). 
+ +### Restricted filename characters + +In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) +the following characters are also replaced: + +| Character | Value | Replacement | +| --------- |:-----:|:-----------:| +| \[dq] | 0x22 | \[uFF02] | +| * | 0x2A | \[uFF0A] | +| : | 0x3A | \[uFF1A] | +| < | 0x3C | \[uFF1C] | +| > | 0x3E | \[uFF1E] | +| ? | 0x3F | \[uFF1F] | +| \[rs]| | 0x7C | \[uFF5C] | + +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), +as they can\[aq]t be used in XML strings. + +### Deleting files + +By default, rclone will send all files to the trash when deleting files. They will be permanently +deleted automatically after 30 days. You may bypass the trash and permanently delete files immediately +by using the [--jottacloud-hard-delete](#jottacloud-hard-delete) flag, or set the equivalent environment variable. +Emptying the trash is supported by the [cleanup](https://rclone.org/commands/rclone_cleanup/) command. + +### Versions + +Jottacloud supports file versioning. When rclone uploads a new version of a file it creates a new version of it. +Currently rclone only supports retrieving the current version but older versions can be accessed via the Jottacloud Website. + +Versioning can be disabled by \[ga]--jottacloud-no-versions\[ga] option. This is achieved by deleting the remote file prior to uploading +a new version. If the upload the fails no version of the file will be available in the remote. + +### Quota information + +To view your current quota you can use the \[ga]rclone about remote:\[ga] +command which will display your usage limit (unless it is unlimited) +and the current usage. + + +### Standard options + +Here are the Standard options specific to jottacloud (Jottacloud). + +#### --jottacloud-client-id + +OAuth Client Id. + +Leave blank normally. + +Properties: + +- Config: client_id +- Env Var: RCLONE_JOTTACLOUD_CLIENT_ID +- Type: string +- Required: false + +#### --jottacloud-client-secret + +OAuth Client Secret. + +Leave blank normally. + +Properties: + +- Config: client_secret +- Env Var: RCLONE_JOTTACLOUD_CLIENT_SECRET +- Type: string +- Required: false + +### Advanced options + Here are the Advanced options specific to jottacloud (Jottacloud). -.SS --jottacloud-md5-memory-limit -.PP -Files bigger than this will be cached on disk to calculate the MD5 if -required. -.PP + +#### --jottacloud-token + +OAuth Access Token as a JSON blob. + Properties: -.IP \[bu] 2 -Config: md5_memory_limit -.IP \[bu] 2 -Env Var: RCLONE_JOTTACLOUD_MD5_MEMORY_LIMIT -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 10Mi -.SS --jottacloud-trashed-only -.PP + +- Config: token +- Env Var: RCLONE_JOTTACLOUD_TOKEN +- Type: string +- Required: false + +#### --jottacloud-auth-url + +Auth server URL. + +Leave blank to use the provider defaults. + +Properties: + +- Config: auth_url +- Env Var: RCLONE_JOTTACLOUD_AUTH_URL +- Type: string +- Required: false + +#### --jottacloud-token-url + +Token server url. + +Leave blank to use the provider defaults. + +Properties: + +- Config: token_url +- Env Var: RCLONE_JOTTACLOUD_TOKEN_URL +- Type: string +- Required: false + +#### --jottacloud-md5-memory-limit + +Files bigger than this will be cached on disk to calculate the MD5 if required. + +Properties: + +- Config: md5_memory_limit +- Env Var: RCLONE_JOTTACLOUD_MD5_MEMORY_LIMIT +- Type: SizeSuffix +- Default: 10Mi + +#### --jottacloud-trashed-only + Only show files that are in the trash. 
-.PP + This will show trashed files in their original directory structure. -.PP + Properties: -.IP \[bu] 2 -Config: trashed_only -.IP \[bu] 2 -Env Var: RCLONE_JOTTACLOUD_TRASHED_ONLY -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --jottacloud-hard-delete -.PP + +- Config: trashed_only +- Env Var: RCLONE_JOTTACLOUD_TRASHED_ONLY +- Type: bool +- Default: false + +#### --jottacloud-hard-delete + Delete files permanently rather than putting them into the trash. -.PP + Properties: -.IP \[bu] 2 -Config: hard_delete -.IP \[bu] 2 -Env Var: RCLONE_JOTTACLOUD_HARD_DELETE -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --jottacloud-upload-resume-limit -.PP + +- Config: hard_delete +- Env Var: RCLONE_JOTTACLOUD_HARD_DELETE +- Type: bool +- Default: false + +#### --jottacloud-upload-resume-limit + Files bigger than this can be resumed if the upload fail\[aq]s. -.PP + Properties: -.IP \[bu] 2 -Config: upload_resume_limit -.IP \[bu] 2 -Env Var: RCLONE_JOTTACLOUD_UPLOAD_RESUME_LIMIT -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 10Mi -.SS --jottacloud-no-versions -.PP -Avoid server side versioning by deleting files and recreating files -instead of overwriting them. -.PP + +- Config: upload_resume_limit +- Env Var: RCLONE_JOTTACLOUD_UPLOAD_RESUME_LIMIT +- Type: SizeSuffix +- Default: 10Mi + +#### --jottacloud-no-versions + +Avoid server side versioning by deleting files and recreating files instead of overwriting them. + Properties: -.IP \[bu] 2 -Config: no_versions -.IP \[bu] 2 -Env Var: RCLONE_JOTTACLOUD_NO_VERSIONS -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --jottacloud-encoding -.PP + +- Config: no_versions +- Env Var: RCLONE_JOTTACLOUD_NO_VERSIONS +- Type: bool +- Default: false + +#### --jottacloud-encoding + The encoding for the backend. -.PP -See the encoding section in the -overview (https://rclone.org/overview/#encoding) for more info. -.PP + +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + Properties: -.IP \[bu] 2 -Config: encoding -.IP \[bu] 2 -Env Var: RCLONE_JOTTACLOUD_ENCODING -.IP \[bu] 2 -Type: MultiEncoder -.IP \[bu] 2 -Default: -Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot -.SS Limitations -.PP -Note that Jottacloud is case insensitive so you can\[aq]t have a file -called \[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. -.PP -There are quite a few characters that can\[aq]t be in Jottacloud file -names. -Rclone will map these names to and from an identical looking unicode -equivalent. -For example if a file has a ? -in it will be mapped to \[uFF1F] instead. -.PP + +- Config: encoding +- Env Var: RCLONE_JOTTACLOUD_ENCODING +- Type: MultiEncoder +- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot + + + +## Limitations + +Note that Jottacloud is case insensitive so you can\[aq]t have a file called +\[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. + +There are quite a few characters that can\[aq]t be in Jottacloud file names. Rclone will map these names to and from an identical +looking unicode equivalent. For example if a file has a ? in it will be mapped to \[uFF1F] instead. + Jottacloud only supports filenames up to 255 characters in length. -.SS Troubleshooting -.PP -Jottacloud exhibits some inconsistent behaviours regarding deleted files -and folders which may cause Copy, Move and DirMove operations to -previously deleted paths to fail. -Emptying the trash should help in such cases. 
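
For example (assuming a Jottacloud remote named \[ga]remote:\[ga]; the paths are
placeholders), emptying the trash before retrying a copy that failed on a
previously deleted path:

    rclone cleanup remote:
    rclone copy /home/source remote:backup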
-.SH Koofr -.PP -Paths are specified as \f[C]remote:path\f[R] -.PP -Paths may be as deep as required, e.g. -\f[C]remote:directory/subdirectory\f[R]. -.SS Configuration -.PP -The initial setup for Koofr involves creating an application password -for rclone. -You can do that by opening the Koofr web -application (https://app.koofr.net/app/admin/preferences/password), -giving the password a nice name like \f[C]rclone\f[R] and clicking on -generate. -.PP -Here is an example of how to make a remote called \f[C]koofr\f[R]. -First run: -.IP -.nf -\f[C] - rclone config + +## Troubleshooting + +Jottacloud exhibits some inconsistent behaviours regarding deleted files and folders which may cause Copy, Move and DirMove +operations to previously deleted paths to fail. Emptying the trash should help in such cases. + +# Koofr + +Paths are specified as \[ga]remote:path\[ga] + +Paths may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga]. + +## Configuration + +The initial setup for Koofr involves creating an application password for +rclone. You can do that by opening the Koofr +[web application](https://app.koofr.net/app/admin/preferences/password), +giving the password a nice name like \[ga]rclone\[ga] and clicking on generate. + +Here is an example of how to make a remote called \[ga]koofr\[ga]. First run: + + rclone config + +This will guide you through an interactive setup process: \f[R] .fi .PP -This will guide you through an interactive setup process: -.IP -.nf -\f[C] No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -name> koofr -Option Storage. +n) New remote s) Set configuration password q) Quit config n/s/q> n +name> koofr Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. -[snip] -22 / Koofr, Digi Storage and other Koofr-compatible storage providers - \[rs] (koofr) -[snip] -Storage> koofr -Option provider. +[snip] 22 / Koofr, Digi Storage and other Koofr-compatible storage +providers \ (koofr) [snip] Storage> koofr Option provider. Choose your storage provider. Choose a number from below, or type in your own value. Press Enter to leave empty. - 1 / Koofr, https://app.koofr.net/ - \[rs] (koofr) - 2 / Digi Storage, https://storage.rcs-rds.ro/ - \[rs] (digistorage) - 3 / Any other Koofr API compatible storage service - \[rs] (other) -provider> 1 +1 / Koofr, https://app.koofr.net/ \ (koofr) 2 / Digi Storage, +https://storage.rcs-rds.ro/ \ (digistorage) 3 / Any other Koofr API +compatible storage service \ (other) provider> 1 +.PD 0 +.P +.PD Option user. Your user name. Enter a value. -user> USERNAME -Option password. -Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password). -Choose an alternative below. -y) Yes, type in my own password -g) Generate random password -y/g> y -Enter the password: -password: -Confirm the password: -password: -Edit advanced config? -y) Yes -n) No (default) -y/n> n -Remote config --------------------- -[koofr] -type = koofr -provider = koofr -user = USERNAME -password = *** ENCRYPTED *** --------------------- -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y -\f[R] -.fi -.PP -You can choose to edit advanced config in order to enter your own -service URL if you use an on-premise or white label Koofr instance, or -choose an alternative mount instead of your primary storage. 
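
As a sketch of a non-interactive alternative (the remote name, username and
application password are placeholders, and \[ga]rclone obscure\[ga] is used to
encode the password), something like the following should also work:

    rclone config create koofr koofr \
        provider koofr user USERNAME \
        password $(rclone obscure 'APP-PASSWORD')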
-.PP -Once configured you can then use \f[C]rclone\f[R] like this, -.PP -List directories in top level of your Koofr -.IP -.nf -\f[C] -rclone lsd koofr: -\f[R] -.fi -.PP -List all the files in your Koofr -.IP -.nf -\f[C] -rclone ls koofr: -\f[R] -.fi -.PP -To copy a local directory to an Koofr directory called backup -.IP -.nf -\f[C] -rclone copy /home/source koofr:backup -\f[R] -.fi -.SS Restricted filename characters -.PP -In addition to the default restricted characters -set (https://rclone.org/overview/#restricted-characters) the following -characters are also replaced: -.PP -.TS -tab(@); -l c c. -T{ -Character -T}@T{ -Value -T}@T{ -Replacement -T} -_ -T{ -\[rs] -T}@T{ -0x5C -T}@T{ -\[uFF3C] -T} -.TE -.PP -Invalid UTF-8 bytes will also be -replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t -be used in XML strings. -.SS Standard options -.PP -Here are the Standard options specific to koofr (Koofr, Digi Storage and -other Koofr-compatible storage providers). -.SS --koofr-provider -.PP -Choose your storage provider. -.PP -Properties: -.IP \[bu] 2 -Config: provider -.IP \[bu] 2 -Env Var: RCLONE_KOOFR_PROVIDER -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]koofr\[dq] -.RS 2 -.IP \[bu] 2 -Koofr, https://app.koofr.net/ -.RE -.IP \[bu] 2 -\[dq]digistorage\[dq] -.RS 2 -.IP \[bu] 2 -Digi Storage, https://storage.rcs-rds.ro/ -.RE -.IP \[bu] 2 -\[dq]other\[dq] -.RS 2 -.IP \[bu] 2 -Any other Koofr API compatible storage service -.RE -.RE -.SS --koofr-endpoint -.PP -The Koofr API endpoint to use. -.PP -Properties: -.IP \[bu] 2 -Config: endpoint -.IP \[bu] 2 -Env Var: RCLONE_KOOFR_ENDPOINT -.IP \[bu] 2 -Provider: other -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.SS --koofr-user -.PP -Your user name. -.PP -Properties: -.IP \[bu] 2 -Config: user -.IP \[bu] 2 -Env Var: RCLONE_KOOFR_USER -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.SS --koofr-password -.PP +user> USERNAME Option password. Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password). -.PP -\f[B]NB\f[R] Input to this must be obscured - see rclone -obscure (https://rclone.org/commands/rclone_obscure/). -.PP +Choose an alternative below. +y) Yes, type in my own password g) Generate random password y/g> y Enter +the password: password: Confirm the password: password: Edit advanced +config? +y) Yes n) No (default) y/n> n Remote config -------------------- [koofr] +type = koofr provider = koofr user = USERNAME password = *** ENCRYPTED +*** -------------------- y) Yes this is OK (default) e) Edit this remote +d) Delete this remote y/e/d> y +.IP +.nf +\f[C] +You can choose to edit advanced config in order to enter your own service URL +if you use an on-premise or white label Koofr instance, or choose an alternative +mount instead of your primary storage. 
+ +Once configured you can then use \[ga]rclone\[ga] like this, + +List directories in top level of your Koofr + + rclone lsd koofr: + +List all the files in your Koofr + + rclone ls koofr: + +To copy a local directory to an Koofr directory called backup + + rclone copy /home/source koofr:backup + +### Restricted filename characters + +In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) +the following characters are also replaced: + +| Character | Value | Replacement | +| --------- |:-----:|:-----------:| +| \[rs] | 0x5C | \[uFF3C] | + +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), +as they can\[aq]t be used in XML strings. + + +### Standard options + +Here are the Standard options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers). + +#### --koofr-provider + +Choose your storage provider. + Properties: -.IP \[bu] 2 -Config: password -.IP \[bu] 2 -Env Var: RCLONE_KOOFR_PASSWORD -.IP \[bu] 2 -Provider: koofr -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.SS --koofr-password -.PP -Your password for rclone (generate one at -https://storage.rcs-rds.ro/app/admin/preferences/password). -.PP -\f[B]NB\f[R] Input to this must be obscured - see rclone -obscure (https://rclone.org/commands/rclone_obscure/). -.PP + +- Config: provider +- Env Var: RCLONE_KOOFR_PROVIDER +- Type: string +- Required: false +- Examples: + - \[dq]koofr\[dq] + - Koofr, https://app.koofr.net/ + - \[dq]digistorage\[dq] + - Digi Storage, https://storage.rcs-rds.ro/ + - \[dq]other\[dq] + - Any other Koofr API compatible storage service + +#### --koofr-endpoint + +The Koofr API endpoint to use. + Properties: -.IP \[bu] 2 -Config: password -.IP \[bu] 2 -Env Var: RCLONE_KOOFR_PASSWORD -.IP \[bu] 2 -Provider: digistorage -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.SS --koofr-password -.PP -Your password for rclone (generate one at your service\[aq]s settings -page). -.PP -\f[B]NB\f[R] Input to this must be obscured - see rclone -obscure (https://rclone.org/commands/rclone_obscure/). -.PP + +- Config: endpoint +- Env Var: RCLONE_KOOFR_ENDPOINT +- Provider: other +- Type: string +- Required: true + +#### --koofr-user + +Your user name. + Properties: -.IP \[bu] 2 -Config: password -.IP \[bu] 2 -Env Var: RCLONE_KOOFR_PASSWORD -.IP \[bu] 2 -Provider: other -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.SS Advanced options -.PP -Here are the Advanced options specific to koofr (Koofr, Digi Storage and -other Koofr-compatible storage providers). -.SS --koofr-mountid -.PP + +- Config: user +- Env Var: RCLONE_KOOFR_USER +- Type: string +- Required: true + +#### --koofr-password + +Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password). + +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + +Properties: + +- Config: password +- Env Var: RCLONE_KOOFR_PASSWORD +- Provider: koofr +- Type: string +- Required: true + +#### --koofr-password + +Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password). + +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + +Properties: + +- Config: password +- Env Var: RCLONE_KOOFR_PASSWORD +- Provider: digistorage +- Type: string +- Required: true + +#### --koofr-password + +Your password for rclone (generate one at your service\[aq]s settings page). 
+ +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + +Properties: + +- Config: password +- Env Var: RCLONE_KOOFR_PASSWORD +- Provider: other +- Type: string +- Required: true + +### Advanced options + +Here are the Advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers). + +#### --koofr-mountid + Mount ID of the mount to use. -.PP + If omitted, the primary mount is used. -.PP + Properties: -.IP \[bu] 2 -Config: mountid -.IP \[bu] 2 -Env Var: RCLONE_KOOFR_MOUNTID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --koofr-setmtime -.PP + +- Config: mountid +- Env Var: RCLONE_KOOFR_MOUNTID +- Type: string +- Required: false + +#### --koofr-setmtime + Does the backend support setting modification time. -.PP -Set this to false if you use a mount ID that points to a Dropbox or -Amazon Drive backend. -.PP + +Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend. + Properties: -.IP \[bu] 2 -Config: setmtime -.IP \[bu] 2 -Env Var: RCLONE_KOOFR_SETMTIME -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: true -.SS --koofr-encoding -.PP + +- Config: setmtime +- Env Var: RCLONE_KOOFR_SETMTIME +- Type: bool +- Default: true + +#### --koofr-encoding + The encoding for the backend. -.PP -See the encoding section in the -overview (https://rclone.org/overview/#encoding) for more info. -.PP + +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + Properties: -.IP \[bu] 2 -Config: encoding -.IP \[bu] 2 -Env Var: RCLONE_KOOFR_ENCODING -.IP \[bu] 2 -Type: MultiEncoder -.IP \[bu] 2 -Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot -.SS Limitations -.PP + +- Config: encoding +- Env Var: RCLONE_KOOFR_ENCODING +- Type: MultiEncoder +- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot + + + +## Limitations + Note that Koofr is case insensitive so you can\[aq]t have a file called \[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. -.SS Providers -.SS Koofr -.PP -This is the original Koofr (https://koofr.eu) storage provider used as -main example and described in the configuration section above. -.SS Digi Storage -.PP -Digi Storage (https://www.digi.ro/servicii/online/digi-storage) is a -cloud storage service run by Digi.ro (https://www.digi.ro/) that + +## Providers + +### Koofr + +This is the original [Koofr](https://koofr.eu) storage provider used as main example and described in the [configuration](#configuration) section above. + +### Digi Storage + +[Digi Storage](https://www.digi.ro/servicii/online/digi-storage) is a cloud storage service run by [Digi.ro](https://www.digi.ro/) that provides a Koofr API. -.PP -Here is an example of how to make a remote called \f[C]ds\f[R]. -First run: -.IP -.nf -\f[C] - rclone config + +Here is an example of how to make a remote called \[ga]ds\[ga]. First run: + + rclone config + +This will guide you through an interactive setup process: \f[R] .fi .PP -This will guide you through an interactive setup process: -.IP -.nf -\f[C] No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -name> ds -Option Storage. +n) New remote s) Set configuration password q) Quit config n/s/q> n +name> ds Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. -[snip] -22 / Koofr, Digi Storage and other Koofr-compatible storage providers - \[rs] (koofr) -[snip] -Storage> koofr -Option provider. 
+[snip] 22 / Koofr, Digi Storage and other Koofr-compatible storage +providers \ (koofr) [snip] Storage> koofr Option provider. Choose your storage provider. Choose a number from below, or type in your own value. Press Enter to leave empty. - 1 / Koofr, https://app.koofr.net/ - \[rs] (koofr) - 2 / Digi Storage, https://storage.rcs-rds.ro/ - \[rs] (digistorage) - 3 / Any other Koofr API compatible storage service - \[rs] (other) -provider> 2 -Option user. +1 / Koofr, https://app.koofr.net/ \ (koofr) 2 / Digi Storage, +https://storage.rcs-rds.ro/ \ (digistorage) 3 / Any other Koofr API +compatible storage service \ (other) provider> 2 Option user. Your user name. Enter a value. -user> USERNAME -Option password. -Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password). +user> USERNAME Option password. +Your password for rclone (generate one at +https://storage.rcs-rds.ro/app/admin/preferences/password). Choose an alternative below. -y) Yes, type in my own password -g) Generate random password -y/g> y -Enter the password: -password: -Confirm the password: -password: -Edit advanced config? -y) Yes -n) No (default) -y/n> n --------------------- -[ds] -type = koofr -provider = digistorage -user = USERNAME -password = *** ENCRYPTED *** --------------------- -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y -\f[R] -.fi -.SS Other -.PP -You may also want to use another, public or private storage provider -that runs a Koofr API compatible service, by simply providing the base -URL to connect to. -.PP -Here is an example of how to make a remote called \f[C]other\f[R]. -First run: +y) Yes, type in my own password g) Generate random password y/g> y Enter +the password: password: Confirm the password: password: Edit advanced +config? +y) Yes n) No (default) y/n> n -------------------- [ds] type = koofr +provider = digistorage user = USERNAME password = *** ENCRYPTED *** +-------------------- y) Yes this is OK (default) e) Edit this remote d) +Delete this remote y/e/d> y .IP .nf \f[C] - rclone config -\f[R] -.fi -.PP +### Other + +You may also want to use another, public or private storage provider that runs a Koofr API compatible service, by simply providing the base URL to connect to. + +Here is an example of how to make a remote called \[ga]other\[ga]. First run: + + rclone config + This will guide you through an interactive setup process: -.IP -.nf -\f[C] +\f[R] +.fi +.PP No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -name> other -Option Storage. +n) New remote s) Set configuration password q) Quit config n/s/q> n +name> other Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. -[snip] -22 / Koofr, Digi Storage and other Koofr-compatible storage providers - \[rs] (koofr) -[snip] -Storage> koofr -Option provider. +[snip] 22 / Koofr, Digi Storage and other Koofr-compatible storage +providers \ (koofr) [snip] Storage> koofr Option provider. Choose your storage provider. Choose a number from below, or type in your own value. Press Enter to leave empty. - 1 / Koofr, https://app.koofr.net/ - \[rs] (koofr) - 2 / Digi Storage, https://storage.rcs-rds.ro/ - \[rs] (digistorage) - 3 / Any other Koofr API compatible storage service - \[rs] (other) -provider> 3 -Option endpoint. 
+1 / Koofr, https://app.koofr.net/ \ (koofr) 2 / Digi Storage, +https://storage.rcs-rds.ro/ \ (digistorage) 3 / Any other Koofr API +compatible storage service \ (other) provider> 3 Option endpoint. The Koofr API endpoint to use. Enter a value. -endpoint> https://koofr.other.org -Option user. +endpoint> https://koofr.other.org Option user. Your user name. Enter a value. -user> USERNAME -Option password. -Your password for rclone (generate one at your service\[aq]s settings page). +user> USERNAME Option password. +Your password for rclone (generate one at your service\[aq]s settings +page). Choose an alternative below. -y) Yes, type in my own password -g) Generate random password -y/g> y -Enter the password: -password: -Confirm the password: -password: -Edit advanced config? -y) Yes -n) No (default) -y/n> n --------------------- -[other] -type = koofr -provider = other -endpoint = https://koofr.other.org -user = USERNAME -password = *** ENCRYPTED *** --------------------- -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y -\f[R] -.fi -.SH Mail.ru Cloud -.PP -Mail.ru Cloud (https://cloud.mail.ru/) is a cloud storage provided by a -Russian internet company Mail.Ru Group (https://mail.ru). -The official desktop client is Disk-O: (https://disk-o.cloud/en), -available on Windows and Mac OS. -.PP -Currently it is recommended to disable 2FA on Mail.ru accounts intended -for rclone until it gets eventually implemented. -.SS Features highlights -.IP \[bu] 2 -Paths may be as deep as required, e.g. -\f[C]remote:directory/subdirectory\f[R] -.IP \[bu] 2 -Files have a \f[C]last modified time\f[R] property, directories -don\[aq]t -.IP \[bu] 2 -Deleted files are by default moved to the trash -.IP \[bu] 2 -Files and directories can be shared via public links -.IP \[bu] 2 -Partial uploads or streaming are not supported, file size must be known -before upload -.IP \[bu] 2 -Maximum file size is limited to 2G for a free account, unlimited for -paid accounts -.IP \[bu] 2 -Storage keeps hash for all files and performs transparent deduplication, -the hash algorithm is a modified SHA1 -.IP \[bu] 2 -If a particular file is already present in storage, one can quickly -submit file hash instead of long file upload (this optimization is -supported by rclone) -.SS Configuration -.PP -Here is an example of making a mailru configuration. -.PP -First create a Mail.ru Cloud account and choose a tariff. -.PP -You will need to log in and create an app password for rclone. -Rclone \f[B]will not work\f[R] with your normal username and password - -it will give an error like -\f[C]oauth2: server response missing access_token\f[R]. -.IP \[bu] 2 -Click on your user icon in the top right -.IP \[bu] 2 -Go to Security / \[dq]\[u041F]\[u0430]\[u0440]\[u043E]\[u043B]\[u044C] -\[u0438] -\[u0431]\[u0435]\[u0437]\[u043E]\[u043F]\[u0430]\[u0441]\[u043D]\[u043E]\[u0441]\[u0442]\[u044C]\[dq] -.IP \[bu] 2 -Click password for apps / -\[dq]\[u041F]\[u0430]\[u0440]\[u043E]\[u043B]\[u0438] -\[u0434]\[u043B]\[u044F] -\[u0432]\[u043D]\[u0435]\[u0448]\[u043D]\[u0438]\[u0445] -\[u043F]\[u0440]\[u0438]\[u043B]\[u043E]\[u0436]\[u0435]\[u043D]\[u0438]\[u0439]\[dq] -.IP \[bu] 2 -Add the password - give it a name - eg \[dq]rclone\[dq] -.IP \[bu] 2 -Copy the password and use this password below - your normal login -password won\[aq]t work. -.PP -Now run +y) Yes, type in my own password g) Generate random password y/g> y Enter +the password: password: Confirm the password: password: Edit advanced +config? 
+y) Yes n) No (default) y/n> n -------------------- [other] type = koofr +provider = other endpoint = https://koofr.other.org user = USERNAME +password = *** ENCRYPTED *** -------------------- y) Yes this is OK +(default) e) Edit this remote d) Delete this remote y/e/d> y .IP .nf \f[C] -rclone config -\f[R] -.fi -.PP -This will guide you through an interactive setup process: -.IP -.nf -\f[C] -No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -name> remote -Type of storage to configure. -Type of storage to configure. -Enter a string value. Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value -[snip] -XX / Mail.ru Cloud - \[rs] \[dq]mailru\[dq] -[snip] -Storage> mailru -User name (usually email) -Enter a string value. Press Enter for the default (\[dq]\[dq]). -user> username\[at]mail.ru -Password +# Mail.ru Cloud -This must be an app password - rclone will not work with your normal -password. See the Configuration section in the docs for how to make an -app password. -y) Yes type in my own password -g) Generate random password -y/g> y -Enter the password: -password: -Confirm the password: -password: -Skip full upload if there is another file with same data hash. -This feature is called \[dq]speedup\[dq] or \[dq]put by hash\[dq]. It is especially efficient -in case of generally available files like popular books, video or audio clips -[snip] -Enter a boolean value (true or false). Press Enter for the default (\[dq]true\[dq]). -Choose a number from below, or type in your own value - 1 / Enable - \[rs] \[dq]true\[dq] - 2 / Disable - \[rs] \[dq]false\[dq] -speedup_enable> 1 -Edit advanced config? (y/n) -y) Yes -n) No -y/n> n -Remote config --------------------- -[remote] -type = mailru -user = username\[at]mail.ru -pass = *** ENCRYPTED *** -speedup_enable = true --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y +[Mail.ru Cloud](https://cloud.mail.ru/) is a cloud storage provided by a Russian internet company [Mail.Ru Group](https://mail.ru). The official desktop client is [Disk-O:](https://disk-o.cloud/en), available on Windows and Mac OS. + +## Features highlights + +- Paths may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga] +- Files have a \[ga]last modified time\[ga] property, directories don\[aq]t +- Deleted files are by default moved to the trash +- Files and directories can be shared via public links +- Partial uploads or streaming are not supported, file size must be known before upload +- Maximum file size is limited to 2G for a free account, unlimited for paid accounts +- Storage keeps hash for all files and performs transparent deduplication, + the hash algorithm is a modified SHA1 +- If a particular file is already present in storage, one can quickly submit file hash + instead of long file upload (this optimization is supported by rclone) + +## Configuration + +Here is an example of making a mailru configuration. + +First create a Mail.ru Cloud account and choose a tariff. + +You will need to log in and create an app password for rclone. Rclone +**will not work** with your normal username and password - it will +give an error like \[ga]oauth2: server response missing access_token\[ga]. 
+ +- Click on your user icon in the top right +- Go to Security / \[dq]\[u041F]\[u0430]\[u0440]\[u043E]\[u043B]\[u044C] \[u0438] \[u0431]\[u0435]\[u0437]\[u043E]\[u043F]\[u0430]\[u0441]\[u043D]\[u043E]\[u0441]\[u0442]\[u044C]\[dq] +- Click password for apps / \[dq]\[u041F]\[u0430]\[u0440]\[u043E]\[u043B]\[u0438] \[u0434]\[u043B]\[u044F] \[u0432]\[u043D]\[u0435]\[u0448]\[u043D]\[u0438]\[u0445] \[u043F]\[u0440]\[u0438]\[u043B]\[u043E]\[u0436]\[u0435]\[u043D]\[u0438]\[u0439]\[dq] +- Add the password - give it a name - eg \[dq]rclone\[dq] +- Copy the password and use this password below - your normal login password won\[aq]t work. + +Now run + + rclone config + +This will guide you through an interactive setup process: \f[R] .fi .PP -Configuration of this backend does not require a local web browser. -You can use the configured backend as shown below: -.PP -See top level directories -.IP -.nf -\f[C] -rclone lsd remote: -\f[R] -.fi -.PP -Make a new directory -.IP -.nf -\f[C] -rclone mkdir remote:directory -\f[R] -.fi -.PP -List the contents of a directory -.IP -.nf -\f[C] -rclone ls remote:directory -\f[R] -.fi -.PP -Sync \f[C]/home/local/directory\f[R] to the remote path, deleting any -excess files in the path. -.IP -.nf -\f[C] -rclone sync --interactive /home/local/directory remote:directory -\f[R] -.fi -.SS Modified time -.PP -Files support a modification time attribute with up to 1 second -precision. -Directories do not have a modification time, which is shown as \[dq]Jan -1 1970\[dq]. -.SS Hash checksums -.PP -Hash sums use a custom Mail.ru algorithm based on SHA1. -If file size is less than or equal to the SHA1 block size (20 bytes), -its hash is simply its data right-padded with zero bytes. -Hash sum of a larger file is computed as a SHA1 sum of the file data -bytes concatenated with a decimal representation of the data length. -.SS Emptying Trash -.PP -Removing a file or directory actually moves it to the trash, which is -not visible to rclone but can be seen in a web browser. -The trashed file still occupies part of total quota. -If you wish to empty your trash and free some quota, you can use the -\f[C]rclone cleanup remote:\f[R] command, which will permanently delete -all your trashed files. -This command does not take any path arguments. -.SS Quota information -.PP -To view your current quota you can use the -\f[C]rclone about remote:\f[R] command which will display your usage -limit (quota) and the current usage. -.SS Restricted filename characters -.PP -In addition to the default restricted characters -set (https://rclone.org/overview/#restricted-characters) the following -characters are also replaced: -.PP -.TS -tab(@); -l c c. -T{ -Character -T}@T{ -Value -T}@T{ -Replacement -T} -_ -T{ -\[dq] -T}@T{ -0x22 -T}@T{ -\[uFF02] -T} -T{ -* -T}@T{ -0x2A -T}@T{ -\[uFF0A] -T} -T{ -: -T}@T{ -0x3A -T}@T{ -\[uFF1A] -T} -T{ -< -T}@T{ -0x3C -T}@T{ -\[uFF1C] -T} -T{ -> -T}@T{ -0x3E -T}@T{ -\[uFF1E] -T} -T{ -? -T}@T{ -0x3F -T}@T{ -\[uFF1F] -T} -T{ -\[rs] -T}@T{ -0x5C -T}@T{ -\[uFF3C] -T} -T{ -| -T}@T{ -0x7C -T}@T{ -\[uFF5C] -T} -.TE -.PP -Invalid UTF-8 bytes will also be -replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t -be used in JSON strings. -.SS Standard options -.PP -Here are the Standard options specific to mailru (Mail.ru Cloud). -.SS --mailru-user -.PP -User name (usually email). -.PP -Properties: -.IP \[bu] 2 -Config: user -.IP \[bu] 2 -Env Var: RCLONE_MAILRU_USER -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.SS --mailru-pass -.PP -Password. 
+No remotes found, make a new one? +n) New remote s) Set configuration password q) Quit config n/s/q> n +name> remote Type of storage to configure. +Type of storage to configure. +Enter a string value. +Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value [snip] XX / +Mail.ru Cloud \ \[dq]mailru\[dq] [snip] Storage> mailru User name +(usually email) Enter a string value. +Press Enter for the default (\[dq]\[dq]). +user> username\[at]mail.ru Password .PP This must be an app password - rclone will not work with your normal password. See the Configuration section in the docs for how to make an app password. -.PP -\f[B]NB\f[R] Input to this must be obscured - see rclone -obscure (https://rclone.org/commands/rclone_obscure/). -.PP -Properties: -.IP \[bu] 2 -Config: pass -.IP \[bu] 2 -Env Var: RCLONE_MAILRU_PASS -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.SS --mailru-speedup-enable -.PP -Skip full upload if there is another file with same data hash. -.PP +y) Yes type in my own password g) Generate random password y/g> y Enter +the password: password: Confirm the password: password: Skip full upload +if there is another file with same data hash. This feature is called \[dq]speedup\[dq] or \[dq]put by hash\[dq]. It is especially efficient in case of generally available files like -popular books, video or audio clips, because files are searched by hash -in all accounts of all mailru users. +popular books, video or audio clips [snip] Enter a boolean value (true +or false). +Press Enter for the default (\[dq]true\[dq]). +Choose a number from below, or type in your own value 1 / Enable +\ \[dq]true\[dq] 2 / Disable \ \[dq]false\[dq] speedup_enable> 1 Edit +advanced config? +(y/n) y) Yes n) No y/n> n Remote config -------------------- [remote] +type = mailru user = username\[at]mail.ru pass = *** ENCRYPTED *** +speedup_enable = true -------------------- y) Yes this is OK e) Edit +this remote d) Delete this remote y/e/d> y +.IP +.nf +\f[C] +Configuration of this backend does not require a local web browser. +You can use the configured backend as shown below: + +See top level directories + + rclone lsd remote: + +Make a new directory + + rclone mkdir remote:directory + +List the contents of a directory + + rclone ls remote:directory + +Sync \[ga]/home/local/directory\[ga] to the remote path, deleting any +excess files in the path. + + rclone sync --interactive /home/local/directory remote:directory + +### Modified time + +Files support a modification time attribute with up to 1 second precision. +Directories do not have a modification time, which is shown as \[dq]Jan 1 1970\[dq]. + +### Hash checksums + +Hash sums use a custom Mail.ru algorithm based on SHA1. +If file size is less than or equal to the SHA1 block size (20 bytes), +its hash is simply its data right-padded with zero bytes. +Hash sum of a larger file is computed as a SHA1 sum of the file data +bytes concatenated with a decimal representation of the data length. + +### Emptying Trash + +Removing a file or directory actually moves it to the trash, which is not +visible to rclone but can be seen in a web browser. The trashed file +still occupies part of total quota. If you wish to empty your trash +and free some quota, you can use the \[ga]rclone cleanup remote:\[ga] command, +which will permanently delete all your trashed files. +This command does not take any path arguments. 
+ +### Quota information + +To view your current quota you can use the \[ga]rclone about remote:\[ga] +command which will display your usage limit (quota) and the current usage. + +### Restricted filename characters + +In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) +the following characters are also replaced: + +| Character | Value | Replacement | +| --------- |:-----:|:-----------:| +| \[dq] | 0x22 | \[uFF02] | +| * | 0x2A | \[uFF0A] | +| : | 0x3A | \[uFF1A] | +| < | 0x3C | \[uFF1C] | +| > | 0x3E | \[uFF1E] | +| ? | 0x3F | \[uFF1F] | +| \[rs] | 0x5C | \[uFF3C] | +| \[rs]| | 0x7C | \[uFF5C] | + +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), +as they can\[aq]t be used in JSON strings. + + +### Standard options + +Here are the Standard options specific to mailru (Mail.ru Cloud). + +#### --mailru-client-id + +OAuth Client Id. + +Leave blank normally. + +Properties: + +- Config: client_id +- Env Var: RCLONE_MAILRU_CLIENT_ID +- Type: string +- Required: false + +#### --mailru-client-secret + +OAuth Client Secret. + +Leave blank normally. + +Properties: + +- Config: client_secret +- Env Var: RCLONE_MAILRU_CLIENT_SECRET +- Type: string +- Required: false + +#### --mailru-user + +User name (usually email). + +Properties: + +- Config: user +- Env Var: RCLONE_MAILRU_USER +- Type: string +- Required: true + +#### --mailru-pass + +Password. + +This must be an app password - rclone will not work with your normal +password. See the Configuration section in the docs for how to make an +app password. + + +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + +Properties: + +- Config: pass +- Env Var: RCLONE_MAILRU_PASS +- Type: string +- Required: true + +#### --mailru-speedup-enable + +Skip full upload if there is another file with same data hash. + +This feature is called \[dq]speedup\[dq] or \[dq]put by hash\[dq]. It is especially efficient +in case of generally available files like popular books, video or audio clips, +because files are searched by hash in all accounts of all mailru users. It is meaningless and ineffective if source file is unique or encrypted. -Please note that rclone may need local memory and disk space to -calculate content hash in advance and decide whether full upload is -required. -Also, if rclone does not know file size in advance (e.g. -in case of streaming or partial uploads), it will not even try this -optimization. -.PP +Please note that rclone may need local memory and disk space to calculate +content hash in advance and decide whether full upload is required. +Also, if rclone does not know file size in advance (e.g. in case of +streaming or partial uploads), it will not even try this optimization. + Properties: -.IP \[bu] 2 -Config: speedup_enable -.IP \[bu] 2 -Env Var: RCLONE_MAILRU_SPEEDUP_ENABLE -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: true -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]true\[dq] -.RS 2 -.IP \[bu] 2 -Enable -.RE -.IP \[bu] 2 -\[dq]false\[dq] -.RS 2 -.IP \[bu] 2 -Disable -.RE -.RE -.SS Advanced options -.PP + +- Config: speedup_enable +- Env Var: RCLONE_MAILRU_SPEEDUP_ENABLE +- Type: bool +- Default: true +- Examples: + - \[dq]true\[dq] + - Enable + - \[dq]false\[dq] + - Disable + +### Advanced options + Here are the Advanced options specific to mailru (Mail.ru Cloud). -.SS --mailru-speedup-file-patterns -.PP -Comma separated list of file name patterns eligible for speedup (put by -hash). 
-.PP -Patterns are case insensitive and can contain \[aq]*\[aq] or \[aq]?\[aq] -meta characters. -.PP + +#### --mailru-token + +OAuth Access Token as a JSON blob. + Properties: -.IP \[bu] 2 -Config: speedup_file_patterns -.IP \[bu] 2 -Env Var: RCLONE_MAILRU_SPEEDUP_FILE_PATTERNS -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: -\[dq]\f[I].mkv,\f[R].avi,\f[I].mp4,\f[R].mp3,\f[I].zip,\f[R].gz,\f[I].rar,\f[R].pdf\[dq] -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]\[dq] -.RS 2 -.IP \[bu] 2 -Empty list completely disables speedup (put by hash). -.RE -.IP \[bu] 2 -\[dq]*\[dq] -.RS 2 -.IP \[bu] 2 -All files will be attempted for speedup. -.RE -.IP \[bu] 2 -\[dq]\f[I].mkv,\f[R].avi,\f[I].mp4,\f[R].mp3\[dq] -.RS 2 -.IP \[bu] 2 -Only common audio/video files will be tried for put by hash. -.RE -.IP \[bu] 2 -\[dq]\f[I].zip,\f[R].gz,\f[I].rar,\f[R].pdf\[dq] -.RS 2 -.IP \[bu] 2 -Only common archives or PDF books will be tried for speedup. -.RE -.RE -.SS --mailru-speedup-max-disk -.PP + +- Config: token +- Env Var: RCLONE_MAILRU_TOKEN +- Type: string +- Required: false + +#### --mailru-auth-url + +Auth server URL. + +Leave blank to use the provider defaults. + +Properties: + +- Config: auth_url +- Env Var: RCLONE_MAILRU_AUTH_URL +- Type: string +- Required: false + +#### --mailru-token-url + +Token server url. + +Leave blank to use the provider defaults. + +Properties: + +- Config: token_url +- Env Var: RCLONE_MAILRU_TOKEN_URL +- Type: string +- Required: false + +#### --mailru-speedup-file-patterns + +Comma separated list of file name patterns eligible for speedup (put by hash). + +Patterns are case insensitive and can contain \[aq]*\[aq] or \[aq]?\[aq] meta characters. + +Properties: + +- Config: speedup_file_patterns +- Env Var: RCLONE_MAILRU_SPEEDUP_FILE_PATTERNS +- Type: string +- Default: \[dq]*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf\[dq] +- Examples: + - \[dq]\[dq] + - Empty list completely disables speedup (put by hash). + - \[dq]*\[dq] + - All files will be attempted for speedup. + - \[dq]*.mkv,*.avi,*.mp4,*.mp3\[dq] + - Only common audio/video files will be tried for put by hash. + - \[dq]*.zip,*.gz,*.rar,*.pdf\[dq] + - Only common archives or PDF books will be tried for speedup. + +#### --mailru-speedup-max-disk + This option allows you to disable speedup (put by hash) for large files. -.PP + Reason is that preliminary hashing can exhaust your RAM or disk space. -.PP + Properties: -.IP \[bu] 2 -Config: speedup_max_disk -.IP \[bu] 2 -Env Var: RCLONE_MAILRU_SPEEDUP_MAX_DISK -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 3Gi -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]0\[dq] -.RS 2 -.IP \[bu] 2 -Completely disable speedup (put by hash). -.RE -.IP \[bu] 2 -\[dq]1G\[dq] -.RS 2 -.IP \[bu] 2 -Files larger than 1Gb will be uploaded directly. -.RE -.IP \[bu] 2 -\[dq]3G\[dq] -.RS 2 -.IP \[bu] 2 -Choose this option if you have less than 3Gb free on local disk. -.RE -.RE -.SS --mailru-speedup-max-memory -.PP + +- Config: speedup_max_disk +- Env Var: RCLONE_MAILRU_SPEEDUP_MAX_DISK +- Type: SizeSuffix +- Default: 3Gi +- Examples: + - \[dq]0\[dq] + - Completely disable speedup (put by hash). + - \[dq]1G\[dq] + - Files larger than 1Gb will be uploaded directly. + - \[dq]3G\[dq] + - Choose this option if you have less than 3Gb free on local disk. + +#### --mailru-speedup-max-memory + Files larger than the size given below will always be hashed on disk. 
-.PP + Properties: -.IP \[bu] 2 -Config: speedup_max_memory -.IP \[bu] 2 -Env Var: RCLONE_MAILRU_SPEEDUP_MAX_MEMORY -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 32Mi -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]0\[dq] -.RS 2 -.IP \[bu] 2 -Preliminary hashing will always be done in a temporary disk location. -.RE -.IP \[bu] 2 -\[dq]32M\[dq] -.RS 2 -.IP \[bu] 2 -Do not dedicate more than 32Mb RAM for preliminary hashing. -.RE -.IP \[bu] 2 -\[dq]256M\[dq] -.RS 2 -.IP \[bu] 2 -You have at most 256Mb RAM free for hash calculations. -.RE -.RE -.SS --mailru-check-hash -.PP + +- Config: speedup_max_memory +- Env Var: RCLONE_MAILRU_SPEEDUP_MAX_MEMORY +- Type: SizeSuffix +- Default: 32Mi +- Examples: + - \[dq]0\[dq] + - Preliminary hashing will always be done in a temporary disk location. + - \[dq]32M\[dq] + - Do not dedicate more than 32Mb RAM for preliminary hashing. + - \[dq]256M\[dq] + - You have at most 256Mb RAM free for hash calculations. + +#### --mailru-check-hash + What should copy do if file checksum is mismatched or invalid. -.PP + Properties: -.IP \[bu] 2 -Config: check_hash -.IP \[bu] 2 -Env Var: RCLONE_MAILRU_CHECK_HASH -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: true -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]true\[dq] -.RS 2 -.IP \[bu] 2 -Fail with error. -.RE -.IP \[bu] 2 -\[dq]false\[dq] -.RS 2 -.IP \[bu] 2 -Ignore and continue. -.RE -.RE -.SS --mailru-user-agent -.PP + +- Config: check_hash +- Env Var: RCLONE_MAILRU_CHECK_HASH +- Type: bool +- Default: true +- Examples: + - \[dq]true\[dq] + - Fail with error. + - \[dq]false\[dq] + - Ignore and continue. + +#### --mailru-user-agent + HTTP user agent used internally by client. -.PP -Defaults to \[dq]rclone/VERSION\[dq] or \[dq]--user-agent\[dq] provided -on command line. -.PP + +Defaults to \[dq]rclone/VERSION\[dq] or \[dq]--user-agent\[dq] provided on command line. + Properties: -.IP \[bu] 2 -Config: user_agent -.IP \[bu] 2 -Env Var: RCLONE_MAILRU_USER_AGENT -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --mailru-quirks -.PP + +- Config: user_agent +- Env Var: RCLONE_MAILRU_USER_AGENT +- Type: string +- Required: false + +#### --mailru-quirks + Comma separated list of internal maintenance flags. -.PP -This option must not be used by an ordinary user. -It is intended only to facilitate remote troubleshooting of backend -issues. -Strict meaning of flags is not documented and not guaranteed to persist -between releases. + +This option must not be used by an ordinary user. It is intended only to +facilitate remote troubleshooting of backend issues. Strict meaning of +flags is not documented and not guaranteed to persist between releases. Quirks will be removed when the backend grows stable. Supported quirks: atomicmkdir binlist unknowndirs -.PP + Properties: -.IP \[bu] 2 -Config: quirks -.IP \[bu] 2 -Env Var: RCLONE_MAILRU_QUIRKS -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --mailru-encoding -.PP + +- Config: quirks +- Env Var: RCLONE_MAILRU_QUIRKS +- Type: string +- Required: false + +#### --mailru-encoding + The encoding for the backend. -.PP -See the encoding section in the -overview (https://rclone.org/overview/#encoding) for more info. -.PP + +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. 
+ Properties: -.IP \[bu] 2 -Config: encoding -.IP \[bu] 2 -Env Var: RCLONE_MAILRU_ENCODING -.IP \[bu] 2 -Type: MultiEncoder -.IP \[bu] 2 -Default: -Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot -.SS Limitations -.PP -File size limits depend on your account. -A single file size is limited by 2G for a free account and unlimited for -paid tariffs. -Please refer to the Mail.ru site for the total uploaded size limits. -.PP + +- Config: encoding +- Env Var: RCLONE_MAILRU_ENCODING +- Type: MultiEncoder +- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot + + + +## Limitations + +File size limits depend on your account. A single file size is limited by 2G +for a free account and unlimited for paid tariffs. Please refer to the Mail.ru +site for the total uploaded size limits. + Note that Mailru is case insensitive so you can\[aq]t have a file called \[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. -.SH Mega -.PP -Mega (https://mega.nz/) is a cloud storage and file hosting service + +# Mega + +[Mega](https://mega.nz/) is a cloud storage and file hosting service known for its security feature where all files are encrypted locally -before they are uploaded. -This prevents anyone (including employees of Mega) from accessing the -files without knowledge of the key used for encryption. -.PP +before they are uploaded. This prevents anyone (including employees of +Mega) from accessing the files without knowledge of the key used for +encryption. + This is an rclone backend for Mega which supports the file transfer features of Mega using the same client side encryption. -.PP -Paths are specified as \f[C]remote:path\f[R] -.PP -Paths may be as deep as required, e.g. -\f[C]remote:directory/subdirectory\f[R]. -.SS Configuration -.PP -Here is an example of how to make a remote called \f[C]remote\f[R]. -First run: -.IP -.nf -\f[C] - rclone config -\f[R] -.fi -.PP + +Paths are specified as \[ga]remote:path\[ga] + +Paths may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga]. + +## Configuration + +Here is an example of how to make a remote called \[ga]remote\[ga]. First run: + + rclone config + This will guide you through an interactive setup process: -.IP -.nf -\f[C] +\f[R] +.fi +.PP No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -name> remote -Type of storage to configure. -Choose a number from below, or type in your own value -[snip] -XX / Mega - \[rs] \[dq]mega\[dq] -[snip] -Storage> mega -User name -user> you\[at]example.com -Password. -y) Yes type in my own password -g) Generate random password -n) No leave this optional password blank -y/g/n> y -Enter the password: -password: -Confirm the password: -password: -Remote config --------------------- -[remote] -type = mega -user = you\[at]example.com -pass = *** ENCRYPTED *** --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -\f[R] -.fi -.PP -\f[B]NOTE:\f[R] The encryption keys need to have been already generated -after a regular login via the browser, otherwise attempting to use the -credentials in \f[C]rclone\f[R] will fail. -.PP -Once configured you can then use \f[C]rclone\f[R] like this, -.PP +n) New remote s) Set configuration password q) Quit config n/s/q> n +name> remote Type of storage to configure. 
+Choose a number from below, or type in your own value [snip] XX / Mega +\ \[dq]mega\[dq] [snip] Storage> mega User name user> +you\[at]example.com Password. +y) Yes type in my own password g) Generate random password n) No leave +this optional password blank y/g/n> y Enter the password: password: +Confirm the password: password: Remote config -------------------- +[remote] type = mega user = you\[at]example.com pass = *** ENCRYPTED *** +-------------------- y) Yes this is OK e) Edit this remote d) Delete +this remote y/e/d> y +.IP +.nf +\f[C] +**NOTE:** The encryption keys need to have been already generated after a regular login +via the browser, otherwise attempting to use the credentials in \[ga]rclone\[ga] will fail. + +Once configured you can then use \[ga]rclone\[ga] like this, + List directories in top level of your Mega -.IP -.nf -\f[C] -rclone lsd remote: -\f[R] -.fi -.PP + + rclone lsd remote: + List all the files in your Mega -.IP -.nf -\f[C] -rclone ls remote: -\f[R] -.fi -.PP + + rclone ls remote: + To copy a local directory to an Mega directory called backup -.IP -.nf -\f[C] -rclone copy /home/source remote:backup -\f[R] -.fi -.SS Modified time and hashes -.PP + + rclone copy /home/source remote:backup + +### Modified time and hashes + Mega does not support modification times or hashes yet. -.SS Restricted filename characters -.PP -.TS -tab(@); -l c c. -T{ -Character -T}@T{ -Value -T}@T{ -Replacement -T} -_ -T{ -NUL -T}@T{ -0x00 -T}@T{ -\[u2400] -T} -T{ -/ -T}@T{ -0x2F -T}@T{ -\[uFF0F] -T} -.TE -.PP -Invalid UTF-8 bytes will also be -replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t -be used in JSON strings. -.SS Duplicated files -.PP + +### Restricted filename characters + +| Character | Value | Replacement | +| --------- |:-----:|:-----------:| +| NUL | 0x00 | \[u2400] | +| / | 0x2F | \[uFF0F] | + +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), +as they can\[aq]t be used in JSON strings. + +### Duplicated files + Mega can have two files with exactly the same name and path (unlike a normal file system). -.PP + Duplicated files cause problems with the syncing and you will see messages in the log about duplicates. -.PP -Use \f[C]rclone dedupe\f[R] to fix duplicated files. -.SS Failure to log-in -.SS Object not found -.PP -If you are connecting to your Mega remote for the first time, to test -access and synchronization, you may receive an error such as -.IP -.nf -\f[C] -Failed to create file system for \[dq]my-mega-remote:\[dq]: -couldn\[aq]t login: Object (typically, node or user) not found + +Use \[ga]rclone dedupe\[ga] to fix duplicated files. + +### Failure to log-in + +#### Object not found + +If you are connecting to your Mega remote for the first time, +to test access and synchronization, you may receive an error such as \f[R] .fi .PP -The diagnostic steps often recommended in the rclone -forum (https://forum.rclone.org/search?q=mega) start with the -\f[B]MEGAcmd\f[R] utility. -Note that this refers to the official C++ command from -https://github.com/meganz/MEGAcmd and not the go language built command -from t3rm1n4l/megacmd that is no longer maintained. -.PP -Follow the instructions for installing MEGAcmd and try accessing your -remote as they recommend. -You can establish whether or not you can log in using MEGAcmd, and -obtain diagnostic information to help you, and search or work with -others in the forum. 
Failed to create file system for \[dq]my-mega-remote:\[dq]: couldn\[aq]t
login: Object (typically, node or user) not found
.IP
.nf
\f[C]
-MEGA CMD> login me\[at]example.com
-Password:
-Fetching nodes ...
-Loading transfers from local cache
-Login complete as me\[at]example.com
-me\[at]example.com:/$
+The diagnostic steps often recommended in the [rclone forum](https://forum.rclone.org/search?q=mega)
+start with the **MEGAcmd** utility. Note that this refers to
+the official C++ command from https://github.com/meganz/MEGAcmd
+and not the go language built command from t3rm1n4l/megacmd
+that is no longer maintained.
+
+Follow the instructions for installing MEGAcmd and try accessing
+your remote as they recommend. You can establish whether or not
+you can log in using MEGAcmd, and obtain diagnostic information
+to help you, and search or work with others in the forum.
\f[R]
.fi
.PP
-Note that some have found issues with passwords containing special
-characters.
-If you can not log on with rclone, but MEGAcmd logs on just fine, then
-consider changing your password temporarily to pure alphanumeric
-characters, in case that helps.
-.SS Repeated commands blocks access
-.PP
-Mega remotes seem to get blocked (reject logins) under \[dq]heavy
-use\[dq].
+MEGA CMD> login me\[at]example.com Password: Fetching nodes ...
+Loading transfers from local cache Login complete as me\[at]example.com
+me\[at]example.com:/$
.IP
.nf
\f[C]
+Note that some have found issues with passwords containing special
+characters. If you cannot log on with rclone, but MEGAcmd logs on
+just fine, then consider changing your password temporarily to
+pure alphanumeric characters, in case that helps.
+
+
+#### Repeated commands blocks access
+
+Mega remotes seem to get blocked (reject logins) under \[dq]heavy use\[dq].
 We haven\[aq]t worked out the exact blocking rules but it seems to be
 related to fast paced, successive rclone commands.
-.PP
-For example, executing this command 90 times in a row
-\f[C]rclone link remote:file\f[R] will cause the remote to become
-\[dq]blocked\[dq].
-This is not an abnormal situation, for example if you wish to get the
-public links of a directory with hundred of files...
-After more or less a week, the remote will remote accept rclone logins
-normally again.
-.PP
-You can mitigate this issue by mounting the remote it with
-\f[C]rclone mount\f[R].
-This will log-in when mounting and a log-out when unmounting only.
-You can also run \f[C]rclone rcd\f[R] and then use \f[C]rclone rc\f[R]
-to run the commands over the API to avoid logging in each time.
-.PP
+
+For example, executing this command 90 times in a row \[ga]rclone link
+remote:file\[ga] will cause the remote to become \[dq]blocked\[dq]. This is not an
+abnormal situation, for example if you wish to get the public links of
+a directory with hundreds of files... After more or less a week, the
+remote will again accept rclone logins normally.
+
+You can mitigate this issue by mounting the remote with \[ga]rclone
+mount\[ga]. This will log in when mounting and log out when unmounting
+only. You can also run \[ga]rclone rcd\[ga] and then use \[ga]rclone rc\[ga] to run
+the commands over the API to avoid logging in each time.
+
 Rclone does not currently close mega sessions (you can see them in the
 web interface), however closing the sessions does not solve the issue.
-.PP
+
 If you space rclone commands by 3 seconds it will avoid blocking the
-remote.
-We haven\[aq]t identified the exact blocking rules, so perhaps one could
-execute the command 80 times without waiting and avoid blocking by
-waiting 3 seconds, then continuing...
-.PP
-Note that this has been observed by trial and error and might not be set
-in stone.
-.PP
+remote. We haven\[aq]t identified the exact blocking rules, so perhaps one
+could execute the command 80 times without waiting and avoid blocking
+by waiting 3 seconds, then continuing...
+
+Note that this has been observed by trial and error and might not be
+set in stone.
+
 Other tools seem not to produce this blocking effect, as they use a
 different working approach (state-based, using sessionIDs instead of
 log-in) which isn\[aq]t compatible with the current stateless rclone
 approach.
-.PP
-Note that once blocked, the use of other tools (such as megacmd) is not
-a sure workaround: following megacmd login times have been observed in
-succession for blocked remote: 7 minutes, 20 min, 30min, 30 min, 30min.
-Web access looks unaffected though.
-.PP
+
+Note that once blocked, the use of other tools (such as megacmd) is
+not a sure workaround: the following megacmd login times have been
+observed in succession for a blocked remote: 7 minutes, 20 min, 30 min,
+30 min, 30 min. Web access looks unaffected, though.
+
 Investigation is continuing in relation to workarounds based on
 timeouts, pacers, retrials and tpslimits - if you discover something
 relevant, please post on the forum.
-.PP
+
 So, if rclone was working nicely and suddenly you are unable to log-in
-and you are sure the user and the password are correct, likely you have
-got the remote blocked for a while.
-.SS Standard options
-.PP
+and you are sure the user and the password are correct, you have
+likely got the remote blocked for a while.
+
+
+### Standard options
+
 Here are the Standard options specific to mega (Mega).
-.SS --mega-user
-.PP
+
+#### --mega-user
+
 User name.
-.PP
+
 Properties:
-.IP \[bu] 2
-Config: user
-.IP \[bu] 2
-Env Var: RCLONE_MEGA_USER
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: true
-.SS --mega-pass
-.PP
+
+- Config: user
+- Env Var: RCLONE_MEGA_USER
+- Type: string
+- Required: true
+
+#### --mega-pass
+
 Password.
-.PP
-\f[B]NB\f[R] Input to this must be obscured - see rclone
-obscure (https://rclone.org/commands/rclone_obscure/).
-.PP
+
+**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
 Properties:
-.IP \[bu] 2
-Config: pass
-.IP \[bu] 2
-Env Var: RCLONE_MEGA_PASS
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: true
-.SS Advanced options
-.PP
+
+- Config: pass
+- Env Var: RCLONE_MEGA_PASS
+- Type: string
+- Required: true
+
+### Advanced options
+
 Here are the Advanced options specific to mega (Mega).
-.SS --mega-debug
-.PP
+
+#### --mega-debug
+
 Output more debug from Mega.
-.PP
+
 If this flag is set (along with -vv) it will print further debugging
 information from the mega backend.
-.PP
+
 Properties:
-.IP \[bu] 2
-Config: debug
-.IP \[bu] 2
-Env Var: RCLONE_MEGA_DEBUG
-.IP \[bu] 2
-Type: bool
-.IP \[bu] 2
-Default: false
-.SS --mega-hard-delete
-.PP
+
+- Config: debug
+- Env Var: RCLONE_MEGA_DEBUG
+- Type: bool
+- Default: false
+
+#### --mega-hard-delete
+
 Delete files permanently rather than putting them into the trash.
-.PP
+
 Normally the mega backend will put all deletions into the trash rather
-than permanently deleting them.
If you specify this then rclone will +permanently delete objects instead. + Properties: -.IP \[bu] 2 -Config: hard_delete -.IP \[bu] 2 -Env Var: RCLONE_MEGA_HARD_DELETE -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --mega-use-https -.PP + +- Config: hard_delete +- Env Var: RCLONE_MEGA_HARD_DELETE +- Type: bool +- Default: false + +#### --mega-use-https + Use HTTPS for transfers. -.PP + MEGA uses plain text HTTP connections by default. -Some ISPs throttle HTTP connections, this causes transfers to become -very slow. +Some ISPs throttle HTTP connections, this causes transfers to become very slow. Enabling this will force MEGA to use HTTPS for all transfers. -HTTPS is normally not necessary since all data is already encrypted -anyway. +HTTPS is normally not necessary since all data is already encrypted anyway. Enabling it will increase CPU usage and add network overhead. -.PP + Properties: -.IP \[bu] 2 -Config: use_https -.IP \[bu] 2 -Env Var: RCLONE_MEGA_USE_HTTPS -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --mega-encoding -.PP + +- Config: use_https +- Env Var: RCLONE_MEGA_USE_HTTPS +- Type: bool +- Default: false + +#### --mega-encoding + The encoding for the backend. -.PP -See the encoding section in the -overview (https://rclone.org/overview/#encoding) for more info. -.PP + +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + Properties: -.IP \[bu] 2 -Config: encoding -.IP \[bu] 2 -Env Var: RCLONE_MEGA_ENCODING -.IP \[bu] 2 -Type: MultiEncoder -.IP \[bu] 2 -Default: Slash,InvalidUtf8,Dot -.SS Limitations -.PP -This backend uses the go-mega go -library (https://github.com/t3rm1n4l/go-mega) which is an opensource go -library implementing the Mega API. -There doesn\[aq]t appear to be any documentation for the mega protocol -beyond the mega C++ SDK (https://github.com/meganz/sdk) source code so -there are likely quite a few errors still remaining in this library. -.PP + +- Config: encoding +- Env Var: RCLONE_MEGA_ENCODING +- Type: MultiEncoder +- Default: Slash,InvalidUtf8,Dot + + + +### Process \[ga]killed\[ga] + +On accounts with large files or something else, memory usage can significantly increase when executing list/sync instructions. When running on cloud providers (like AWS with EC2), check if the instance type has sufficient memory/CPU to execute the commands. Use the resource monitoring tools to inspect after sending the commands. Look [at this issue](https://forum.rclone.org/t/rclone-with-mega-appears-to-work-only-in-some-accounts/40233/4). + +## Limitations + +This backend uses the [go-mega go library](https://github.com/t3rm1n4l/go-mega) which is an opensource +go library implementing the Mega API. There doesn\[aq]t appear to be any +documentation for the mega protocol beyond the [mega C++ SDK](https://github.com/meganz/sdk) source code +so there are likely quite a few errors still remaining in this library. + Mega allows duplicate files which may confuse rclone. -.SH Memory -.PP -The memory backend is an in RAM backend. -It does not persist its data - use the local backend for that. -.PP -The memory backend behaves like a bucket-based remote (e.g. -like s3). -Because it has no parameters you can just use it with the -\f[C]:memory:\f[R] remote name. -.SS Configuration -.PP -You can configure it as a remote like this with \f[C]rclone config\f[R] -too if you want to: -.IP -.nf -\f[C] -No remotes found, make a new one? 
-n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -name> remote -Type of storage to configure. -Enter a string value. Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value -[snip] -XX / Memory - \[rs] \[dq]memory\[dq] -[snip] -Storage> memory -** See help for memory backend at: https://rclone.org/memory/ ** -Remote config +# Memory --------------------- -[remote] -type = memory --------------------- -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y +The memory backend is an in RAM backend. It does not persist its +data - use the local backend for that. + +The memory backend behaves like a bucket-based remote (e.g. like +s3). Because it has no parameters you can just use it with the +\[ga]:memory:\[ga] remote name. + +## Configuration + +You can configure it as a remote like this with \[ga]rclone config\[ga] too if +you want to: \f[R] .fi .PP +No remotes found, make a new one? +n) New remote s) Set configuration password q) Quit config n/s/q> n +name> remote Type of storage to configure. +Enter a string value. +Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value [snip] XX / Memory +\ \[dq]memory\[dq] [snip] Storage> memory ** See help for memory backend +at: https://rclone.org/memory/ ** +.PP +Remote config +.PP +.TS +tab(@); +l. +T{ +[remote] +T} +T{ +type = memory +T} +.TE +.IP "y)" 3 +Yes this is OK (default) +.IP "z)" 3 +Edit this remote +.IP "a)" 3 +Delete this remote y/e/d> y +.IP +.nf +\f[C] Because the memory backend isn\[aq]t persistent it is most useful for testing or with an rclone server or rclone mount, e.g. -.IP -.nf -\f[C] -rclone mount :memory: /mnt/tmp -rclone serve webdav :memory: -rclone serve sftp :memory: + + rclone mount :memory: /mnt/tmp + rclone serve webdav :memory: + rclone serve sftp :memory: + +### Modified time and hashes + +The memory backend supports MD5 hashes and modification times accurate to 1 nS. + +### Restricted filename characters + +The memory backend replaces the [default restricted characters +set](https://rclone.org/overview/#restricted-characters). + + + + +# Akamai NetStorage + +Paths are specified as \[ga]remote:\[ga] +You may put subdirectories in too, e.g. \[ga]remote:/path/to/dir\[ga]. +If you have a CP code you can use that as the folder after the domain such as \[rs]\[rs]/\[rs]\[rs]/\[rs]. + +For example, this is commonly configured with or without a CP code: +* **With a CP code**. \[ga][your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/\[ga] +* **Without a CP code**. \[ga][your-domain-prefix]-nsu.akamaihd.net\[ga] + + +See all buckets + rclone lsd remote: +The initial setup for Netstorage involves getting an account and secret. Use \[ga]rclone config\[ga] to walk you through the setup process. + +## Configuration + +Here\[aq]s an example of how to make a remote called \[ga]ns1\[ga]. + +1. To begin the interactive configuration process, enter this command: \f[R] .fi -.SS Modified time and hashes .PP -The memory backend supports MD5 hashes and modification times accurate -to 1 nS. -.SS Restricted filename characters -.PP -The memory backend replaces the default restricted characters -set (https://rclone.org/overview/#restricted-characters). -.SH Akamai NetStorage -.PP -Paths are specified as \f[C]remote:\f[R] You may put subdirectories in -too, e.g. -\f[C]remote:/path/to/dir\f[R]. -If you have a CP code you can use that as the folder after the domain -such as //. 
-.PP -For example, this is commonly configured with or without a CP code: * -\f[B]With a CP code\f[R]. -\f[C][your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/\f[R] * -\f[B]Without a CP code\f[R]. -\f[C][your-domain-prefix]-nsu.akamaihd.net\f[R] -.PP -See all buckets rclone lsd remote: The initial setup for Netstorage -involves getting an account and secret. -Use \f[C]rclone config\f[R] to walk you through the setup process. -.SS Configuration -.PP -Here\[aq]s an example of how to make a remote called \f[C]ns1\f[R]. -.IP "1." 3 -To begin the interactive configuration process, enter this command: -.IP -.nf -\f[C] rclone config -\f[R] -.fi -.IP "2." 3 -Type \f[C]n\f[R] to create a new remote. .IP .nf \f[C] -n) New remote -d) Delete remote -q) Quit config -e/n/d/q> n +2. Type \[ga]n\[ga] to create a new remote. \f[R] .fi -.IP "3." 3 -For this example, enter \f[C]ns1\f[R] when you reach the name> prompt. +.IP "n)" 3 +New remote +.IP "o)" 3 +Delete remote +.IP "p)" 3 +Quit config e/n/d/q> n .IP .nf \f[C] +3. For this example, enter \[ga]ns1\[ga] when you reach the name> prompt. +\f[R] +.fi +.PP name> ns1 -\f[R] -.fi -.IP "4." 3 -Enter \f[C]netstorage\f[R] as the type of storage to configure. .IP .nf \f[C] +4. Enter \[ga]netstorage\[ga] as the type of storage to configure. +\f[R] +.fi +.PP Type of storage to configure. -Enter a string value. Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value -XX / NetStorage - \[rs] \[dq]netstorage\[dq] -Storage> netstorage -\f[R] -.fi -.IP "5." 3 -Select between the HTTP or HTTPS protocol. -Most users should choose HTTPS, which is the default. -HTTP is provided primarily for debugging purposes. +Enter a string value. +Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value XX / NetStorage +\ \[dq]netstorage\[dq] Storage> netstorage .IP .nf \f[C] -Enter a string value. Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value - 1 / HTTP protocol - \[rs] \[dq]http\[dq] - 2 / HTTPS protocol - \[rs] \[dq]https\[dq] -protocol> 1 +5. Select between the HTTP or HTTPS protocol. Most users should choose HTTPS, which is the default. HTTP is provided primarily for debugging purposes. + \f[R] .fi -.IP "6." 3 -Specify your NetStorage host, CP code, and any necessary content paths -using this format: \f[C]///\f[R] +.PP +Enter a string value. +Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value 1 / HTTP protocol +\ \[dq]http\[dq] 2 / HTTPS protocol \ \[dq]https\[dq] protocol> 1 .IP .nf \f[C] -Enter a string value. Press Enter for the default (\[dq]\[dq]). +6. Specify your NetStorage host, CP code, and any necessary content paths using this format: \[ga]///\[ga] +\f[R] +.fi +.PP +Enter a string value. +Press Enter for the default (\[dq]\[dq]). host> baseball-nsu.akamaihd.net/123456/content/ -\f[R] -.fi -.IP "7." 3 -Set the netstorage account name .IP .nf \f[C] -Enter a string value. Press Enter for the default (\[dq]\[dq]). +7. Set the netstorage account name +\f[R] +.fi +.PP +Enter a string value. +Press Enter for the default (\[dq]\[dq]). account> username -\f[R] -.fi -.IP "8." 3 -Set the Netstorage account secret/G2O key which will be used for -authentication purposes. -Select the \f[C]y\f[R] option to set your own password then enter your -secret. -Note: The secret is stored in the \f[C]rclone.conf\f[R] file with -hex-encoded encryption. 
.IP .nf \f[C] -y) Yes type in my own password -g) Generate random password -y/g> y -Enter the password: -password: -Confirm the password: -password: +8. Set the Netstorage account secret/G2O key which will be used for authentication purposes. Select the \[ga]y\[ga] option to set your own password then enter your secret. +Note: The secret is stored in the \[ga]rclone.conf\[ga] file with hex-encoded encryption. \f[R] .fi -.IP "9." 3 -View the summary and confirm your remote configuration. +.IP "y)" 3 +Yes type in my own password +.IP "z)" 3 +Generate random password y/g> y Enter the password: password: Confirm +the password: password: .IP .nf \f[C] -[ns1] -type = netstorage -protocol = http -host = baseball-nsu.akamaihd.net/123456/content/ -account = username -secret = *** ENCRYPTED *** --------------------- -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y +9. View the summary and confirm your remote configuration. \f[R] .fi .PP -This remote is called \f[C]ns1\f[R] and can now be used. -.SS Example operations -.PP -Get started with rclone and NetStorage with these examples. -For additional rclone commands, visit https://rclone.org/commands/. -.SS See contents of a directory in your project +[ns1] type = netstorage protocol = http host = +baseball-nsu.akamaihd.net/123456/content/ account = username secret = +*** ENCRYPTED *** -------------------- y) Yes this is OK (default) e) +Edit this remote d) Delete this remote y/e/d> y .IP .nf \f[C] -rclone lsd ns1:/974012/testing/ -\f[R] -.fi -.SS Sync the contents local with remote -.IP -.nf -\f[C] -rclone sync . ns1:/974012/testing/ -\f[R] -.fi -.SS Upload local content to remote -.IP -.nf -\f[C] -rclone copy notes.txt ns1:/974012/testing/ -\f[R] -.fi -.SS Delete content on remote -.IP -.nf -\f[C] -rclone delete ns1:/974012/testing/notes.txt -\f[R] -.fi -.SS Move or copy content between CP codes. -.PP -Your credentials must have access to two CP codes on the same remote. -You can\[aq]t perform operations between different remotes. -.IP -.nf -\f[C] -rclone move ns1:/974012/testing/notes.txt ns1:/974450/testing2/ -\f[R] -.fi -.SS Features -.SS Symlink Support -.PP -The Netstorage backend changes the rclone \f[C]--links, -l\f[R] -behavior. -When uploading, instead of creating the .rclonelink file, use the -\[dq]symlink\[dq] API in order to create the corresponding symlink on -the remote. -The .rclonelink file will not be created, the upload will be intercepted -and only the symlink file that matches the source file name with no -suffix will be created on the remote. -.PP -This will effectively allow commands like copy/copyto, move/moveto and -sync to upload from local to remote and download from remote to local -directories with symlinks. -Due to internal rclone limitations, it is not possible to upload an -individual symlink file to any remote backend. -You can always use the \[dq]backend symlink\[dq] command to create a -symlink on the NetStorage server, refer to \[dq]symlink\[dq] section -below. -.PP -Individual symlink files on the remote can be used with the commands -like \[dq]cat\[dq] to print the destination name, or \[dq]delete\[dq] to -delete symlink, or copy, copy/to and move/moveto to download from the -remote to local. -Note: individual symlink files on the remote should be specified -including the suffix .rclonelink. 
-.PP -\f[B]Note\f[R]: No file with the suffix .rclonelink should ever exist on -the server since it is not possible to actually upload/create a file -with .rclonelink suffix with rclone, it can only exist if it is manually -created through a non-rclone method on the remote. -.SS Implicit vs. Explicit Directories -.PP +This remote is called \[ga]ns1\[ga] and can now be used. + +## Example operations + +Get started with rclone and NetStorage with these examples. For additional rclone commands, visit https://rclone.org/commands/. + +### See contents of a directory in your project + + rclone lsd ns1:/974012/testing/ + +### Sync the contents local with remote + + rclone sync . ns1:/974012/testing/ + +### Upload local content to remote + rclone copy notes.txt ns1:/974012/testing/ + +### Delete content on remote + rclone delete ns1:/974012/testing/notes.txt + +### Move or copy content between CP codes. + +Your credentials must have access to two CP codes on the same remote. You can\[aq]t perform operations between different remotes. + + rclone move ns1:/974012/testing/notes.txt ns1:/974450/testing2/ + +## Features + +### Symlink Support + +The Netstorage backend changes the rclone \[ga]--links, -l\[ga] behavior. When uploading, instead of creating the .rclonelink file, use the \[dq]symlink\[dq] API in order to create the corresponding symlink on the remote. The .rclonelink file will not be created, the upload will be intercepted and only the symlink file that matches the source file name with no suffix will be created on the remote. + +This will effectively allow commands like copy/copyto, move/moveto and sync to upload from local to remote and download from remote to local directories with symlinks. Due to internal rclone limitations, it is not possible to upload an individual symlink file to any remote backend. You can always use the \[dq]backend symlink\[dq] command to create a symlink on the NetStorage server, refer to \[dq]symlink\[dq] section below. + +Individual symlink files on the remote can be used with the commands like \[dq]cat\[dq] to print the destination name, or \[dq]delete\[dq] to delete symlink, or copy, copy/to and move/moveto to download from the remote to local. Note: individual symlink files on the remote should be specified including the suffix .rclonelink. + +**Note**: No file with the suffix .rclonelink should ever exist on the server since it is not possible to actually upload/create a file with .rclonelink suffix with rclone, it can only exist if it is manually created through a non-rclone method on the remote. + +### Implicit vs. Explicit Directories + With NetStorage, directories can exist in one of two forms: -.IP "1." 3 -\f[B]Explicit Directory\f[R]. -This is an actual, physical directory that you have created in a storage -group. -.IP "2." 3 -\f[B]Implicit Directory\f[R]. -This refers to a directory within a path that has not been physically -created. -For example, during upload of a file, nonexistent subdirectories can be -specified in the target path. -NetStorage creates these as \[dq]implicit.\[dq] While the directories -aren\[aq]t physically created, they exist implicitly and the noted path -is connected with the uploaded file. -.PP -Rclone will intercept all file uploads and mkdir commands for the -NetStorage remote and will explicitly issue the mkdir command for each -directory in the uploading path. -This will help with the interoperability with the other Akamai services -such as SFTP and the Content Management Shell (CMShell). 
-Rclone will not guarantee correctness of operations with implicit -directories which might have been created as a result of using an upload -API directly. -.SS \f[C]--fast-list\f[R] / ListR support -.PP -NetStorage remote supports the ListR feature by using the \[dq]list\[dq] -NetStorage API action to return a lexicographical list of all objects -within the specified CP code, recursing into subdirectories as -they\[aq]re encountered. -.IP \[bu] 2 -\f[B]Rclone will use the ListR method for some commands by default\f[R]. -Commands such as \f[C]lsf -R\f[R] will use ListR by default. -To disable this, include the \f[C]--disable listR\f[R] option to use the -non-recursive method of listing objects. -.IP \[bu] 2 -\f[B]Rclone will not use the ListR method for some commands\f[R]. -Commands such as \f[C]sync\f[R] don\[aq]t use ListR by default. -To force using the ListR method, include the \f[C]--fast-list\f[R] -option. -.PP -There are pros and cons of using the ListR method, refer to rclone -documentation (https://rclone.org/docs/#fast-list). -In general, the sync command over an existing deep tree on the remote -will run faster with the \[dq]--fast-list\[dq] flag but with extra -memory usage as a side effect. -It might also result in higher CPU utilization but the whole task can be -completed faster. -.PP -\f[B]Note\f[R]: There is a known limitation that \[dq]lsf -R\[dq] will -display number of files in the directory and directory size as -1 when -ListR method is used. -The workaround is to pass \[dq]--disable listR\[dq] flag if these -numbers are important in the output. -.SS Purge -.PP -NetStorage remote supports the purge feature by using the -\[dq]quick-delete\[dq] NetStorage API action. -The quick-delete action is disabled by default for security reasons and -can be enabled for the account through the Akamai portal. -Rclone will first try to use quick-delete action for the purge command -and if this functionality is disabled then will fall back to a standard -delete method. -.PP -\f[B]Note\f[R]: Read the NetStorage Usage -API (https://learn.akamai.com/en-us/webhelp/netstorage/netstorage-http-api-developer-guide/GUID-15836617-9F50-405A-833C-EA2556756A30.html) -for considerations when using \[dq]quick-delete\[dq]. -In general, using quick-delete method will not delete the tree -immediately and objects targeted for quick-delete may still be -accessible. -.SS Standard options -.PP -Here are the Standard options specific to netstorage (Akamai -NetStorage). -.SS --netstorage-host -.PP + +1. **Explicit Directory**. This is an actual, physical directory that you have created in a storage group. +2. **Implicit Directory**. This refers to a directory within a path that has not been physically created. For example, during upload of a file, nonexistent subdirectories can be specified in the target path. NetStorage creates these as \[dq]implicit.\[dq] While the directories aren\[aq]t physically created, they exist implicitly and the noted path is connected with the uploaded file. + +Rclone will intercept all file uploads and mkdir commands for the NetStorage remote and will explicitly issue the mkdir command for each directory in the uploading path. This will help with the interoperability with the other Akamai services such as SFTP and the Content Management Shell (CMShell). Rclone will not guarantee correctness of operations with implicit directories which might have been created as a result of using an upload API directly. 
+ +### \[ga]--fast-list\[ga] / ListR support + +NetStorage remote supports the ListR feature by using the \[dq]list\[dq] NetStorage API action to return a lexicographical list of all objects within the specified CP code, recursing into subdirectories as they\[aq]re encountered. + +* **Rclone will use the ListR method for some commands by default**. Commands such as \[ga]lsf -R\[ga] will use ListR by default. To disable this, include the \[ga]--disable listR\[ga] option to use the non-recursive method of listing objects. + +* **Rclone will not use the ListR method for some commands**. Commands such as \[ga]sync\[ga] don\[aq]t use ListR by default. To force using the ListR method, include the \[ga]--fast-list\[ga] option. + +There are pros and cons of using the ListR method, refer to [rclone documentation](https://rclone.org/docs/#fast-list). In general, the sync command over an existing deep tree on the remote will run faster with the \[dq]--fast-list\[dq] flag but with extra memory usage as a side effect. It might also result in higher CPU utilization but the whole task can be completed faster. + +**Note**: There is a known limitation that \[dq]lsf -R\[dq] will display number of files in the directory and directory size as -1 when ListR method is used. The workaround is to pass \[dq]--disable listR\[dq] flag if these numbers are important in the output. + +### Purge + +NetStorage remote supports the purge feature by using the \[dq]quick-delete\[dq] NetStorage API action. The quick-delete action is disabled by default for security reasons and can be enabled for the account through the Akamai portal. Rclone will first try to use quick-delete action for the purge command and if this functionality is disabled then will fall back to a standard delete method. + +**Note**: Read the [NetStorage Usage API](https://learn.akamai.com/en-us/webhelp/netstorage/netstorage-http-api-developer-guide/GUID-15836617-9F50-405A-833C-EA2556756A30.html) for considerations when using \[dq]quick-delete\[dq]. In general, using quick-delete method will not delete the tree immediately and objects targeted for quick-delete may still be accessible. + + +### Standard options + +Here are the Standard options specific to netstorage (Akamai NetStorage). + +#### --netstorage-host + Domain+path of NetStorage host to connect to. -.PP -Format should be \f[C]/\f[R] -.PP + +Format should be \[ga]/\[ga] + Properties: -.IP \[bu] 2 -Config: host -.IP \[bu] 2 -Env Var: RCLONE_NETSTORAGE_HOST -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.SS --netstorage-account -.PP + +- Config: host +- Env Var: RCLONE_NETSTORAGE_HOST +- Type: string +- Required: true + +#### --netstorage-account + Set the NetStorage account name -.PP + Properties: -.IP \[bu] 2 -Config: account -.IP \[bu] 2 -Env Var: RCLONE_NETSTORAGE_ACCOUNT -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.SS --netstorage-secret -.PP + +- Config: account +- Env Var: RCLONE_NETSTORAGE_ACCOUNT +- Type: string +- Required: true + +#### --netstorage-secret + Set the NetStorage account secret/G2O key for authentication. -.PP -Please choose the \[aq]y\[aq] option to set your own password then enter -your secret. -.PP -\f[B]NB\f[R] Input to this must be obscured - see rclone -obscure (https://rclone.org/commands/rclone_obscure/). -.PP + +Please choose the \[aq]y\[aq] option to set your own password then enter your secret. + +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). 
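+
+For example (a hedged illustration; \[ga]YOUR_G2O_KEY\[ga] is a
+placeholder), you can generate the obscured form of the secret with:
+
+    # YOUR_G2O_KEY is a placeholder for the real account secret
+    rclone obscure YOUR_G2O_KEY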
+
 Properties:
-.IP \[bu] 2
-Config: secret
-.IP \[bu] 2
-Env Var: RCLONE_NETSTORAGE_SECRET
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: true
-.SS Advanced options
-.PP
-Here are the Advanced options specific to netstorage (Akamai
-NetStorage).
-.SS --netstorage-protocol
-.PP
+
+- Config: secret
+- Env Var: RCLONE_NETSTORAGE_SECRET
+- Type: string
+- Required: true
+
+### Advanced options
+
+Here are the Advanced options specific to netstorage (Akamai NetStorage).
+
+#### --netstorage-protocol
+
 Select between HTTP or HTTPS protocol.
-.PP
+
 Most users should choose HTTPS, which is the default.
 HTTP is provided primarily for debugging purposes.
-.PP
+
 Properties:
-.IP \[bu] 2
-Config: protocol
-.IP \[bu] 2
-Env Var: RCLONE_NETSTORAGE_PROTOCOL
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Default: \[dq]https\[dq]
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]http\[dq]
-.RS 2
-.IP \[bu] 2
-HTTP protocol
-.RE
-.IP \[bu] 2
-\[dq]https\[dq]
-.RS 2
-.IP \[bu] 2
-HTTPS protocol
-.RE
-.RE
-.SS Backend commands
-.PP
+
+- Config: protocol
+- Env Var: RCLONE_NETSTORAGE_PROTOCOL
+- Type: string
+- Default: \[dq]https\[dq]
+- Examples:
+    - \[dq]http\[dq]
+        - HTTP protocol
+    - \[dq]https\[dq]
+        - HTTPS protocol
+
+## Backend commands
+
 Here are the commands specific to the netstorage backend.
-.PP
+
 Run them with
-.IP
-.nf
-\f[C]
-rclone backend COMMAND remote:
-\f[R]
-.fi
-.PP
+
+    rclone backend COMMAND remote:
+
 The help below will explain what arguments each command takes.
-.PP
-See the backend (https://rclone.org/commands/rclone_backend/) command
-for more info on how to pass options and arguments.
-.PP
+
+See the [backend](https://rclone.org/commands/rclone_backend/) command for more
+info on how to pass options and arguments.
+
 These can be run on a running backend using the rc command
-backend/command (https://rclone.org/rc/#backend-command).
-.SS du
-.PP
+[backend/command](https://rclone.org/rc/#backend-command).
+
+### du
+
 Return disk usage information for a specified directory
-.IP
-.nf
-\f[C]
-rclone backend du remote: [options] [+]
-\f[R]
-.fi
-.PP
-The usage information returned, includes the targeted directory as well
-as all files stored in any sub-directories that may exist.
-.SS symlink
-.PP
+
+    rclone backend du remote: [options] [+]
+
+The usage information returned includes the targeted directory as well as all
+files stored in any sub-directories that may exist.
+
+### symlink
+
 You can create a symbolic link in ObjectStore with the symlink action.
-.IP
-.nf
-\f[C]
-rclone backend symlink remote: [options] [+]
-\f[R]
-.fi
-.PP
-The desired path location (including applicable sub-directories) ending
-in the object that will be the target of the symlink (for example,
-/links/mylink).
+
+    rclone backend symlink remote: [options] [+]
+
+Provide the desired path location (including applicable sub-directories) ending in
+the object that will be the target of the symlink (for example, /links/mylink).
 Include the file extension for the object, if applicable.
-\f[C]rclone backend symlink \f[R]
-.SH Microsoft Azure Blob Storage
-.PP
-Paths are specified as \f[C]remote:container\f[R] (or \f[C]remote:\f[R]
-for the \f[C]lsd\f[R] command.) You may put subdirectories in too, e.g.
+\[ga]rclone backend symlink \[ga]
+
+
+
+# Microsoft Azure Blob Storage
+
+Paths are specified as \[ga]remote:container\[ga] (or \[ga]remote:\[ga] for the \[ga]lsd\[ga]
+command.) You may put subdirectories in too, e.g.
+\[ga]remote:container/path/to/dir\[ga]. + +## Configuration + Here is an example of making a Microsoft Azure Blob Storage -configuration. -For a remote called \f[C]remote\f[R]. -First run: -.IP -.nf -\f[C] - rclone config -\f[R] -.fi -.PP +configuration. For a remote called \[ga]remote\[ga]. First run: + + rclone config + This will guide you through an interactive setup process: -.IP -.nf -\f[C] +\f[R] +.fi +.PP No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -name> remote -Type of storage to configure. -Choose a number from below, or type in your own value -[snip] -XX / Microsoft Azure Blob Storage - \[rs] \[dq]azureblob\[dq] -[snip] -Storage> azureblob -Storage Account Name -account> account_name -Storage Account Key -key> base64encodedkey== -Endpoint for the service - leave blank normally. -endpoint> -Remote config --------------------- -[remote] -account = account_name -key = base64encodedkey== -endpoint = --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -\f[R] -.fi -.PP +n) New remote s) Set configuration password q) Quit config n/s/q> n +name> remote Type of storage to configure. +Choose a number from below, or type in your own value [snip] XX / +Microsoft Azure Blob Storage \ \[dq]azureblob\[dq] [snip] Storage> +azureblob Storage Account Name account> account_name Storage Account Key +key> base64encodedkey== Endpoint for the service - leave blank normally. +endpoint> Remote config -------------------- [remote] account = +account_name key = base64encodedkey== endpoint = -------------------- y) +Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y +.IP +.nf +\f[C] See all containers -.IP -.nf -\f[C] -rclone lsd remote: -\f[R] -.fi -.PP + + rclone lsd remote: + Make a new container -.IP -.nf -\f[C] -rclone mkdir remote:container -\f[R] -.fi -.PP + + rclone mkdir remote:container + List the contents of a container -.IP -.nf -\f[C] -rclone ls remote:container -\f[R] -.fi -.PP -Sync \f[C]/home/local/directory\f[R] to the remote container, deleting -any excess files in the container. -.IP -.nf -\f[C] -rclone sync --interactive /home/local/directory remote:container -\f[R] -.fi -.SS --fast-list -.PP -This remote supports \f[C]--fast-list\f[R] which allows you to use fewer -transactions in exchange for more memory. -See the rclone docs (https://rclone.org/docs/#fast-list) for more -details. -.SS Modified time -.PP -The modified time is stored as metadata on the object with the -\f[C]mtime\f[R] key. -It is stored using RFC3339 Format time with nanosecond precision. -The metadata is supplied during directory listings so there is no -performance overhead to using it. -.PP -If you wish to use the Azure standard \f[C]LastModified\f[R] time stored -on the object as the modified time, then use the -\f[C]--use-server-modtime\f[R] flag. -Note that rclone can\[aq]t set \f[C]LastModified\f[R], so using the -\f[C]--update\f[R] flag when syncing is recommended if using -\f[C]--use-server-modtime\f[R]. -.SS Performance -.PP + + rclone ls remote:container + +Sync \[ga]/home/local/directory\[ga] to the remote container, deleting any excess +files in the container. + + rclone sync --interactive /home/local/directory remote:container + +### --fast-list + +This remote supports \[ga]--fast-list\[ga] which allows you to use fewer +transactions in exchange for more memory. See the [rclone +docs](https://rclone.org/docs/#fast-list) for more details. 
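+
+As a hedged example reusing the remote and container names from the
+examples above, a large sync can trade extra memory for fewer
+transactions like this:
+
+    # remote:container as configured in this section
+    rclone sync --fast-list --interactive /home/local/directory remote:container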
+
+### Modified time
+
+The modified time is stored as metadata on the object with the \[ga]mtime\[ga]
+key. It is stored using RFC3339 format time with nanosecond
+precision. The metadata is supplied during directory listings so
+there is no performance overhead to using it.
+
+If you wish to use the Azure standard \[ga]LastModified\[ga] time stored on
+the object as the modified time, then use the \[ga]--use-server-modtime\[ga]
+flag. Note that rclone can\[aq]t set \[ga]LastModified\[ga], so using the
+\[ga]--update\[ga] flag when syncing is recommended if using
+\[ga]--use-server-modtime\[ga].
+
+### Performance
+
 When uploading large files, increasing the value of
-\f[C]--azureblob-upload-concurrency\f[R] will increase performance at
-the cost of using more memory.
-The default of 16 is set quite conservatively to use less memory.
-It maybe be necessary raise it to 64 or higher to fully utilize a 1
-GBit/s link with a single file transfer.
-.SS Restricted filename characters
-.PP
-In addition to the default restricted characters
-set (https://rclone.org/overview/#restricted-characters) the following
-characters are also replaced:
-.PP
-.TS
-tab(@);
-l c c.
-T{
-Character
-T}@T{
-Value
-T}@T{
-Replacement
-T}
-_
-T{
-/
-T}@T{
-0x2F
-T}@T{
-\[uFF0F]
-T}
-T{
-\[rs]
-T}@T{
-0x5C
-T}@T{
-\[uFF3C]
-T}
-.TE
-.PP
+\[ga]--azureblob-upload-concurrency\[ga] will increase performance at the cost
+of using more memory. The default of 16 is set quite conservatively to
+use less memory. It may be necessary to raise it to 64 or higher to
+fully utilize a 1 GBit/s link with a single file transfer.
+
+### Restricted filename characters
+
+In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
+the following characters are also replaced:
+
+| Character | Value | Replacement |
+| --------- |:-----:|:-----------:|
+| / | 0x2F | \[uFF0F] |
+| \[rs] | 0x5C | \[uFF3C] |
+
 File names can also not end with the following characters.
 These only get replaced if they are the last character in the name:
-.PP
-.TS
-tab(@);
-l c c.
-T{
-Character
-T}@T{
-Value
-T}@T{
-Replacement
-T}
-_
-T{
-\&.
-T}@T{
-0x2E
-T}@T{
-\[uFF0E]
-T}
-.TE
-.PP
-Invalid UTF-8 bytes will also be
-replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t
-be used in JSON strings.
-.SS Hashes
-.PP
-MD5 hashes are stored with blobs.
-However blobs that were uploaded in chunks only have an MD5 if the
-source remote was capable of MD5 hashes, e.g.
-the local disk.
-.SS Authentication
-.PP
+
+| Character | Value | Replacement |
+| --------- |:-----:|:-----------:|
+| . | 0x2E | \[uFF0E] |
+
+Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+as they can\[aq]t be used in JSON strings.
+
+### Hashes
+
+MD5 hashes are stored with blobs. However blobs that were uploaded in
+chunks only have an MD5 if the source remote was capable of MD5
+hashes, e.g. the local disk.
+
+### Authentication {#authentication}
+
 There are a number of ways of supplying credentials for Azure Blob
-Storage.
-Rclone tries them in the order of the sections below.
-.SS Env Auth
-.PP
-If the \f[C]env_auth\f[R] config parameter is \f[C]true\f[R] then rclone
-will pull credentials from the environment or runtime.
-.PP
+Storage. Rclone tries them in the order of the sections below.
+
+#### Env Auth
+
+If the \[ga]env_auth\[ga] config parameter is \[ga]true\[ga] then rclone will pull
+credentials from the environment or runtime.
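+
+For instance (a hedged sketch; \[ga]ACCOUNT\[ga] is a placeholder, and the
+same form appears in the Azure CLI subsection below):
+
+    # ACCOUNT is a placeholder storage account name
+    rclone lsd --azureblob-env-auth --azureblob-account=ACCOUNT :azureblob: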
+ It tries these authentication methods in this order: -.IP "1." 3 -Environment Variables -.IP "2." 3 -Managed Service Identity Credentials -.IP "3." 3 -Azure CLI credentials (as used by the az tool) -.PP + +1. Environment Variables +2. Managed Service Identity Credentials +3. Azure CLI credentials (as used by the az tool) + These are described in the following sections -.SS Env Auth: 1. Environment Variables -.PP -If \f[C]env_auth\f[R] is set and environment variables are present -rclone authenticates a service principal with a secret or certificate, -or a user with a password, depending on which environment variable are -set. + +##### Env Auth: 1. Environment Variables + +If \[ga]env_auth\[ga] is set and environment variables are present rclone +authenticates a service principal with a secret or certificate, or a +user with a password, depending on which environment variable are set. It reads configuration from these variables, in the following order: -.IP "1." 3 -Service principal with client secret -.RS 4 -.IP \[bu] 2 -\f[C]AZURE_TENANT_ID\f[R]: ID of the service principal\[aq]s tenant. -Also called its \[dq]directory\[dq] ID. -.IP \[bu] 2 -\f[C]AZURE_CLIENT_ID\f[R]: the service principal\[aq]s client ID -.IP \[bu] 2 -\f[C]AZURE_CLIENT_SECRET\f[R]: one of the service principal\[aq]s client -secrets -.RE -.IP "2." 3 -Service principal with certificate -.RS 4 -.IP \[bu] 2 -\f[C]AZURE_TENANT_ID\f[R]: ID of the service principal\[aq]s tenant. -Also called its \[dq]directory\[dq] ID. -.IP \[bu] 2 -\f[C]AZURE_CLIENT_ID\f[R]: the service principal\[aq]s client ID -.IP \[bu] 2 -\f[C]AZURE_CLIENT_CERTIFICATE_PATH\f[R]: path to a PEM or PKCS12 -certificate file including the private key. -.IP \[bu] 2 -\f[C]AZURE_CLIENT_CERTIFICATE_PASSWORD\f[R]: (optional) password for the -certificate file. -.IP \[bu] 2 -\f[C]AZURE_CLIENT_SEND_CERTIFICATE_CHAIN\f[R]: (optional) Specifies -whether an authentication request will include an x5c header to support -subject name / issuer based authentication. -When set to \[dq]true\[dq] or \[dq]1\[dq], authentication requests -include the x5c header. -.RE -.IP "3." 3 -User with username and password -.RS 4 -.IP \[bu] 2 -\f[C]AZURE_TENANT_ID\f[R]: (optional) tenant to authenticate in. -Defaults to \[dq]organizations\[dq]. -.IP \[bu] 2 -\f[C]AZURE_CLIENT_ID\f[R]: client ID of the application the user will -authenticate to -.IP \[bu] 2 -\f[C]AZURE_USERNAME\f[R]: a username (usually an email address) -.IP \[bu] 2 -\f[C]AZURE_PASSWORD\f[R]: the user\[aq]s password -.RE -.IP "4." 3 -Workload Identity -.RS 4 -.IP \[bu] 2 -\f[C]AZURE_TENANT_ID\f[R]: Tenant to authenticate in. -.IP \[bu] 2 -\f[C]AZURE_CLIENT_ID\f[R]: Client ID of the application the user will -authenticate to. -.IP \[bu] 2 -\f[C]AZURE_FEDERATED_TOKEN_FILE\f[R]: Path to projected service account -token file. -.IP \[bu] 2 -\f[C]AZURE_AUTHORITY_HOST\f[R]: Authority of an Azure Active Directory -endpoint (default: login.microsoftonline.com). -.RE -.SS Env Auth: 2. Managed Service Identity Credentials -.PP -When using Managed Service Identity if the VM(SS) on which this program -is running has a system-assigned identity, it will be used by default. -If the resource has no system-assigned but exactly one user-assigned -identity, the user-assigned identity will be used by default. -.PP + +1. Service principal with client secret + - \[ga]AZURE_TENANT_ID\[ga]: ID of the service principal\[aq]s tenant. Also called its \[dq]directory\[dq] ID. 
+ - \[ga]AZURE_CLIENT_ID\[ga]: the service principal\[aq]s client ID
+ - \[ga]AZURE_CLIENT_SECRET\[ga]: one of the service principal\[aq]s client secrets
+2. Service principal with certificate
+ - \[ga]AZURE_TENANT_ID\[ga]: ID of the service principal\[aq]s tenant. Also called its \[dq]directory\[dq] ID.
+ - \[ga]AZURE_CLIENT_ID\[ga]: the service principal\[aq]s client ID
+ - \[ga]AZURE_CLIENT_CERTIFICATE_PATH\[ga]: path to a PEM or PKCS12 certificate file including the private key.
+ - \[ga]AZURE_CLIENT_CERTIFICATE_PASSWORD\[ga]: (optional) password for the certificate file.
+ - \[ga]AZURE_CLIENT_SEND_CERTIFICATE_CHAIN\[ga]: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to \[dq]true\[dq] or \[dq]1\[dq], authentication requests include the x5c header.
+3. User with username and password
+ - \[ga]AZURE_TENANT_ID\[ga]: (optional) tenant to authenticate in. Defaults to \[dq]organizations\[dq].
+ - \[ga]AZURE_CLIENT_ID\[ga]: client ID of the application the user will authenticate to
+ - \[ga]AZURE_USERNAME\[ga]: a username (usually an email address)
+ - \[ga]AZURE_PASSWORD\[ga]: the user\[aq]s password
+4. Workload Identity
+ - \[ga]AZURE_TENANT_ID\[ga]: Tenant to authenticate in.
+ - \[ga]AZURE_CLIENT_ID\[ga]: Client ID of the application the user will authenticate to.
+ - \[ga]AZURE_FEDERATED_TOKEN_FILE\[ga]: Path to projected service account token file.
+ - \[ga]AZURE_AUTHORITY_HOST\[ga]: Authority of an Azure Active Directory endpoint (default: login.microsoftonline.com).
+
+
+##### Env Auth: 2. Managed Service Identity Credentials
+
+When using Managed Service Identity, if the VM(SS) on which this
+program is running has a system-assigned identity, it will be used by
+default. If the resource has no system-assigned but exactly one
+user-assigned identity, the user-assigned identity will be used by
+default.
+
 If the resource has multiple user-assigned identities you will need to
-unset \f[C]env_auth\f[R] and set \f[C]use_msi\f[R] instead.
-See the \f[C]use_msi\f[R] section.
-.SS Env Auth: 3. Azure CLI credentials (as used by the az tool)
-.PP
-Credentials created with the \f[C]az\f[R] tool can be picked up using
-\f[C]env_auth\f[R].
-.PP
+unset \[ga]env_auth\[ga] and set \[ga]use_msi\[ga] instead. See the [\[ga]use_msi\[ga]
+section](#use_msi).
+
+##### Env Auth: 3. Azure CLI credentials (as used by the az tool)
+
+Credentials created with the \[ga]az\[ga] tool can be picked up using \[ga]env_auth\[ga].
+
 For example if you were to login with a service principal like this:
-.IP
-.nf
-\f[C]
-az login --service-principal -u XXX -p XXX --tenant XXX
-\f[R]
-.fi
-.PP
+
+    az login --service-principal -u XXX -p XXX --tenant XXX
+
 Then you could access rclone resources like this:
-.IP
-.nf
-\f[C]
-rclone lsf :azureblob,env_auth,account=ACCOUNT:CONTAINER
-\f[R]
-.fi
-.PP
+
+    rclone lsf :azureblob,env_auth,account=ACCOUNT:CONTAINER
+
 Or
-.IP
-.nf
-\f[C]
-rclone lsf --azureblob-env-auth --azureblob-account=ACCOUNT :azureblob:CONTAINER
-\f[R]
-.fi
-.PP
-Which is analogous to using the \f[C]az\f[R] tool:
-.IP
-.nf
-\f[C]
-az storage blob list --container-name CONTAINER --account-name ACCOUNT --auth-mode login
-\f[R]
-.fi
-.SS Account and Shared Key
-.PP
-This is the most straight forward and least flexible way.
-Just fill in the \f[C]account\f[R] and \f[C]key\f[R] lines and leave the
-rest blank.
+
+    rclone lsf --azureblob-env-auth --azureblob-account=ACCOUNT :azureblob:CONTAINER
+
+Which is analogous to using the \[ga]az\[ga] tool:
+
+    az storage blob list --container-name CONTAINER --account-name ACCOUNT --auth-mode login
+
+#### Account and Shared Key
+
+This is the most straightforward and least flexible way. Just fill
+in the \[ga]account\[ga] and \[ga]key\[ga] lines and leave the rest blank.
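+
+A minimal configuration sketch of this method (the remote name and
+credential values are placeholders matching the example configuration
+shown earlier in this section):
+
+    # placeholders only - substitute your own account and key
+    [remote]
+    type = azureblob
+    account = account_name
+    key = base64encodedkey==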
-.SS SAS URL -.PP + + rclone lsf --azureblob-env-auth --azureblob-account=ACCOUNT :azureblob:CONTAINER + +Which is analogous to using the \[ga]az\[ga] tool: + + az storage blob list --container-name CONTAINER --account-name ACCOUNT --auth-mode login + +#### Account and Shared Key + +This is the most straight forward and least flexible way. Just fill +in the \[ga]account\[ga] and \[ga]key\[ga] lines and leave the rest blank. + +#### SAS URL + This can be an account level SAS URL or container level SAS URL. -.PP -To use it leave \f[C]account\f[R] and \f[C]key\f[R] blank and fill in -\f[C]sas_url\f[R]. -.PP -An account level SAS URL or container level SAS URL can be obtained from -the Azure portal or the Azure Storage Explorer. -To get a container level SAS URL right click on a container in the Azure -Blob explorer in the Azure portal. -.PP + +To use it leave \[ga]account\[ga] and \[ga]key\[ga] blank and fill in \[ga]sas_url\[ga]. + +An account level SAS URL or container level SAS URL can be obtained +from the Azure portal or the Azure Storage Explorer. To get a +container level SAS URL right click on a container in the Azure Blob +explorer in the Azure portal. + If you use a container level SAS URL, rclone operations are permitted only on a particular container, e.g. -.IP -.nf -\f[C] -rclone ls azureblob:container -\f[R] -.fi -.PP -You can also list the single container from the root. -This will only show the container specified by the SAS URL. -.IP -.nf -\f[C] -$ rclone lsd azureblob: -container/ -\f[R] -.fi -.PP + + rclone ls azureblob:container + +You can also list the single container from the root. This will only +show the container specified by the SAS URL. + + $ rclone lsd azureblob: + container/ + Note that you can\[aq]t see or access any other containers - this will fail -.IP -.nf -\f[C] -rclone ls azureblob:othercontainer -\f[R] -.fi -.PP + + rclone ls azureblob:othercontainer + Container level SAS URLs are useful for temporarily allowing third parties access to a single container or putting credentials into an untrusted environment such as a CI build server. -.SS Service principal with client secret -.PP -If these variables are set, rclone will authenticate with a service -principal with a client secret. -.IP \[bu] 2 -\f[C]tenant\f[R]: ID of the service principal\[aq]s tenant. -Also called its \[dq]directory\[dq] ID. -.IP \[bu] 2 -\f[C]client_id\f[R]: the service principal\[aq]s client ID -.IP \[bu] 2 -\f[C]client_secret\f[R]: one of the service principal\[aq]s client -secrets -.PP + +#### Service principal with client secret + +If these variables are set, rclone will authenticate with a service principal with a client secret. + +- \[ga]tenant\[ga]: ID of the service principal\[aq]s tenant. Also called its \[dq]directory\[dq] ID. +- \[ga]client_id\[ga]: the service principal\[aq]s client ID +- \[ga]client_secret\[ga]: one of the service principal\[aq]s client secrets + The credentials can also be placed in a file using the -\f[C]service_principal_file\f[R] configuration option. -.SS Service principal with certificate -.PP -If these variables are set, rclone will authenticate with a service -principal with certificate. -.IP \[bu] 2 -\f[C]tenant\f[R]: ID of the service principal\[aq]s tenant. -Also called its \[dq]directory\[dq] ID. -.IP \[bu] 2 -\f[C]client_id\f[R]: the service principal\[aq]s client ID -.IP \[bu] 2 -\f[C]client_certificate_path\f[R]: path to a PEM or PKCS12 certificate -file including the private key. 
-.IP \[bu] 2 -\f[C]client_certificate_password\f[R]: (optional) password for the -certificate file. -.IP \[bu] 2 -\f[C]client_send_certificate_chain\f[R]: (optional) Specifies whether an -authentication request will include an x5c header to support subject -name / issuer based authentication. -When set to \[dq]true\[dq] or \[dq]1\[dq], authentication requests -include the x5c header. -.PP -\f[B]NB\f[R] \f[C]client_certificate_password\f[R] must be obscured - -see rclone obscure (https://rclone.org/commands/rclone_obscure/). -.SS User with username and password -.PP -If these variables are set, rclone will authenticate with username and -password. -.IP \[bu] 2 -\f[C]tenant\f[R]: (optional) tenant to authenticate in. -Defaults to \[dq]organizations\[dq]. -.IP \[bu] 2 -\f[C]client_id\f[R]: client ID of the application the user will -authenticate to -.IP \[bu] 2 -\f[C]username\f[R]: a username (usually an email address) -.IP \[bu] 2 -\f[C]password\f[R]: the user\[aq]s password -.PP -Microsoft doesn\[aq]t recommend this kind of authentication, because -it\[aq]s less secure than other authentication flows. -This method is not interactive, so it isn\[aq]t compatible with any form -of multi-factor authentication, and the application must already have -user or admin consent. -This credential can only authenticate work and school accounts; it -can\[aq]t authenticate Microsoft accounts. -.PP -\f[B]NB\f[R] \f[C]password\f[R] must be obscured - see rclone -obscure (https://rclone.org/commands/rclone_obscure/). -.SS Managed Service Identity Credentials -.PP -If \f[C]use_msi\f[R] is set then managed service identity credentials -are used. -This authentication only works when running in an Azure service. -\f[C]env_auth\f[R] needs to be unset to use this. -.PP +\[ga]service_principal_file\[ga] configuration option. + +#### Service principal with certificate + +If these variables are set, rclone will authenticate with a service principal with certificate. + +- \[ga]tenant\[ga]: ID of the service principal\[aq]s tenant. Also called its \[dq]directory\[dq] ID. +- \[ga]client_id\[ga]: the service principal\[aq]s client ID +- \[ga]client_certificate_path\[ga]: path to a PEM or PKCS12 certificate file including the private key. +- \[ga]client_certificate_password\[ga]: (optional) password for the certificate file. +- \[ga]client_send_certificate_chain\[ga]: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to \[dq]true\[dq] or \[dq]1\[dq], authentication requests include the x5c header. + +**NB** \[ga]client_certificate_password\[ga] must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + +#### User with username and password + +If these variables are set, rclone will authenticate with username and password. + +- \[ga]tenant\[ga]: (optional) tenant to authenticate in. Defaults to \[dq]organizations\[dq]. +- \[ga]client_id\[ga]: client ID of the application the user will authenticate to +- \[ga]username\[ga]: a username (usually an email address) +- \[ga]password\[ga]: the user\[aq]s password + +Microsoft doesn\[aq]t recommend this kind of authentication, because it\[aq]s +less secure than other authentication flows. This method is not +interactive, so it isn\[aq]t compatible with any form of multi-factor +authentication, and the application must already have user or admin +consent. 
This credential can only authenticate work and school
+accounts; it can\[aq]t authenticate Microsoft accounts.
+
+**NB** \[ga]password\[ga] must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+#### Managed Service Identity Credentials {#use_msi}
+
+If \[ga]use_msi\[ga] is set then managed service identity credentials are
+used. This authentication only works when running in an Azure service.
+\[ga]env_auth\[ga] needs to be unset to use this.
+
However if you have multiple user identities to choose from these must
-be explicitly specified using exactly one of the
-\f[C]msi_object_id\f[R], \f[C]msi_client_id\f[R], or
-\f[C]msi_mi_res_id\f[R] parameters.
-.PP
-If none of \f[C]msi_object_id\f[R], \f[C]msi_client_id\f[R], or
-\f[C]msi_mi_res_id\f[R] is set, this is is equivalent to using
-\f[C]env_auth\f[R].
-.SS Standard options
-.PP
-Here are the Standard options specific to azureblob (Microsoft Azure
-Blob Storage).
-.SS --azureblob-account
-.PP
+be explicitly specified using exactly one of the \[ga]msi_object_id\[ga],
+\[ga]msi_client_id\[ga], or \[ga]msi_mi_res_id\[ga] parameters.
+
+If none of \[ga]msi_object_id\[ga], \[ga]msi_client_id\[ga], or \[ga]msi_mi_res_id\[ga] is
+set, this is equivalent to using \[ga]env_auth\[ga].
+
+
+### Standard options
+
+Here are the Standard options specific to azureblob (Microsoft Azure Blob Storage).
+
+#### --azureblob-account
+
Azure Storage Account Name.
-.PP
+
Set this to the Azure Storage Account Name in use.
-.PP
+
Leave blank to use SAS URL or Emulator, otherwise it needs to be set.
-.PP
+
If this is blank and if env_auth is set it will be read from the
-environment variable \f[C]AZURE_STORAGE_ACCOUNT_NAME\f[R] if possible.
-.PP
+environment variable \[ga]AZURE_STORAGE_ACCOUNT_NAME\[ga] if possible.
+
+
Properties:
-.IP \[bu] 2
-Config: account
-.IP \[bu] 2
-Env Var: RCLONE_AZUREBLOB_ACCOUNT
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --azureblob-env-auth
-.PP
+
+- Config: account
+- Env Var: RCLONE_AZUREBLOB_ACCOUNT
+- Type: string
+- Required: false
+
+#### --azureblob-env-auth
+
Read credentials from runtime (environment variables, CLI or MSI).
-.PP
-See the authentication docs for full info.
-.PP
+
+See the [authentication docs](/azureblob#authentication) for full info.
+
Properties:
-.IP \[bu] 2
-Config: env_auth
-.IP \[bu] 2
-Env Var: RCLONE_AZUREBLOB_ENV_AUTH
-.IP \[bu] 2
-Type: bool
-.IP \[bu] 2
-Default: false
-.SS --azureblob-key
-.PP
+
+- Config: env_auth
+- Env Var: RCLONE_AZUREBLOB_ENV_AUTH
+- Type: bool
+- Default: false
+
+#### --azureblob-key
+
Storage Account Shared Key.
-.PP
+
Leave blank to use SAS URL or Emulator.
-.PP
+
Properties:
-.IP \[bu] 2
-Config: key
-.IP \[bu] 2
-Env Var: RCLONE_AZUREBLOB_KEY
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --azureblob-sas-url
-.PP
+
+- Config: key
+- Env Var: RCLONE_AZUREBLOB_KEY
+- Type: string
+- Required: false
+
+#### --azureblob-sas-url
+
SAS URL for container level access only.
-.PP
+
Leave blank if using account/key or Emulator.
-.PP
+
Properties:
-.IP \[bu] 2
-Config: sas_url
-.IP \[bu] 2
-Env Var: RCLONE_AZUREBLOB_SAS_URL
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --azureblob-tenant
-.PP
-ID of the service principal\[aq]s tenant.
-Also called its directory ID.
-.PP -Set this if using - Service principal with client secret - Service -principal with certificate - User with username and password -.PP + +- Config: sas_url +- Env Var: RCLONE_AZUREBLOB_SAS_URL +- Type: string +- Required: false + +#### --azureblob-tenant + +ID of the service principal\[aq]s tenant. Also called its directory ID. + +Set this if using +- Service principal with client secret +- Service principal with certificate +- User with username and password + + Properties: -.IP \[bu] 2 -Config: tenant -.IP \[bu] 2 -Env Var: RCLONE_AZUREBLOB_TENANT -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --azureblob-client-id -.PP + +- Config: tenant +- Env Var: RCLONE_AZUREBLOB_TENANT +- Type: string +- Required: false + +#### --azureblob-client-id + The ID of the client in use. -.PP -Set this if using - Service principal with client secret - Service -principal with certificate - User with username and password -.PP + +Set this if using +- Service principal with client secret +- Service principal with certificate +- User with username and password + + Properties: -.IP \[bu] 2 -Config: client_id -.IP \[bu] 2 -Env Var: RCLONE_AZUREBLOB_CLIENT_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --azureblob-client-secret -.PP + +- Config: client_id +- Env Var: RCLONE_AZUREBLOB_CLIENT_ID +- Type: string +- Required: false + +#### --azureblob-client-secret + One of the service principal\[aq]s client secrets -.PP -Set this if using - Service principal with client secret -.PP + +Set this if using +- Service principal with client secret + + Properties: -.IP \[bu] 2 -Config: client_secret -.IP \[bu] 2 -Env Var: RCLONE_AZUREBLOB_CLIENT_SECRET -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --azureblob-client-certificate-path -.PP + +- Config: client_secret +- Env Var: RCLONE_AZUREBLOB_CLIENT_SECRET +- Type: string +- Required: false + +#### --azureblob-client-certificate-path + Path to a PEM or PKCS12 certificate file including the private key. -.PP -Set this if using - Service principal with certificate -.PP + +Set this if using +- Service principal with certificate + + Properties: -.IP \[bu] 2 -Config: client_certificate_path -.IP \[bu] 2 -Env Var: RCLONE_AZUREBLOB_CLIENT_CERTIFICATE_PATH -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --azureblob-client-certificate-password -.PP + +- Config: client_certificate_path +- Env Var: RCLONE_AZUREBLOB_CLIENT_CERTIFICATE_PATH +- Type: string +- Required: false + +#### --azureblob-client-certificate-password + Password for the certificate file (optional). -.PP -Optionally set this if using - Service principal with certificate -.PP + +Optionally set this if using +- Service principal with certificate + And the certificate has a password. -.PP -\f[B]NB\f[R] Input to this must be obscured - see rclone -obscure (https://rclone.org/commands/rclone_obscure/). -.PP + + +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + Properties: -.IP \[bu] 2 -Config: client_certificate_password -.IP \[bu] 2 -Env Var: RCLONE_AZUREBLOB_CLIENT_CERTIFICATE_PASSWORD -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS Advanced options -.PP -Here are the Advanced options specific to azureblob (Microsoft Azure -Blob Storage). 
-.SS --azureblob-client-send-certificate-chain -.PP + +- Config: client_certificate_password +- Env Var: RCLONE_AZUREBLOB_CLIENT_CERTIFICATE_PASSWORD +- Type: string +- Required: false + +### Advanced options + +Here are the Advanced options specific to azureblob (Microsoft Azure Blob Storage). + +#### --azureblob-client-send-certificate-chain + Send the certificate chain when using certificate auth. -.PP + Specifies whether an authentication request will include an x5c header -to support subject name / issuer based authentication. -When set to true, authentication requests include the x5c header. -.PP -Optionally set this if using - Service principal with certificate -.PP +to support subject name / issuer based authentication. When set to +true, authentication requests include the x5c header. + +Optionally set this if using +- Service principal with certificate + + Properties: -.IP \[bu] 2 -Config: client_send_certificate_chain -.IP \[bu] 2 -Env Var: RCLONE_AZUREBLOB_CLIENT_SEND_CERTIFICATE_CHAIN -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --azureblob-username -.PP + +- Config: client_send_certificate_chain +- Env Var: RCLONE_AZUREBLOB_CLIENT_SEND_CERTIFICATE_CHAIN +- Type: bool +- Default: false + +#### --azureblob-username + User name (usually an email address) -.PP -Set this if using - User with username and password -.PP + +Set this if using +- User with username and password + + Properties: -.IP \[bu] 2 -Config: username -.IP \[bu] 2 -Env Var: RCLONE_AZUREBLOB_USERNAME -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --azureblob-password -.PP + +- Config: username +- Env Var: RCLONE_AZUREBLOB_USERNAME +- Type: string +- Required: false + +#### --azureblob-password + The user\[aq]s password -.PP -Set this if using - User with username and password -.PP -\f[B]NB\f[R] Input to this must be obscured - see rclone -obscure (https://rclone.org/commands/rclone_obscure/). -.PP + +Set this if using +- User with username and password + + +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + Properties: -.IP \[bu] 2 -Config: password -.IP \[bu] 2 -Env Var: RCLONE_AZUREBLOB_PASSWORD -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --azureblob-service-principal-file -.PP + +- Config: password +- Env Var: RCLONE_AZUREBLOB_PASSWORD +- Type: string +- Required: false + +#### --azureblob-service-principal-file + Path to file containing credentials for use with a service principal. -.PP -Leave blank normally. -Needed only if you want to use a service principal instead of -interactive login. -.IP -.nf -\f[C] -$ az ad sp create-for-rbac --name \[dq]\[dq] \[rs] - --role \[dq]Storage Blob Data Owner\[dq] \[rs] - --scopes \[dq]/subscriptions//resourceGroups//providers/Microsoft.Storage/storageAccounts//blobServices/default/containers/\[dq] \[rs] - > azure-principal.json -\f[R] -.fi -.PP -See \[dq]Create an Azure service -principal\[dq] (https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli) -and \[dq]Assign an Azure role for access to blob -data\[dq] (https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli) -pages for more details. -.PP + +Leave blank normally. Needed only if you want to use a service principal instead of interactive login. 
+ + $ az ad sp create-for-rbac --name \[dq]\[dq] \[rs] + --role \[dq]Storage Blob Data Owner\[dq] \[rs] + --scopes \[dq]/subscriptions//resourceGroups//providers/Microsoft.Storage/storageAccounts//blobServices/default/containers/\[dq] \[rs] + > azure-principal.json + +See [\[dq]Create an Azure service principal\[dq]](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli) and [\[dq]Assign an Azure role for access to blob data\[dq]](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli) pages for more details. + It may be more convenient to put the credentials directly into the -rclone config file under the \f[C]client_id\f[R], \f[C]tenant\f[R] and -\f[C]client_secret\f[R] keys instead of setting -\f[C]service_principal_file\f[R]. -.PP +rclone config file under the \[ga]client_id\[ga], \[ga]tenant\[ga] and \[ga]client_secret\[ga] +keys instead of setting \[ga]service_principal_file\[ga]. + + Properties: -.IP \[bu] 2 -Config: service_principal_file -.IP \[bu] 2 -Env Var: RCLONE_AZUREBLOB_SERVICE_PRINCIPAL_FILE -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --azureblob-use-msi -.PP + +- Config: service_principal_file +- Env Var: RCLONE_AZUREBLOB_SERVICE_PRINCIPAL_FILE +- Type: string +- Required: false + +#### --azureblob-use-msi + Use a managed service identity to authenticate (only works in Azure). -.PP -When true, use a managed service -identity (https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/) + +When true, use a [managed service identity](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/) to authenticate to Azure Storage instead of a SAS token or account key. -.PP -If the VM(SS) on which this program is running has a system-assigned -identity, it will be used by default. -If the resource has no system-assigned but exactly one user-assigned -identity, the user-assigned identity will be used by default. -If the resource has multiple user-assigned identities, the identity to -use must be explicitly specified using exactly one of the msi_object_id, + +If the VM(SS) on which this program is running has a system-assigned identity, it will +be used by default. If the resource has no system-assigned but exactly one user-assigned identity, +the user-assigned identity will be used by default. If the resource has multiple user-assigned +identities, the identity to use must be explicitly specified using exactly one of the msi_object_id, msi_client_id, or msi_mi_res_id parameters. -.PP + Properties: -.IP \[bu] 2 -Config: use_msi -.IP \[bu] 2 -Env Var: RCLONE_AZUREBLOB_USE_MSI -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --azureblob-msi-object-id -.PP + +- Config: use_msi +- Env Var: RCLONE_AZUREBLOB_USE_MSI +- Type: bool +- Default: false + +#### --azureblob-msi-object-id + Object ID of the user-assigned MSI to use, if any. -.PP + Leave blank if msi_client_id or msi_mi_res_id specified. -.PP + Properties: -.IP \[bu] 2 -Config: msi_object_id -.IP \[bu] 2 -Env Var: RCLONE_AZUREBLOB_MSI_OBJECT_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --azureblob-msi-client-id -.PP + +- Config: msi_object_id +- Env Var: RCLONE_AZUREBLOB_MSI_OBJECT_ID +- Type: string +- Required: false + +#### --azureblob-msi-client-id + Object ID of the user-assigned MSI to use, if any. -.PP + Leave blank if msi_object_id or msi_mi_res_id specified. 
-.PP + Properties: -.IP \[bu] 2 -Config: msi_client_id -.IP \[bu] 2 -Env Var: RCLONE_AZUREBLOB_MSI_CLIENT_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --azureblob-msi-mi-res-id -.PP + +- Config: msi_client_id +- Env Var: RCLONE_AZUREBLOB_MSI_CLIENT_ID +- Type: string +- Required: false + +#### --azureblob-msi-mi-res-id + Azure resource ID of the user-assigned MSI to use, if any. -.PP + Leave blank if msi_client_id or msi_object_id specified. -.PP + Properties: -.IP \[bu] 2 -Config: msi_mi_res_id -.IP \[bu] 2 -Env Var: RCLONE_AZUREBLOB_MSI_MI_RES_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --azureblob-use-emulator -.PP + +- Config: msi_mi_res_id +- Env Var: RCLONE_AZUREBLOB_MSI_MI_RES_ID +- Type: string +- Required: false + +#### --azureblob-use-emulator + Uses local storage emulator if provided as \[aq]true\[aq]. -.PP + Leave blank if using real azure storage endpoint. -.PP + Properties: -.IP \[bu] 2 -Config: use_emulator -.IP \[bu] 2 -Env Var: RCLONE_AZUREBLOB_USE_EMULATOR -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --azureblob-endpoint -.PP + +- Config: use_emulator +- Env Var: RCLONE_AZUREBLOB_USE_EMULATOR +- Type: bool +- Default: false + +#### --azureblob-endpoint + Endpoint for the service. -.PP + Leave blank normally. -.PP + Properties: -.IP \[bu] 2 -Config: endpoint -.IP \[bu] 2 -Env Var: RCLONE_AZUREBLOB_ENDPOINT -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --azureblob-upload-cutoff -.PP + +- Config: endpoint +- Env Var: RCLONE_AZUREBLOB_ENDPOINT +- Type: string +- Required: false + +#### --azureblob-upload-cutoff + Cutoff for switching to chunked upload (<= 256 MiB) (deprecated). -.PP + Properties: -.IP \[bu] 2 -Config: upload_cutoff -.IP \[bu] 2 -Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --azureblob-chunk-size -.PP + +- Config: upload_cutoff +- Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF +- Type: string +- Required: false + +#### --azureblob-chunk-size + Upload chunk size. -.PP + Note that this is stored in memory and there may be up to -\[dq]--transfers\[dq] * \[dq]--azureblob-upload-concurrency\[dq] chunks -stored at once in memory. -.PP +\[dq]--transfers\[dq] * \[dq]--azureblob-upload-concurrency\[dq] chunks stored at once +in memory. + Properties: -.IP \[bu] 2 -Config: chunk_size -.IP \[bu] 2 -Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 4Mi -.SS --azureblob-upload-concurrency -.PP + +- Config: chunk_size +- Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE +- Type: SizeSuffix +- Default: 4Mi + +#### --azureblob-upload-concurrency + Concurrency for multipart uploads. -.PP + This is the number of chunks of the same file that are uploaded concurrently. -.PP -If you are uploading small numbers of large files over high-speed links -and these uploads do not fully utilize your bandwidth, then increasing -this may help to speed up the transfers. -.PP + +If you are uploading small numbers of large files over high-speed +links and these uploads do not fully utilize your bandwidth, then +increasing this may help to speed up the transfers. + In tests, upload speed increases almost linearly with upload -concurrency. -For example to fill a gigabit pipe it may be necessary to raise this to -64. -Note that this will use more memory. -.PP +concurrency. For example to fill a gigabit pipe it may be necessary to +raise this to 64. Note that this will use more memory. 
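+
+For example, uploading a single large file over a fast link might go
+faster with something like the following (an illustrative sketch only;
+tune the values to your own bandwidth and memory budget):
+
+    rclone copy --azureblob-upload-concurrency 64 --azureblob-chunk-size 8Mi /path/to/bigfile azureblob:container
+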
+
Note that chunks are stored in memory and there may be up to
-\[dq]--transfers\[dq] * \[dq]--azureblob-upload-concurrency\[dq] chunks
-stored at once in memory.
-.PP
+\[dq]--transfers\[dq] * \[dq]--azureblob-upload-concurrency\[dq] chunks stored at once
+in memory.
+
Properties:
-.IP \[bu] 2
-Config: upload_concurrency
-.IP \[bu] 2
-Env Var: RCLONE_AZUREBLOB_UPLOAD_CONCURRENCY
-.IP \[bu] 2
-Type: int
-.IP \[bu] 2
-Default: 16
-.SS --azureblob-list-chunk
-.PP
+
+- Config: upload_concurrency
+- Env Var: RCLONE_AZUREBLOB_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 16
+
+#### --azureblob-list-chunk
+
Size of blob list.
-.PP
-This sets the number of blobs requested in each listing chunk.
-Default is the maximum, 5000.
-\[dq]List blobs\[dq] requests are permitted 2 minutes per megabyte to
-complete.
-If an operation is taking longer than 2 minutes per megabyte on average,
-it will time out (
-source (https://docs.microsoft.com/en-us/rest/api/storageservices/setting-timeouts-for-blob-service-operations#exceptions-to-default-timeout-interval)
-).
-This can be used to limit the number of blobs items to return, to avoid
-the time out.
-.PP
+
+This sets the number of blobs requested in each listing chunk. Default
+is the maximum, 5000. \[dq]List blobs\[dq] requests are permitted 2 minutes
+per megabyte to complete. If an operation is taking longer than 2
+minutes per megabyte on average, it will time out (
+[source](https://docs.microsoft.com/en-us/rest/api/storageservices/setting-timeouts-for-blob-service-operations#exceptions-to-default-timeout-interval)
+). This can be used to limit the number of blob items to return, to
+avoid the timeout.
+
Properties:
-.IP \[bu] 2
-Config: list_chunk
-.IP \[bu] 2
-Env Var: RCLONE_AZUREBLOB_LIST_CHUNK
-.IP \[bu] 2
-Type: int
-.IP \[bu] 2
-Default: 5000
-.SS --azureblob-access-tier
-.PP
+
+- Config: list_chunk
+- Env Var: RCLONE_AZUREBLOB_LIST_CHUNK
+- Type: int
+- Default: 5000
+
+#### --azureblob-access-tier
+
Access tier of blob: hot, cool or archive.
-.PP
-Archived blobs can be restored by setting access tier to hot or cool.
-Leave blank if you intend to use default access tier, which is set at
-account level
-.PP
-If there is no \[dq]access tier\[dq] specified, rclone doesn\[aq]t apply
-any tier.
-rclone performs \[dq]Set Tier\[dq] operation on blobs while uploading,
-if objects are not modified, specifying \[dq]access tier\[dq] to new one
-will have no effect.
-If blobs are in \[dq]archive tier\[dq] at remote, trying to perform data
-transfer operations from remote will not be allowed.
-User should first restore by tiering blob to \[dq]Hot\[dq] or
-\[dq]Cool\[dq].
-.PP
+
+Archived blobs can be restored by setting access tier to hot or
+cool. Leave blank if you intend to use default access tier, which is
+set at account level.
+
+If there is no \[dq]access tier\[dq] specified, rclone doesn\[aq]t apply any tier.
+rclone performs \[dq]Set Tier\[dq] operation on blobs while uploading, if objects
+are not modified, specifying \[dq]access tier\[dq] to new one will have no effect.
+If blobs are in \[dq]archive tier\[dq] at remote, trying to perform data transfer
+operations from remote will not be allowed. User should first restore by
+tiering blob to \[dq]Hot\[dq] or \[dq]Cool\[dq].
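+
+For example, an archived blob could be restored with the
+[settier](https://rclone.org/commands/rclone_settier/) command before
+transferring it (a sketch; substitute your own remote, container and
+path):
+
+    rclone settier hot azureblob:container/path/to/file
+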
+
Properties:
-.IP \[bu] 2
-Config: access_tier
-.IP \[bu] 2
-Env Var: RCLONE_AZUREBLOB_ACCESS_TIER
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --azureblob-archive-tier-delete
-.PP
+
+- Config: access_tier
+- Env Var: RCLONE_AZUREBLOB_ACCESS_TIER
+- Type: string
+- Required: false
+
+#### --azureblob-archive-tier-delete
+
Delete archive tier blobs before overwriting.
-.PP
-Archive tier blobs cannot be updated.
-So without this flag, if you attempt to update an archive tier blob,
-then rclone will produce the error:
-.IP
-.nf
-\f[C]
-can\[aq]t update archive tier blob without --azureblob-archive-tier-delete
-\f[R]
-.fi
-.PP
+
+Archive tier blobs cannot be updated. So without this flag, if you
+attempt to update an archive tier blob, then rclone will produce the
+error:
+
+    can\[aq]t update archive tier blob without --azureblob-archive-tier-delete
+
With this flag set then before rclone attempts to overwrite an archive
tier blob, it will delete the existing blob before uploading its
-replacement.
-This has the potential for data loss if the upload fails (unlike
-updating a normal blob) and also may cost more since deleting archive
-tier blobs early may be chargable.
-.PP
+replacement. This has the potential for data loss if the upload fails
+(unlike updating a normal blob) and also may cost more since deleting
+archive tier blobs early may be chargeable.
+
+
Properties:
-.IP \[bu] 2
-Config: archive_tier_delete
-.IP \[bu] 2
-Env Var: RCLONE_AZUREBLOB_ARCHIVE_TIER_DELETE
-.IP \[bu] 2
-Type: bool
-.IP \[bu] 2
-Default: false
-.SS --azureblob-disable-checksum
-.PP
+
+- Config: archive_tier_delete
+- Env Var: RCLONE_AZUREBLOB_ARCHIVE_TIER_DELETE
+- Type: bool
+- Default: false
+
+#### --azureblob-disable-checksum
+
Don\[aq]t store MD5 checksum with object metadata.
-.PP
+
Normally rclone will calculate the MD5 checksum of the input before
-uploading it so it can add it to metadata on the object.
-This is great for data integrity checking but can cause long delays for
-large files to start uploading.
-.PP
+uploading it so it can add it to metadata on the object. This is great
+for data integrity checking but can cause long delays for large files
+to start uploading.
+
Properties:
-.IP \[bu] 2
-Config: disable_checksum
-.IP \[bu] 2
-Env Var: RCLONE_AZUREBLOB_DISABLE_CHECKSUM
-.IP \[bu] 2
-Type: bool
-.IP \[bu] 2
-Default: false
-.SS --azureblob-memory-pool-flush-time
-.PP
-How often internal memory buffer pools will be flushed.
-.PP
-Uploads which requires additional buffers (f.e multipart) will use
-memory pool for allocations.
-This option controls how often unused buffers will be removed from the
-pool.
-.PP
+
+- Config: disable_checksum
+- Env Var: RCLONE_AZUREBLOB_DISABLE_CHECKSUM
+- Type: bool
+- Default: false
+
+#### --azureblob-memory-pool-flush-time
+
+How often internal memory buffer pools will be flushed. (no longer used)
+
Properties:
-.IP \[bu] 2
-Config: memory_pool_flush_time
-.IP \[bu] 2
-Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_FLUSH_TIME
-.IP \[bu] 2
-Type: Duration
-.IP \[bu] 2
-Default: 1m0s
-.SS --azureblob-memory-pool-use-mmap
-.PP
-Whether to use mmap buffers in internal memory pool.
-.PP
+
+- Config: memory_pool_flush_time
+- Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_FLUSH_TIME
+- Type: Duration
+- Default: 1m0s
+
+#### --azureblob-memory-pool-use-mmap
+
+Whether to use mmap buffers in internal memory pool.
(no longer used) + Properties: -.IP \[bu] 2 -Config: memory_pool_use_mmap -.IP \[bu] 2 -Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_USE_MMAP -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --azureblob-encoding -.PP + +- Config: memory_pool_use_mmap +- Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_USE_MMAP +- Type: bool +- Default: false + +#### --azureblob-encoding + The encoding for the backend. -.PP -See the encoding section in the -overview (https://rclone.org/overview/#encoding) for more info. -.PP + +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + Properties: -.IP \[bu] 2 -Config: encoding -.IP \[bu] 2 -Env Var: RCLONE_AZUREBLOB_ENCODING -.IP \[bu] 2 -Type: MultiEncoder -.IP \[bu] 2 -Default: Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8 -.SS --azureblob-public-access -.PP + +- Config: encoding +- Env Var: RCLONE_AZUREBLOB_ENCODING +- Type: MultiEncoder +- Default: Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8 + +#### --azureblob-public-access + Public access level of a container: blob or container. -.PP + Properties: -.IP \[bu] 2 -Config: public_access -.IP \[bu] 2 -Env Var: RCLONE_AZUREBLOB_PUBLIC_ACCESS -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]\[dq] -.RS 2 -.IP \[bu] 2 -The container and its blobs can be accessed only with an authorized -request. -.IP \[bu] 2 -It\[aq]s a default value. -.RE -.IP \[bu] 2 -\[dq]blob\[dq] -.RS 2 -.IP \[bu] 2 -Blob data within this container can be read via anonymous request. -.RE -.IP \[bu] 2 -\[dq]container\[dq] -.RS 2 -.IP \[bu] 2 -Allow full public read access for container and blob data. -.RE -.RE -.SS --azureblob-directory-markers -.PP -Upload an empty object with a trailing slash when a new directory is -created -.PP + +- Config: public_access +- Env Var: RCLONE_AZUREBLOB_PUBLIC_ACCESS +- Type: string +- Required: false +- Examples: + - \[dq]\[dq] + - The container and its blobs can be accessed only with an authorized request. + - It\[aq]s a default value. + - \[dq]blob\[dq] + - Blob data within this container can be read via anonymous request. + - \[dq]container\[dq] + - Allow full public read access for container and blob data. + +#### --azureblob-directory-markers + +Upload an empty object with a trailing slash when a new directory is created + Empty folders are unsupported for bucket based remotes, this option creates an empty object ending with \[dq]/\[dq], to persist the folder. -.PP -This object also has the metadata \[dq]hdi_isfolder = true\[dq] to -conform to the Microsoft standard. -.PP + +This object also has the metadata \[dq]hdi_isfolder = true\[dq] to conform to +the Microsoft standard. + + Properties: -.IP \[bu] 2 -Config: directory_markers -.IP \[bu] 2 -Env Var: RCLONE_AZUREBLOB_DIRECTORY_MARKERS -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --azureblob-no-check-container -.PP + +- Config: directory_markers +- Env Var: RCLONE_AZUREBLOB_DIRECTORY_MARKERS +- Type: bool +- Default: false + +#### --azureblob-no-check-container + If set, don\[aq]t attempt to check the container exists or create it. -.PP + This can be useful when trying to minimise the number of transactions rclone does if you know the container exists already. 
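+
+For example (an illustrative sketch, assuming \[ga]mycontainer\[ga] already
+exists):
+
+    rclone sync --azureblob-no-check-container /data azureblob:mycontainer
+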
-.PP + + Properties: -.IP \[bu] 2 -Config: no_check_container -.IP \[bu] 2 -Env Var: RCLONE_AZUREBLOB_NO_CHECK_CONTAINER -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --azureblob-no-head-object -.PP + +- Config: no_check_container +- Env Var: RCLONE_AZUREBLOB_NO_CHECK_CONTAINER +- Type: bool +- Default: false + +#### --azureblob-no-head-object + If set, do not do HEAD before GET when getting objects. -.PP + Properties: -.IP \[bu] 2 -Config: no_head_object -.IP \[bu] 2 -Env Var: RCLONE_AZUREBLOB_NO_HEAD_OBJECT -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS Custom upload headers -.PP -You can set custom upload headers with the \f[C]--header-upload\f[R] -flag. -.IP \[bu] 2 -Cache-Control -.IP \[bu] 2 -Content-Disposition -.IP \[bu] 2 -Content-Encoding -.IP \[bu] 2 -Content-Language -.IP \[bu] 2 -Content-Type -.PP -Eg \f[C]--header-upload \[dq]Content-Type: text/potato\[dq]\f[R] -.SS Limitations -.PP + +- Config: no_head_object +- Env Var: RCLONE_AZUREBLOB_NO_HEAD_OBJECT +- Type: bool +- Default: false + + + +### Custom upload headers + +You can set custom upload headers with the \[ga]--header-upload\[ga] flag. + +- Cache-Control +- Content-Disposition +- Content-Encoding +- Content-Language +- Content-Type + +Eg \[ga]--header-upload \[dq]Content-Type: text/potato\[dq]\[ga] + +## Limitations + MD5 sums are only uploaded with chunked files if the source has an MD5 -sum. -This will always be the case for a local to azure copy. -.PP -\f[C]rclone about\f[R] is not supported by the Microsoft Azure Blob -storage backend. -Backends without this capability cannot determine free space for an -rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member -of an rclone union remote. -.PP -See List of backends that do not support rclone -about (https://rclone.org/overview/#optional-features) and rclone -about (https://rclone.org/commands/rclone_about/) -.SS Azure Storage Emulator Support -.PP -You can run rclone with the storage emulator (usually -\f[I]azurite\f[R]). -.PP -To do this, just set up a new remote with \f[C]rclone config\f[R] -following the instructions in the introduction and set -\f[C]use_emulator\f[R] in the advanced settings as \f[C]true\f[R]. -You do not need to provide a default account name nor an account key. -But you can override them in the \f[C]account\f[R] and \f[C]key\f[R] -options. -(Prior to v1.61 they were hard coded to \f[I]azurite\f[R]\[aq]s -\f[C]devstoreaccount1\f[R].) -.PP +sum. This will always be the case for a local to azure copy. + +\[ga]rclone about\[ga] is not supported by the Microsoft Azure Blob storage backend. Backends without +this capability cannot determine free space for an rclone mount or +use policy \[ga]mfs\[ga] (most free space) as a member of an rclone union +remote. + +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) + +## Azure Storage Emulator Support + +You can run rclone with the storage emulator (usually _azurite_). + +To do this, just set up a new remote with \[ga]rclone config\[ga] following +the instructions in the introduction and set \[ga]use_emulator\[ga] in the +advanced settings as \[ga]true\[ga]. You do not need to provide a default +account name nor an account key. But you can override them in the +\[ga]account\[ga] and \[ga]key\[ga] options. (Prior to v1.61 they were hard coded to +_azurite_\[aq]s \[ga]devstoreaccount1\[ga].) 
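+
+For example, a minimal remote for a local _azurite_ instance might look
+like this (a sketch; the remote name is arbitrary and \[ga]account\[ga] and
+\[ga]key\[ga] are left to their defaults):
+
+    [azuritedev]
+    type = azureblob
+    use_emulator = true
+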
+ Also, if you want to access a storage emulator instance running on a -different machine, you can override the \f[C]endpoint\f[R] parameter in -the advanced settings, setting it to -\f[C]http(s)://:/devstoreaccount1\f[R] (e.g. -\f[C]http://10.254.2.5:10000/devstoreaccount1\f[R]). -.SH Microsoft OneDrive -.PP -Paths are specified as \f[C]remote:path\f[R] -.PP -Paths may be as deep as required, e.g. -\f[C]remote:directory/subdirectory\f[R]. -.SS Configuration -.PP -The initial setup for OneDrive involves getting a token from Microsoft -which you need to do in your browser. -\f[C]rclone config\f[R] walks you through it. -.PP -Here is an example of how to make a remote called \f[C]remote\f[R]. -First run: -.IP -.nf -\f[C] - rclone config -\f[R] -.fi -.PP +different machine, you can override the \[ga]endpoint\[ga] parameter in the +advanced settings, setting it to +\[ga]http(s)://:/devstoreaccount1\[ga] +(e.g. \[ga]http://10.254.2.5:10000/devstoreaccount1\[ga]). + +# Microsoft OneDrive + +Paths are specified as \[ga]remote:path\[ga] + +Paths may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga]. + +## Configuration + +The initial setup for OneDrive involves getting a token from +Microsoft which you need to do in your browser. \[ga]rclone config\[ga] walks +you through it. + +Here is an example of how to make a remote called \[ga]remote\[ga]. First run: + + rclone config + This will guide you through an interactive setup process: -.IP -.nf -\f[C] -e) Edit existing remote -n) New remote -d) Delete remote -r) Rename remote -c) Copy remote -s) Set configuration password -q) Quit config -e/n/d/r/c/s/q> n -name> remote -Type of storage to configure. -Enter a string value. Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value -[snip] -XX / Microsoft OneDrive - \[rs] \[dq]onedrive\[dq] -[snip] -Storage> onedrive -Microsoft App Client Id -Leave blank normally. -Enter a string value. Press Enter for the default (\[dq]\[dq]). -client_id> -Microsoft App Client Secret -Leave blank normally. -Enter a string value. Press Enter for the default (\[dq]\[dq]). -client_secret> -Edit advanced config? (y/n) -y) Yes -n) No -y/n> n -Remote config -Use web browser to automatically authenticate rclone with remote? - * Say Y if the machine running rclone has a web browser you can use - * Say N if running rclone on a (remote) machine without web browser access -If not sure try Y. If Y failed, try N. -y) Yes -n) No -y/n> y -If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth -Log in and authorize rclone for access -Waiting for code... -Got code -Choose a number from below, or type in an existing value - 1 / OneDrive Personal or Business - \[rs] \[dq]onedrive\[dq] - 2 / Sharepoint site - \[rs] \[dq]sharepoint\[dq] - 3 / Type in driveID - \[rs] \[dq]driveid\[dq] - 4 / Type in SiteID - \[rs] \[dq]siteid\[dq] - 5 / Search a Sharepoint site - \[rs] \[dq]search\[dq] -Your choice> 1 -Found 1 drives, please select the one you want to use: -0: OneDrive (business) id=b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk -Chose drive to use:> 0 -Found drive \[aq]root\[aq] of type \[aq]business\[aq], URL: https://org-my.sharepoint.com/personal/you/Documents -Is that okay? 
-y) Yes -n) No -y/n> y --------------------- -[remote] -type = onedrive -token = {\[dq]access_token\[dq]:\[dq]youraccesstoken\[dq],\[dq]token_type\[dq]:\[dq]Bearer\[dq],\[dq]refresh_token\[dq]:\[dq]yourrefreshtoken\[dq],\[dq]expiry\[dq]:\[dq]2018-08-26T22:39:52.486512262+08:00\[dq]} +\f[R] +.fi +.IP "e)" 3 +Edit existing remote +.IP "f)" 3 +New remote +.IP "g)" 3 +Delete remote +.IP "h)" 3 +Rename remote +.IP "i)" 3 +Copy remote +.IP "j)" 3 +Set configuration password +.IP "k)" 3 +Quit config e/n/d/r/c/s/q> n name> remote Type of storage to configure. +Enter a string value. +Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value [snip] XX / +Microsoft OneDrive \ \[dq]onedrive\[dq] [snip] Storage> onedrive +Microsoft App Client Id Leave blank normally. +Enter a string value. +Press Enter for the default (\[dq]\[dq]). +client_id> Microsoft App Client Secret Leave blank normally. +Enter a string value. +Press Enter for the default (\[dq]\[dq]). +client_secret> Edit advanced config? +(y/n) +.IP "l)" 3 +Yes +.IP "m)" 3 +No y/n> n Remote config Use web browser to automatically authenticate +rclone with remote? +.IP \[bu] 2 +Say Y if the machine running rclone has a web browser you can use +.IP \[bu] 2 +Say N if running rclone on a (remote) machine without web browser access +If not sure try Y. +If Y failed, try N. +.IP "y)" 3 +Yes +.IP "z)" 3 +No y/n> y If your browser doesn\[aq]t open automatically go to the +following link: http://127.0.0.1:53682/auth Log in and authorize rclone +for access Waiting for code... +Got code Choose a number from below, or type in an existing value 1 / +OneDrive Personal or Business \ \[dq]onedrive\[dq] 2 / Sharepoint site +\ \[dq]sharepoint\[dq] 3 / Type in driveID \ \[dq]driveid\[dq] 4 / Type +in SiteID \ \[dq]siteid\[dq] 5 / Search a Sharepoint site +\ \[dq]search\[dq] Your choice> 1 Found 1 drives, please select the one +you want to use: 0: OneDrive (business) +id=b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk Chose +drive to use:> 0 Found drive \[aq]root\[aq] of type \[aq]business\[aq], +URL: https://org-my.sharepoint.com/personal/you/Documents Is that okay? +.IP "a)" 3 +Yes +.IP "b)" 3 +No y/n> y -------------------- [remote] type = onedrive token = +{\[dq]access_token\[dq]:\[dq]youraccesstoken\[dq],\[dq]token_type\[dq]:\[dq]Bearer\[dq],\[dq]refresh_token\[dq]:\[dq]yourrefreshtoken\[dq],\[dq]expiry\[dq]:\[dq]2018-08-26T22:39:52.486512262+08:00\[dq]} drive_id = b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk -drive_type = business --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -\f[R] -.fi -.PP -See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. -.PP +drive_type = business -------------------- +.IP "c)" 3 +Yes this is OK +.IP "d)" 3 +Edit this remote +.IP "e)" 3 +Delete this remote y/e/d> y +.IP +.nf +\f[C] +See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a +machine with no Internet browser available. + Note that rclone runs a webserver on your local machine to collect the -token as returned from Microsoft. -This only runs from the moment it opens your browser to the moment you -get back the verification code. -This is on \f[C]http://127.0.0.1:53682/\f[R] and this it may require you -to unblock it temporarily if you are running a host firewall. 
-.PP
-Once configured you can then use \f[C]rclone\f[R] like this,
-.PP
+token as returned from Microsoft. This only runs from the moment it
+opens your browser to the moment you get back the verification
+code. This is on \[ga]http://127.0.0.1:53682/\[ga] and it may require
+you to unblock it temporarily if you are running a host firewall.
+
+Once configured you can then use \[ga]rclone\[ga] like this,
+
List directories in top level of your OneDrive
-.IP
-.nf
-\f[C]
-rclone lsd remote:
-\f[R]
-.fi
-.PP
+
+    rclone lsd remote:
+
List all the files in your OneDrive
-.IP
-.nf
-\f[C]
-rclone ls remote:
-\f[R]
-.fi
-.PP
+
+    rclone ls remote:
+
To copy a local directory to an OneDrive directory called backup
-.IP
-.nf
-\f[C]
-rclone copy /home/source remote:backup
-\f[R]
-.fi
-.SS Getting your own Client ID and Key
-.PP
-rclone uses a default Client ID when talking to OneDrive, unless a
-custom \f[C]client_id\f[R] is specified in the config.
-The default Client ID and Key are shared by all rclone users when
-performing requests.
-.PP
-You may choose to create and use your own Client ID, in case the default
-one does not work well for you.
+
+    rclone copy /home/source remote:backup
+
+### Getting your own Client ID and Key
+
+rclone uses a default Client ID when talking to OneDrive, unless a custom \[ga]client_id\[ga] is specified in the config.
+The default Client ID and Key are shared by all rclone users when performing requests.
+
+You may choose to create and use your own Client ID, in case the default one does not work well for you.
For example, you might see throttling.
-.SS Creating Client ID for OneDrive Personal
-.PP
+
+#### Creating Client ID for OneDrive Personal
+
To create your own Client ID, please follow these steps:
-.IP "1." 3
-Open
-https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade
-and then click \f[C]New registration\f[R].
-.IP "2." 3
-Enter a name for your app, choose account type
-\f[C]Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)\f[R],
-select \f[C]Web\f[R] in \f[C]Redirect URI\f[R], then type (do not copy
-and paste) \f[C]http://localhost:53682/\f[R] and click Register.
-Copy and keep the \f[C]Application (client) ID\f[R] under the app name
-for later use.
-.IP "3." 3
-Under \f[C]manage\f[R] select \f[C]Certificates & secrets\f[R], click
-\f[C]New client secret\f[R].
-Enter a description (can be anything) and set \f[C]Expires\f[R] to 24
-months.
-Copy and keep that secret \f[I]Value\f[R] for later use (you
-\f[I]won\[aq]t\f[R] be able to see this value afterwards).
-.IP "4." 3
-Under \f[C]manage\f[R] select \f[C]API permissions\f[R], click
-\f[C]Add a permission\f[R] and select \f[C]Microsoft Graph\f[R] then
-select \f[C]delegated permissions\f[R].
-.IP "5." 3
-Search and select the following permissions: \f[C]Files.Read\f[R],
-\f[C]Files.ReadWrite\f[R], \f[C]Files.Read.All\f[R],
-\f[C]Files.ReadWrite.All\f[R], \f[C]offline_access\f[R],
-\f[C]User.Read\f[R] and \f[C]Sites.Read.All\f[R] (if custom access
-scopes are configured, select the permissions accordingly).
-Once selected click \f[C]Add permissions\f[R] at the bottom.
-.PP
-Now the application is complete.
-Run \f[C]rclone config\f[R] to create or edit a OneDrive remote.
-Supply the app ID and password as Client ID and Secret, respectively.
-rclone will walk you through the remaining steps.
-.PP
-The access_scopes option allows you to configure the permissions
-requested by rclone.
+
+1. Open https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade and then click \[ga]New registration\[ga].
+2. Enter a name for your app, choose account type \[ga]Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)\[ga], select \[ga]Web\[ga] in \[ga]Redirect URI\[ga], then type (do not copy and paste) \[ga]http://localhost:53682/\[ga] and click Register. Copy and keep the \[ga]Application (client) ID\[ga] under the app name for later use.
+3. Under \[ga]manage\[ga] select \[ga]Certificates & secrets\[ga], click \[ga]New client secret\[ga]. Enter a description (can be anything) and set \[ga]Expires\[ga] to 24 months. Copy and keep that secret _Value_ for later use (you _won\[aq]t_ be able to see this value afterwards).
+4. Under \[ga]manage\[ga] select \[ga]API permissions\[ga], click \[ga]Add a permission\[ga] and select \[ga]Microsoft Graph\[ga] then select \[ga]delegated permissions\[ga].
+5. Search and select the following permissions: \[ga]Files.Read\[ga], \[ga]Files.ReadWrite\[ga], \[ga]Files.Read.All\[ga], \[ga]Files.ReadWrite.All\[ga], \[ga]offline_access\[ga], \[ga]User.Read\[ga] and \[ga]Sites.Read.All\[ga] (if custom access scopes are configured, select the permissions accordingly). Once selected click \[ga]Add permissions\[ga] at the bottom.
+
+Now the application is complete. Run \[ga]rclone config\[ga] to create or edit a OneDrive remote.
+Supply the app ID and password as Client ID and Secret, respectively. rclone will walk you through the remaining steps.
+
+The access_scopes option allows you to configure the permissions requested by rclone.
-See Microsoft -Docs (https://docs.microsoft.com/en-us/graph/permissions-reference#files-permissions) -for more information about the different scopes. -.PP -The \f[C]Sites.Read.All\f[R] permission is required if you need to -search SharePoint sites when configuring the -remote (https://github.com/rclone/rclone/pull/5883). -However, if that permission is not assigned, you need to exclude -\f[C]Sites.Read.All\f[R] from your access scopes or set -\f[C]disable_site_permission\f[R] option to true in the advanced -options. -.SS Creating Client ID for OneDrive Business -.PP -The steps for OneDrive Personal may or may not work for OneDrive -Business, depending on the security settings of the organization. + +1. Open https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade and then click \[ga]New registration\[ga]. +2. Enter a name for your app, choose account type \[ga]Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)\[ga], select \[ga]Web\[ga] in \[ga]Redirect URI\[ga], then type (do not copy and paste) \[ga]http://localhost:53682/\[ga] and click Register. Copy and keep the \[ga]Application (client) ID\[ga] under the app name for later use. +3. Under \[ga]manage\[ga] select \[ga]Certificates & secrets\[ga], click \[ga]New client secret\[ga]. Enter a description (can be anything) and set \[ga]Expires\[ga] to 24 months. Copy and keep that secret _Value_ for later use (you _won\[aq]t_ be able to see this value afterwards). +4. Under \[ga]manage\[ga] select \[ga]API permissions\[ga], click \[ga]Add a permission\[ga] and select \[ga]Microsoft Graph\[ga] then select \[ga]delegated permissions\[ga]. +5. Search and select the following permissions: \[ga]Files.Read\[ga], \[ga]Files.ReadWrite\[ga], \[ga]Files.Read.All\[ga], \[ga]Files.ReadWrite.All\[ga], \[ga]offline_access\[ga], \[ga]User.Read\[ga] and \[ga]Sites.Read.All\[ga] (if custom access scopes are configured, select the permissions accordingly). Once selected click \[ga]Add permissions\[ga] at the bottom. + +Now the application is complete. Run \[ga]rclone config\[ga] to create or edit a OneDrive remote. +Supply the app ID and password as Client ID and Secret, respectively. rclone will walk you through the remaining steps. + +The access_scopes option allows you to configure the permissions requested by rclone. +See [Microsoft Docs](https://docs.microsoft.com/en-us/graph/permissions-reference#files-permissions) for more information about the different scopes. + +The \[ga]Sites.Read.All\[ga] permission is required if you need to [search SharePoint sites when configuring the remote](https://github.com/rclone/rclone/pull/5883). However, if that permission is not assigned, you need to exclude \[ga]Sites.Read.All\[ga] from your access scopes or set \[ga]disable_site_permission\[ga] option to true in the advanced options. + +#### Creating Client ID for OneDrive Business + +The steps for OneDrive Personal may or may not work for OneDrive Business, depending on the security settings of the organization. A common error is that the publisher of the App is not verified. -.PP -You may try to verify you -account (https://docs.microsoft.com/en-us/azure/active-directory/develop/publisher-verification-overview), -or try to limit the App to your organization only, as shown below. -.IP "1." 3 -Make sure to create the App with your business account. -.IP "2." 3 -Follow the steps above to create an App. 
-However, we need a different account type here:
-\f[C]Accounts in this organizational directory only (*** - Single tenant)\f[R].
-Note that you can also change the account type after creating the App.
-.IP "3." 3
-Find the tenant
-ID (https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-how-to-find-tenant)
-of your organization.
-.IP "4." 3
-In the rclone config, set \f[C]auth_url\f[R] to
-\f[C]https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/authorize\f[R].
-.IP "5." 3
-In the rclone config, set \f[C]token_url\f[R] to
-\f[C]https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token\f[R].
-.PP
-Note: If you have a special region, you may need a different host in
-step 4 and 5.
-Here are some
-hints (https://github.com/rclone/rclone/blob/bc23bf11db1c78c6ebbf8ea538fbebf7058b4176/backend/onedrive/onedrive.go#L86).
-.SS Modification time and hashes
-.PP
+You may try to [verify your account](https://docs.microsoft.com/en-us/azure/active-directory/develop/publisher-verification-overview), or try to limit the App to your organization only, as shown below.
+
+1. Make sure to create the App with your business account.
+2. Follow the steps above to create an App. However, we need a different account type here: \[ga]Accounts in this organizational directory only (*** - Single tenant)\[ga]. Note that you can also change the account type after creating the App.
+3. Find the [tenant ID](https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-how-to-find-tenant) of your organization.
+4. In the rclone config, set \[ga]auth_url\[ga] to \[ga]https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/authorize\[ga].
+5. In the rclone config, set \[ga]token_url\[ga] to \[ga]https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token\[ga].
+
+Note: If you have a special region, you may need a different host in step 4 and 5. Here are [some hints](https://github.com/rclone/rclone/blob/bc23bf11db1c78c6ebbf8ea538fbebf7058b4176/backend/onedrive/onedrive.go#L86).
+
+
+### Modification time and hashes
+
OneDrive allows modification times to be set on objects accurate to 1
-second.
-These will be used to detect whether objects need syncing or not.
-.PP
+second. These will be used to detect whether objects need syncing or
+not.
+
OneDrive Personal, OneDrive for Business and Sharepoint Server support
-QuickXorHash (https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash).
-.PP
-Before rclone 1.62 the default hash for Onedrive Personal was
-\f[C]SHA1\f[R].
+[QuickXorHash](https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash).
+
+Before rclone 1.62 the default hash for Onedrive Personal was \[ga]SHA1\[ga].
For rclone 1.62 and above the default for all Onedrive backends is
-\f[C]QuickXorHash\f[R].
-.PP
-Starting from July 2023 \f[C]SHA1\f[R] support is being phased out in
-Onedrive Personal in favour of \f[C]QuickXorHash\f[R].
-If necessary the \f[C]--onedrive-hash-type\f[R] flag (or
-\f[C]hash_type\f[R] config option) can be used to select \f[C]SHA1\f[R]
-during the transition period if this is important your workflow.
-.PP
-For all types of OneDrive you can use the \f[C]--checksum\f[R] flag.
-.SS Restricted filename characters
-.PP
-In addition to the default restricted characters
-set (https://rclone.org/overview/#restricted-characters) the following
-characters are also replaced:
-.PP
+\[ga]QuickXorHash\[ga].
+
+Starting from July 2023 \[ga]SHA1\[ga] support is being phased out in Onedrive
+Personal in favour of \[ga]QuickXorHash\[ga]. If necessary the
+\[ga]--onedrive-hash-type\[ga] flag (or \[ga]hash_type\[ga] config option) can be used
+to select \[ga]SHA1\[ga] during the transition period if this is important to
+your workflow.
+
+For all types of OneDrive you can use the \[ga]--checksum\[ga] flag.
+
+### Restricted filename characters
+
+In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
+the following characters are also replaced:
+
-.TS
-tab(@);
-l c c.
-T{ -Character -T}@T{ -Value -T}@T{ -Replacement -T} -_ -T{ -\[dq] -T}@T{ -0x22 -T}@T{ -\[uFF02] -T} -T{ -* -T}@T{ -0x2A -T}@T{ -\[uFF0A] -T} -T{ -: -T}@T{ -0x3A -T}@T{ -\[uFF1A] -T} -T{ -< -T}@T{ -0x3C -T}@T{ -\[uFF1C] -T} -T{ -> -T}@T{ -0x3E -T}@T{ -\[uFF1E] -T} -T{ -? -T}@T{ -0x3F -T}@T{ -\[uFF1F] -T} -T{ -\[rs] -T}@T{ -0x5C -T}@T{ -\[uFF3C] -T} -T{ -| -T}@T{ -0x7C -T}@T{ -\[uFF5C] -T} -.TE -.PP +\[ga]QuickXorHash\[ga]. + +Starting from July 2023 \[ga]SHA1\[ga] support is being phased out in Onedrive +Personal in favour of \[ga]QuickXorHash\[ga]. If necessary the +\[ga]--onedrive-hash-type\[ga] flag (or \[ga]hash_type\[ga] config option) can be used +to select \[ga]SHA1\[ga] during the transition period if this is important +your workflow. + +For all types of OneDrive you can use the \[ga]--checksum\[ga] flag. + +### Restricted filename characters + +In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) +the following characters are also replaced: + +| Character | Value | Replacement | +| --------- |:-----:|:-----------:| +| \[dq] | 0x22 | \[uFF02] | +| * | 0x2A | \[uFF0A] | +| : | 0x3A | \[uFF1A] | +| < | 0x3C | \[uFF1C] | +| > | 0x3E | \[uFF1E] | +| ? | 0x3F | \[uFF1F] | +| \[rs] | 0x5C | \[uFF3C] | +| \[rs]| | 0x7C | \[uFF5C] | + File names can also not end with the following characters. These only get replaced if they are the last character in the name: -.PP -.TS -tab(@); -l c c. -T{ -Character -T}@T{ -Value -T}@T{ -Replacement -T} -_ -T{ -SP -T}@T{ -0x20 -T}@T{ -\[u2420] -T} -T{ -\&. -T}@T{ -0x2E -T}@T{ -\[uFF0E] -T} -.TE -.PP + +| Character | Value | Replacement | +| --------- |:-----:|:-----------:| +| SP | 0x20 | \[u2420] | +| . | 0x2E | \[uFF0E] | + File names can also not begin with the following characters. These only get replaced if they are the first character in the name: -.PP -.TS -tab(@); -l c c. -T{ -Character -T}@T{ -Value -T}@T{ -Replacement -T} -_ -T{ -SP -T}@T{ -0x20 -T}@T{ -\[u2420] -T} -T{ -\[ti] -T}@T{ -0x7E -T}@T{ -\[uFF5E] -T} -.TE -.PP -Invalid UTF-8 bytes will also be -replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t -be used in JSON strings. -.SS Deleting files -.PP -Any files you delete with rclone will end up in the trash. -Microsoft doesn\[aq]t provide an API to permanently delete files, nor to -empty the trash, so you will have to do that with one of Microsoft\[aq]s -apps or via the OneDrive website. -.SS Standard options -.PP + +| Character | Value | Replacement | +| --------- |:-----:|:-----------:| +| SP | 0x20 | \[u2420] | +| \[ti] | 0x7E | \[uFF5E] | + +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), +as they can\[aq]t be used in JSON strings. + +### Deleting files + +Any files you delete with rclone will end up in the trash. Microsoft +doesn\[aq]t provide an API to permanently delete files, nor to empty the +trash, so you will have to do that with one of Microsoft\[aq]s apps or via +the OneDrive website. + + +### Standard options + Here are the Standard options specific to onedrive (Microsoft OneDrive). -.SS --onedrive-client-id -.PP + +#### --onedrive-client-id + OAuth Client Id. -.PP + Leave blank normally. 
-.PP + Properties: -.IP \[bu] 2 -Config: client_id -.IP \[bu] 2 -Env Var: RCLONE_ONEDRIVE_CLIENT_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --onedrive-client-secret -.PP + +- Config: client_id +- Env Var: RCLONE_ONEDRIVE_CLIENT_ID +- Type: string +- Required: false + +#### --onedrive-client-secret + OAuth Client Secret. -.PP + Leave blank normally. -.PP + Properties: -.IP \[bu] 2 -Config: client_secret -.IP \[bu] 2 -Env Var: RCLONE_ONEDRIVE_CLIENT_SECRET -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --onedrive-region -.PP + +- Config: client_secret +- Env Var: RCLONE_ONEDRIVE_CLIENT_SECRET +- Type: string +- Required: false + +#### --onedrive-region + Choose national cloud region for OneDrive. -.PP + Properties: -.IP \[bu] 2 -Config: region -.IP \[bu] 2 -Env Var: RCLONE_ONEDRIVE_REGION -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]global\[dq] -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]global\[dq] -.RS 2 -.IP \[bu] 2 -Microsoft Cloud Global -.RE -.IP \[bu] 2 -\[dq]us\[dq] -.RS 2 -.IP \[bu] 2 -Microsoft Cloud for US Government -.RE -.IP \[bu] 2 -\[dq]de\[dq] -.RS 2 -.IP \[bu] 2 -Microsoft Cloud Germany -.RE -.IP \[bu] 2 -\[dq]cn\[dq] -.RS 2 -.IP \[bu] 2 -Azure and Office 365 operated by Vnet Group in China -.RE -.RE -.SS Advanced options -.PP + +- Config: region +- Env Var: RCLONE_ONEDRIVE_REGION +- Type: string +- Default: \[dq]global\[dq] +- Examples: + - \[dq]global\[dq] + - Microsoft Cloud Global + - \[dq]us\[dq] + - Microsoft Cloud for US Government + - \[dq]de\[dq] + - Microsoft Cloud Germany + - \[dq]cn\[dq] + - Azure and Office 365 operated by Vnet Group in China + +### Advanced options + Here are the Advanced options specific to onedrive (Microsoft OneDrive). -.SS --onedrive-token -.PP + +#### --onedrive-token + OAuth Access Token as a JSON blob. -.PP + Properties: -.IP \[bu] 2 -Config: token -.IP \[bu] 2 -Env Var: RCLONE_ONEDRIVE_TOKEN -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --onedrive-auth-url -.PP + +- Config: token +- Env Var: RCLONE_ONEDRIVE_TOKEN +- Type: string +- Required: false + +#### --onedrive-auth-url + Auth server URL. -.PP + Leave blank to use the provider defaults. -.PP + Properties: -.IP \[bu] 2 -Config: auth_url -.IP \[bu] 2 -Env Var: RCLONE_ONEDRIVE_AUTH_URL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --onedrive-token-url -.PP + +- Config: auth_url +- Env Var: RCLONE_ONEDRIVE_AUTH_URL +- Type: string +- Required: false + +#### --onedrive-token-url + Token server url. -.PP + Leave blank to use the provider defaults. -.PP + Properties: -.IP \[bu] 2 -Config: token_url -.IP \[bu] 2 -Env Var: RCLONE_ONEDRIVE_TOKEN_URL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --onedrive-chunk-size -.PP -Chunk size to upload files with - must be multiple of 320k (327,680 -bytes). -.PP -Above this size files will be chunked - must be multiple of 320k -(327,680 bytes) and should not exceed 250M (262,144,000 bytes) else you -may encounter -\[dq]Microsoft.SharePoint.Client.InvalidClientQueryException: The -request message is too big.\[dq] Note that the chunks will be buffered -into memory. -.PP + +- Config: token_url +- Env Var: RCLONE_ONEDRIVE_TOKEN_URL +- Type: string +- Required: false + +#### --onedrive-chunk-size + +Chunk size to upload files with - must be multiple of 320k (327,680 bytes). 
+ +Above this size files will be chunked - must be multiple of 320k (327,680 bytes) and +should not exceed 250M (262,144,000 bytes) else you may encounter \[rs]\[dq]Microsoft.SharePoint.Client.InvalidClientQueryException: The request message is too big.\[rs]\[dq] +Note that the chunks will be buffered into memory. + Properties: -.IP \[bu] 2 -Config: chunk_size -.IP \[bu] 2 -Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 10Mi -.SS --onedrive-drive-id -.PP + +- Config: chunk_size +- Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE +- Type: SizeSuffix +- Default: 10Mi + +#### --onedrive-drive-id + The ID of the drive to use. -.PP + Properties: -.IP \[bu] 2 -Config: drive_id -.IP \[bu] 2 -Env Var: RCLONE_ONEDRIVE_DRIVE_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --onedrive-drive-type -.PP + +- Config: drive_id +- Env Var: RCLONE_ONEDRIVE_DRIVE_ID +- Type: string +- Required: false + +#### --onedrive-drive-type + The type of the drive (personal | business | documentLibrary). -.PP + Properties: -.IP \[bu] 2 -Config: drive_type -.IP \[bu] 2 -Env Var: RCLONE_ONEDRIVE_DRIVE_TYPE -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --onedrive-root-folder-id -.PP + +- Config: drive_type +- Env Var: RCLONE_ONEDRIVE_DRIVE_TYPE +- Type: string +- Required: false + +#### --onedrive-root-folder-id + ID of the root folder. -.PP + This isn\[aq]t normally needed, but in special circumstances you might -know the folder ID that you wish to access but not be able to get there -through a path traversal. -.PP +know the folder ID that you wish to access but not be able to get +there through a path traversal. + + Properties: -.IP \[bu] 2 -Config: root_folder_id -.IP \[bu] 2 -Env Var: RCLONE_ONEDRIVE_ROOT_FOLDER_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --onedrive-access-scopes -.PP + +- Config: root_folder_id +- Env Var: RCLONE_ONEDRIVE_ROOT_FOLDER_ID +- Type: string +- Required: false + +#### --onedrive-access-scopes + Set scopes to be requested by rclone. -.PP -Choose or manually enter a custom space separated list with all scopes, -that rclone should request. -.PP + +Choose or manually enter a custom space separated list with all scopes, that rclone should request. + + Properties: -.IP \[bu] 2 -Config: access_scopes -.IP \[bu] 2 -Env Var: RCLONE_ONEDRIVE_ACCESS_SCOPES -.IP \[bu] 2 -Type: SpaceSepList -.IP \[bu] 2 -Default: Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All -Sites.Read.All offline_access -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All -Sites.Read.All offline_access\[dq] -.RS 2 -.IP \[bu] 2 -Read and write access to all resources -.RE -.IP \[bu] 2 -\[dq]Files.Read Files.Read.All Sites.Read.All offline_access\[dq] -.RS 2 -.IP \[bu] 2 -Read only access to all resources -.RE -.IP \[bu] 2 -\[dq]Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All -offline_access\[dq] -.RS 2 -.IP \[bu] 2 -Read and write access to all resources, without the ability to browse -SharePoint sites. 
-.IP \[bu] 2 -Same as if disable_site_permission was set to true -.RE -.RE -.SS --onedrive-disable-site-permission -.PP + +- Config: access_scopes +- Env Var: RCLONE_ONEDRIVE_ACCESS_SCOPES +- Type: SpaceSepList +- Default: Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access +- Examples: + - \[dq]Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access\[dq] + - Read and write access to all resources + - \[dq]Files.Read Files.Read.All Sites.Read.All offline_access\[dq] + - Read only access to all resources + - \[dq]Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All offline_access\[dq] + - Read and write access to all resources, without the ability to browse SharePoint sites. + - Same as if disable_site_permission was set to true + +#### --onedrive-disable-site-permission + Disable the request for Sites.Read.All permission. -.PP -If set to true, you will no longer be able to search for a SharePoint -site when configuring drive ID, because rclone will not request -Sites.Read.All permission. -Set it to true if your organization didn\[aq]t assign Sites.Read.All -permission to the application, and your organization disallows users to -consent app permission request on their own. -.PP + +If set to true, you will no longer be able to search for a SharePoint site when +configuring drive ID, because rclone will not request Sites.Read.All permission. +Set it to true if your organization didn\[aq]t assign Sites.Read.All permission to the +application, and your organization disallows users to consent app permission +request on their own. + Properties: -.IP \[bu] 2 -Config: disable_site_permission -.IP \[bu] 2 -Env Var: RCLONE_ONEDRIVE_DISABLE_SITE_PERMISSION -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --onedrive-expose-onenote-files -.PP + +- Config: disable_site_permission +- Env Var: RCLONE_ONEDRIVE_DISABLE_SITE_PERMISSION +- Type: bool +- Default: false + +#### --onedrive-expose-onenote-files + Set to make OneNote files show up in directory listings. -.PP + By default, rclone will hide OneNote files in directory listings because -operations like \[dq]Open\[dq] and \[dq]Update\[dq] won\[aq]t work on -them. -But this behaviour may also prevent you from deleting them. -If you want to delete OneNote files or otherwise want them to show up in -directory listing, set this option. -.PP +operations like \[dq]Open\[dq] and \[dq]Update\[dq] won\[aq]t work on them. But this +behaviour may also prevent you from deleting them. If you want to +delete OneNote files or otherwise want them to show up in directory +listing, set this option. + Properties: -.IP \[bu] 2 -Config: expose_onenote_files -.IP \[bu] 2 -Env Var: RCLONE_ONEDRIVE_EXPOSE_ONENOTE_FILES -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --onedrive-server-side-across-configs -.PP + +- Config: expose_onenote_files +- Env Var: RCLONE_ONEDRIVE_EXPOSE_ONENOTE_FILES +- Type: bool +- Default: false + +#### --onedrive-server-side-across-configs + Deprecated: use --server-side-across-configs instead. -.PP -Allow server-side operations (e.g. -copy) to work across different onedrive configs. -.PP -This will only work if you are copying between two OneDrive -\f[I]Personal\f[R] drives AND the files to copy are already shared -between them. -In other cases, rclone will fall back to normal copy (which will be -slightly slower). -.PP + +Allow server-side operations (e.g. copy) to work across different onedrive configs. 
+ +This will only work if you are copying between two OneDrive *Personal* drives AND +the files to copy are already shared between them. In other cases, rclone will +fall back to normal copy (which will be slightly slower). + Properties: -.IP \[bu] 2 -Config: server_side_across_configs -.IP \[bu] 2 -Env Var: RCLONE_ONEDRIVE_SERVER_SIDE_ACROSS_CONFIGS -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --onedrive-list-chunk -.PP + +- Config: server_side_across_configs +- Env Var: RCLONE_ONEDRIVE_SERVER_SIDE_ACROSS_CONFIGS +- Type: bool +- Default: false + +#### --onedrive-list-chunk + Size of listing chunk. -.PP + Properties: -.IP \[bu] 2 -Config: list_chunk -.IP \[bu] 2 -Env Var: RCLONE_ONEDRIVE_LIST_CHUNK -.IP \[bu] 2 -Type: int -.IP \[bu] 2 -Default: 1000 -.SS --onedrive-no-versions -.PP + +- Config: list_chunk +- Env Var: RCLONE_ONEDRIVE_LIST_CHUNK +- Type: int +- Default: 1000 + +#### --onedrive-no-versions + Remove all versions on modifying operations. -.PP + Onedrive for business creates versions when rclone uploads new files overwriting an existing one and when it sets the modification time. -.PP + These versions take up space out of the quota. -.PP -This flag checks for versions after file upload and setting modification -time and removes all but the last version. -.PP -\f[B]NB\f[R] Onedrive personal can\[aq]t currently delete versions so -don\[aq]t use this flag there. -.PP + +This flag checks for versions after file upload and setting +modification time and removes all but the last version. + +**NB** Onedrive personal can\[aq]t currently delete versions so don\[aq]t use +this flag there. + + Properties: -.IP \[bu] 2 -Config: no_versions -.IP \[bu] 2 -Env Var: RCLONE_ONEDRIVE_NO_VERSIONS -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --onedrive-link-scope -.PP + +- Config: no_versions +- Env Var: RCLONE_ONEDRIVE_NO_VERSIONS +- Type: bool +- Default: false + +#### --onedrive-link-scope + Set the scope of the links created by the link command. -.PP + Properties: -.IP \[bu] 2 -Config: link_scope -.IP \[bu] 2 -Env Var: RCLONE_ONEDRIVE_LINK_SCOPE -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]anonymous\[dq] -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]anonymous\[dq] -.RS 2 -.IP \[bu] 2 -Anyone with the link has access, without needing to sign in. -.IP \[bu] 2 -This may include people outside of your organization. -.IP \[bu] 2 -Anonymous link support may be disabled by an administrator. -.RE -.IP \[bu] 2 -\[dq]organization\[dq] -.RS 2 -.IP \[bu] 2 -Anyone signed into your organization (tenant) can use the link to get -access. -.IP \[bu] 2 -Only available in OneDrive for Business and SharePoint. -.RE -.RE -.SS --onedrive-link-type -.PP + +- Config: link_scope +- Env Var: RCLONE_ONEDRIVE_LINK_SCOPE +- Type: string +- Default: \[dq]anonymous\[dq] +- Examples: + - \[dq]anonymous\[dq] + - Anyone with the link has access, without needing to sign in. + - This may include people outside of your organization. + - Anonymous link support may be disabled by an administrator. + - \[dq]organization\[dq] + - Anyone signed into your organization (tenant) can use the link to get access. + - Only available in OneDrive for Business and SharePoint. + +#### --onedrive-link-type + Set the type of the links created by the link command. 
-.PP
+
Properties:
-.IP \[bu] 2
-Config: link_type
-.IP \[bu] 2
-Env Var: RCLONE_ONEDRIVE_LINK_TYPE
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Default: \[dq]view\[dq]
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]view\[dq]
-.RS 2
-.IP \[bu] 2
-Creates a read-only link to the item.
-.RE
-.IP \[bu] 2
-\[dq]edit\[dq]
-.RS 2
-.IP \[bu] 2
-Creates a read-write link to the item.
-.RE
-.IP \[bu] 2
-\[dq]embed\[dq]
-.RS 2
-.IP \[bu] 2
-Creates an embeddable link to the item.
-.RE
-.RE
-.SS --onedrive-link-password
-.PP
+
+- Config: link_type
+- Env Var: RCLONE_ONEDRIVE_LINK_TYPE
+- Type: string
+- Default: \[dq]view\[dq]
+- Examples:
+    - \[dq]view\[dq]
+        - Creates a read-only link to the item.
+    - \[dq]edit\[dq]
+        - Creates a read-write link to the item.
+    - \[dq]embed\[dq]
+        - Creates an embeddable link to the item.
+
+#### --onedrive-link-password
+
Set the password for links created by the link command.
-.PP
-At the time of writing this only works with OneDrive personal paid
-accounts.
-.PP
+
+At the time of writing this only works with OneDrive personal paid accounts.
+
+
Properties:
-.IP \[bu] 2
-Config: link_password
-.IP \[bu] 2
-Env Var: RCLONE_ONEDRIVE_LINK_PASSWORD
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --onedrive-hash-type
-.PP
+
+- Config: link_password
+- Env Var: RCLONE_ONEDRIVE_LINK_PASSWORD
+- Type: string
+- Required: false
+
+#### --onedrive-hash-type
+
Specify the hash in use for the backend.
-.PP
-This specifies the hash type in use.
-If set to \[dq]auto\[dq] it will use the default hash which is
-QuickXorHash.
-.PP
+
+This specifies the hash type in use. If set to \[dq]auto\[dq] it will use the
+default hash which is QuickXorHash.
+
Before rclone 1.62 an SHA1 hash was used by default for Onedrive
-Personal.
-For 1.62 and later the default is to use a QuickXorHash for all onedrive
-types.
-If an SHA1 hash is desired then set this option accordingly.
-.PP
-From July 2023 QuickXorHash will be the only available hash for both
-OneDrive for Business and OneDriver Personal.
-.PP
+Personal. For 1.62 and later the default is to use a QuickXorHash for
+all onedrive types. If an SHA1 hash is desired then set this option
+accordingly.
+
+From July 2023 QuickXorHash will be the only available hash for
+both OneDrive for Business and OneDrive Personal.
+
This can be set to \[dq]none\[dq] to not use any hashes.
-.PP
-If the hash requested does not exist on the object, it will be returned
-as an empty string which is treated as a missing hash by rclone.
+
+If the hash requested does not exist on the object, it will be
+returned as an empty string which is treated as a missing hash by
+rclone.
+
+
Properties:
-.IP \[bu] 2
-Config: hash_type
-.IP \[bu] 2
-Env Var: RCLONE_ONEDRIVE_HASH_TYPE
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Default: \[dq]auto\[dq]
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]auto\[dq]
-.RS 2
-.IP \[bu] 2
-Rclone chooses the best hash
-.RE
-.IP \[bu] 2
-\[dq]quickxor\[dq]
-.RS 2
-.IP \[bu] 2
-QuickXor
-.RE
-.IP \[bu] 2
-\[dq]sha1\[dq]
-.RS 2
-.IP \[bu] 2
-SHA1
-.RE
-.IP \[bu] 2
-\[dq]sha256\[dq]
-.RS 2
-.IP \[bu] 2
-SHA256
-.RE
-.IP \[bu] 2
-\[dq]crc32\[dq]
-.RS 2
-.IP \[bu] 2
-CRC32
-.RE
-.IP \[bu] 2
-\[dq]none\[dq]
-.RS 2
-.IP \[bu] 2
-None - don\[aq]t use any hashes
-.RE
-.RE
-.SS --onedrive-av-override
-.PP
+
+- Config: hash_type
+- Env Var: RCLONE_ONEDRIVE_HASH_TYPE
+- Type: string
+- Default: \[dq]auto\[dq]
+- Examples:
+    - \[dq]auto\[dq]
+        - Rclone chooses the best hash
+    - \[dq]quickxor\[dq]
+        - QuickXor
+    - \[dq]sha1\[dq]
+        - SHA1
+    - \[dq]sha256\[dq]
+        - SHA256
+    - \[dq]crc32\[dq]
+        - CRC32
+    - \[dq]none\[dq]
+        - None - don\[aq]t use any hashes
+
+#### --onedrive-av-override
+
Allows download of files the server thinks have a virus.
-.PP
+
The onedrive/sharepoint server may check files uploaded with an Anti
-Virus checker.
-If it detects any potential viruses or malware it will block download of
-the file.
-.PP
+Virus checker. If it detects any potential viruses or malware it will
+block download of the file.
+
In this case you will see a message like this
-.IP
-.nf
-\f[C]
-server reports this file is infected with a virus - use --onedrive-av-override to download anyway: Infected (name of virus): 403 Forbidden:
-\f[R]
-.fi
-.PP
-If you are 100% sure you want to download this file anyway then use the
---onedrive-av-override flag, or av_override = true in the config file.
-.PP
+
+    server reports this file is infected with a virus - use --onedrive-av-override to download anyway: Infected (name of virus): 403 Forbidden:
+
+If you are 100% sure you want to download this file anyway then use
+the --onedrive-av-override flag, or av_override = true in the config
+file.
+
+
Properties:
-.IP \[bu] 2
-Config: av_override
-.IP \[bu] 2
-Env Var: RCLONE_ONEDRIVE_AV_OVERRIDE
-.IP \[bu] 2
-Type: bool
-.IP \[bu] 2
-Default: false
-.SS --onedrive-encoding
-.PP
+
+- Config: av_override
+- Env Var: RCLONE_ONEDRIVE_AV_OVERRIDE
+- Type: bool
+- Default: false
+
+#### --onedrive-encoding
+
The encoding for the backend.
-.PP
-See the encoding section in the
-overview (https://rclone.org/overview/#encoding) for more info.
-.PP
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
Properties:
-.IP \[bu] 2
-Config: encoding
-.IP \[bu] 2
-Env Var: RCLONE_ONEDRIVE_ENCODING
-.IP \[bu] 2
-Type: MultiEncoder
-.IP \[bu] 2
-Default:
-Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot
-.SS Limitations
-.PP
-If you don\[aq]t use rclone for 90 days the refresh token will expire.
-This will result in authorization problems.
-This is easy to fix by running the
-\f[C]rclone config reconnect remote:\f[R] command to get a new token and
-refresh token.
-.SS Naming
-.PP
-Note that OneDrive is case insensitive so you can\[aq]t have a file
-called \[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq].
-.PP
+
+- Config: encoding
+- Env Var: RCLONE_ONEDRIVE_ENCODING
+- Type: MultiEncoder
+- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot
+
+
+
+## Limitations
+
+If you don\[aq]t use rclone for 90 days the refresh token will
+expire. This will result in authorization problems. This is easy to
+fix by running the \[ga]rclone config reconnect remote:\[ga] command to get a
+new token and refresh token.
+
+### Naming
+
+Note that OneDrive is case insensitive so you can\[aq]t have a
+file called \[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq].
+
There are quite a few characters that can\[aq]t be in OneDrive file
-names.
-These can\[aq]t occur on Windows platforms, but on non-Windows platforms
-they are common.
-Rclone will map these names to and from an identical looking unicode
-equivalent.
-For example if a file has a \f[C]?\f[R] in it will be mapped to
-\f[C]\[uFF1F]\f[R] instead.
-.SS File sizes
-.PP
-The largest allowed file size is 250 GiB for both OneDrive Personal and
-OneDrive for Business (Updated 13 Jan
-2021) (https://support.microsoft.com/en-us/office/invalid-file-names-and-file-types-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa?ui=en-us&rs=en-us&ad=us#individualfilesize).
-.SS Path length
-.PP
-The entire path, including the file name, must contain fewer than 400
-characters for OneDrive, OneDrive for Business and SharePoint Online.
-If you are encrypting file and folder names with rclone, you may want to
-pay attention to this limitation because the encrypted names are
-typically longer than the original ones.
-.SS Number of files
-.PP
+names. These can\[aq]t occur on Windows platforms, but on non-Windows
+platforms they are common. Rclone will map these names to and from an
+identical looking unicode equivalent. For example if a file has a \[ga]?\[ga]
+in it, it will be mapped to \[ga]\[uFF1F]\[ga] instead.
+
+### File sizes
+
+The largest allowed file size is 250 GiB for both OneDrive Personal and OneDrive for Business [(Updated 13 Jan 2021)](https://support.microsoft.com/en-us/office/invalid-file-names-and-file-types-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa?ui=en-us&rs=en-us&ad=us#individualfilesize).
+
+### Path length
+
+The entire path, including the file name, must contain fewer than 400 characters for OneDrive, OneDrive for Business and SharePoint Online. If you are encrypting file and folder names with rclone, you may want to pay attention to this limitation because the encrypted names are typically longer than the original ones.
+
+### Number of files
+
OneDrive seems to be OK with at least 50,000 files in a folder, but at
-100,000 rclone will get errors listing the directory like
-\f[C]couldn\[cq]t list files: UnknownError:\f[R].
-See #2707 (https://github.com/rclone/rclone/issues/2707) for more info.
+100,000 rclone will get errors listing the directory like \[ga]couldn\[cq]t
+list files: UnknownError:\[ga]. See
+[#2707](https://github.com/rclone/rclone/issues/2707) for more info.
+
+An official document about the limitations for different types of OneDrive can be found [here](https://support.office.com/en-us/article/invalid-file-names-and-file-types-in-onedrive-onedrive-for-business-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa).
+
+## Versions
+
Every change in a file on OneDrive causes the service to create a new
-version of the file.
-This counts against a users quota.
-For example changing the modification time of a file creates a second
+version of the file. This counts against a user\[aq]s quota. For
+example changing the modification time of a file creates a second
version, so the file apparently uses twice the space.
-.PP
-For example the \f[C]copy\f[R] command is affected by this as rclone
-copies the file and then afterwards sets the modification time to match
-the source file which uses another version.
-.PP
-You can use the \f[C]rclone cleanup\f[R] command (see below) to remove
-all old versions.
-.PP
-Or you can set the \f[C]no_versions\f[R] parameter to \f[C]true\f[R] and
-rclone will remove versions after operations which create new versions.
-This takes extra transactions so only enable it if you need it.
-.PP
-\f[B]Note\f[R] At the time of writing Onedrive Personal creates versions
+
+For example the \[ga]copy\[ga] command is affected by this as rclone copies
+the file and then afterwards sets the modification time to match the
+source file which uses another version.
+
+You can use the \[ga]rclone cleanup\[ga] command (see below) to remove all old
+versions.
+
+Or you can set the \[ga]no_versions\[ga] parameter to \[ga]true\[ga] and rclone will
+remove versions after operations which create new versions. This takes
+extra transactions so only enable it if you need it.
+
+**Note** At the time of writing Onedrive Personal creates versions
(but not for setting the modification time) but the API for removing
-them returns \[dq]API not found\[dq] so cleanup and
-\f[C]no_versions\f[R] should not be used on Onedrive Personal.
+them returns \[dq]API not found\[dq] so cleanup and \[ga]no_versions\[ga] should not
+be used on Onedrive Personal.
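+
+As a minimal sketch of the two ways to enable this (the remote name
+\[ga]business:\[ga] and the paths here are placeholders - substitute your own),
+you can either pass the backend flag for a single run:
+
+    rclone sync /home/source business:backup --onedrive-no-versions
+
+or set the underlying option permanently in the config file:
+
+    [business]
+    type = onedrive
+    no_versions = true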
+ +### Disabling versioning + +Starting October 2018, users will no longer be able to +disable versioning by default. This is because Microsoft has brought +an +[update](https://techcommunity.microsoft.com/t5/Microsoft-OneDrive-Blog/New-Updates-to-OneDrive-and-SharePoint-Team-Site-Versioning/ba-p/204390) +to the mechanism. To change this new default setting, a PowerShell +command is required to be run by a SharePoint admin. If you are an +admin, you can run these commands in PowerShell to change that +setting: + +1. \[ga]Install-Module -Name Microsoft.Online.SharePoint.PowerShell\[ga] (in case you haven\[aq]t installed this already) +2. \[ga]Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking\[ga] +3. \[ga]Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU\[at]YOURSITE.COM\[ga] (replacing \[ga]YOURSITE\[ga], \[ga]YOU\[ga], \[ga]YOURSITE.COM\[ga] with the actual values; this will prompt for your credentials) +4. \[ga]Set-SPOTenant -EnableMinimumVersionRequirement $False\[ga] +5. \[ga]Disconnect-SPOService\[ga] (to disconnect from the server) + +*Below are the steps for normal users to disable versioning. If you don\[aq]t see the \[dq]No Versioning\[dq] option, make sure the above requirements are met.* + +User [Weropol](https://github.com/Weropol) has found a method to disable versioning on OneDrive -.IP "1." 3 -Open the settings menu by clicking on the gear symbol at the top of the -OneDrive Business page. -.IP "2." 3 -Click Site settings. -.IP "3." 3 -Once on the Site settings page, navigate to Site Administration > Site -libraries and lists. -.IP "4." 3 -Click Customize \[dq]Documents\[dq]. -.IP "5." 3 -Click General Settings > Versioning Settings. -.IP "6." 3 -Under Document Version History select the option No versioning. -Note: This will disable the creation of new file versions, but will not -remove any previous versions. -Your documents are safe. -.IP "7." 3 -Apply the changes by clicking OK. -.IP "8." 3 -Use rclone to upload or modify files. -(I also use the --no-update-modtime flag) -.IP "9." 3 -Restore the versioning settings after using rclone. -(Optional) -.SS Cleanup -.PP -OneDrive supports \f[C]rclone cleanup\f[R] which causes rclone to look -through every file under the path supplied and delete all version but -the current version. -Because this involves traversing all the files, then querying each file -for versions it can be quite slow. -Rclone does \f[C]--checkers\f[R] tests in parallel. -The command also supports \f[C]--interactive\f[R]/\f[C]i\f[R] or -\f[C]--dry-run\f[R] which is a great way to see what it would do. -.IP -.nf -\f[C] -rclone cleanup --interactive remote:path/subdir # interactively remove all old version for path/subdir -rclone cleanup remote:path/subdir # unconditionally remove all old version for path/subdir + +1. Open the settings menu by clicking on the gear symbol at the top of the OneDrive Business page. +2. Click Site settings. +3. Once on the Site settings page, navigate to Site Administration > Site libraries and lists. +4. Click Customize \[dq]Documents\[dq]. +5. Click General Settings > Versioning Settings. +6. Under Document Version History select the option No versioning. +Note: This will disable the creation of new file versions, but will not remove any previous versions. Your documents are safe. +7. Apply the changes by clicking OK. +8. Use rclone to upload or modify files. (I also use the --no-update-modtime flag) +9. Restore the versioning settings after using rclone. 
(Optional)
+
+## Cleanup
+
+OneDrive supports \[ga]rclone cleanup\[ga] which causes rclone to look through
+every file under the path supplied and delete all versions but the
+current version. Because this involves traversing all the files, then
+querying each file for versions it can be quite slow. Rclone does
+\[ga]--checkers\[ga] tests in parallel. The command also supports \[ga]--interactive\[ga]/\[ga]i\[ga]
+or \[ga]--dry-run\[ga] which is a great way to see what it would do.
+
+    rclone cleanup --interactive remote:path/subdir # interactively remove all old versions for path/subdir
+    rclone cleanup remote:path/subdir # unconditionally remove all old versions for path/subdir
+
+**NB** Onedrive personal can\[aq]t currently delete versions
+
+## Troubleshooting ##
+
+### Excessive throttling or blocked on SharePoint
+
+If you experience excessive throttling or are being blocked on SharePoint then it may help to set the user agent explicitly with a flag like this: \[ga]--user-agent \[dq]ISV|rclone.org|rclone/v1.55.1\[dq]\[ga]
+
+The specific details can be found in the Microsoft document: [Avoid getting throttled or blocked in SharePoint Online](https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online#how-to-decorate-your-http-traffic-to-avoid-throttling)
+
+### Unexpected file size/hash differences on Sharepoint ####
+
+It is a
+[known](https://github.com/OneDrive/onedrive-api-docs/issues/935#issuecomment-441741631)
+issue that Sharepoint (not OneDrive or OneDrive for Business) silently modifies
+uploaded files, mainly Office files (.docx, .xlsx, etc.), causing file size and
+hash checks to fail. There are also other situations that will cause OneDrive to
+report inconsistent file sizes. To use rclone with such
+affected files on Sharepoint, you
+may disable these checks with the following command line arguments:
\f[R]
.fi
.PP
-\f[B]NB\f[R] Onedrive personal can\[aq]t currently delete versions
-.SS Troubleshooting
-.SS Excessive throttling or blocked on SharePoint
-.PP
-If you experience excessive throttling or is being blocked on SharePoint
-then it may help to set the user agent explicitly with a flag like this:
-\f[C]--user-agent \[dq]ISV|rclone.org|rclone/v1.55.1\[dq]\f[R]
-.PP
-The specific details can be found in the Microsoft document: Avoid
-getting throttled or blocked in SharePoint
-Online (https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online#how-to-decorate-your-http-traffic-to-avoid-throttling)
-.SS Unexpected file size/hash differences on Sharepoint
-.PP
-It is a
-known (https://github.com/OneDrive/onedrive-api-docs/issues/935#issuecomment-441741631)
-issue that Sharepoint (not OneDrive or OneDrive for Business) silently
-modifies uploaded files, mainly Office files (.docx, .xlsx, etc.),
-causing file size and hash checks to fail.
-There are also other situations that will cause OneDrive to report
-inconsistent file sizes.
-To use rclone with such affected files on Sharepoint, you may disable
-these checks with the following command line arguments:
-.IP
-.nf
-\f[C]
--ignore-checksum
--ignore-size
-\f[R]
-.fi
-.PP
-Alternatively, if you have write access to the OneDrive files, it may be
-possible to fix this problem for certain files, by attempting the steps
-below.
-Open the web interface for OneDrive (https://onedrive.live.com) and find
-the affected files (which will be in the error messages/log for rclone).
-Simply click on each of these files, causing OneDrive to open them on -the web. -This will cause each file to be converted in place to a format that is -functionally equivalent but which will no longer trigger the size -discrepancy. -Once all problematic files are converted you will no longer need the -ignore options above. -.SS Replacing/deleting existing files on Sharepoint gets \[dq]item not found\[dq] -.PP -It is a -known (https://github.com/OneDrive/onedrive-api-docs/issues/1068) issue -that Sharepoint (not OneDrive or OneDrive for Business) may return -\[dq]item not found\[dq] errors when users try to replace or delete -uploaded files; this seems to mainly affect Office files (.docx, .xlsx, -etc.) and web files (.html, .aspx, etc.). -As a workaround, you may use the \f[C]--backup-dir \f[R] -command line argument so rclone moves the files to be replaced/deleted -into a given backup directory (instead of directly replacing/deleting -them). -For example, to instruct rclone to move the files into the directory -\f[C]rclone-backup-dir\f[R] on backend \f[C]mysharepoint\f[R], you may -use: .IP .nf \f[C] +Alternatively, if you have write access to the OneDrive files, it may be possible +to fix this problem for certain files, by attempting the steps below. +Open the web interface for [OneDrive](https://onedrive.live.com) and find the +affected files (which will be in the error messages/log for rclone). Simply click on +each of these files, causing OneDrive to open them on the web. This will cause each +file to be converted in place to a format that is functionally equivalent +but which will no longer trigger the size discrepancy. Once all problematic files +are converted you will no longer need the ignore options above. + +### Replacing/deleting existing files on Sharepoint gets \[dq]item not found\[dq] #### + +It is a [known](https://github.com/OneDrive/onedrive-api-docs/issues/1068) issue +that Sharepoint (not OneDrive or OneDrive for Business) may return \[dq]item not +found\[dq] errors when users try to replace or delete uploaded files; this seems to +mainly affect Office files (.docx, .xlsx, etc.) and web files (.html, .aspx, etc.). As a workaround, you may use +the \[ga]--backup-dir \[ga] command line argument so rclone moves the +files to be replaced/deleted into a given backup directory (instead of directly +replacing/deleting them). For example, to instruct rclone to move the files into +the directory \[ga]rclone-backup-dir\[ga] on backend \[ga]mysharepoint\[ga], you may use: +\f[R] +.fi +.PP --backup-dir mysharepoint:rclone-backup-dir -\f[R] -.fi -.SS access_denied (AADSTS65005) .IP .nf \f[C] -Error: access_denied -Code: AADSTS65005 -Description: Using application \[aq]rclone\[aq] is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned. +### access\[rs]_denied (AADSTS65005) #### \f[R] .fi .PP -This means that rclone can\[aq]t use the OneDrive for Business API with -your account. -You can\[aq]t do much about it, maybe write an email to your admins. -.PP -However, there are other ways to interact with your OneDrive account. 
-Have a look at the WebDAV backend: https://rclone.org/webdav/#sharepoint -.SS invalid_grant (AADSTS50076) +Error: access_denied Code: AADSTS65005 Description: Using application +\[aq]rclone\[aq] is currently not supported for your organization +[YOUR_ORGANIZATION] because it is in an unmanaged state. +An administrator needs to claim ownership of the company by DNS +validation of [YOUR_ORGANIZATION] before the application rclone can be +provisioned. .IP .nf \f[C] -Error: invalid_grant -Code: AADSTS50076 -Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access \[aq]...\[aq]. +This means that rclone can\[aq]t use the OneDrive for Business API with your account. You can\[aq]t do much about it, maybe write an email to your admins. + +However, there are other ways to interact with your OneDrive account. Have a look at the WebDAV backend: https://rclone.org/webdav/#sharepoint + +### invalid\[rs]_grant (AADSTS50076) #### \f[R] .fi .PP -If you see the error above after enabling multi-factor authentication -for your account, you can fix it by refreshing your OAuth refresh token. -To do that, run \f[C]rclone config\f[R], and choose to edit your -OneDrive backend. -Then, you don\[aq]t need to actually make any changes until you reach -this question: \f[C]Already have a token - refresh?\f[R]. -For this question, answer \f[C]y\f[R] and go through the process to -refresh your token, just like the first time the backend is configured. -After this, rclone should work again for this backend. -.SS Invalid request when making public links -.PP -On Sharepoint and OneDrive for Business, \f[C]rclone link\f[R] may -return an \[dq]Invalid request\[dq] error. -A possible cause is that the organisation admin didn\[aq]t allow public -links to be made for the organisation/sharepoint library. -To fix the permissions as an admin, take a look at the docs: -1 (https://docs.microsoft.com/en-us/sharepoint/turn-external-sharing-on-or-off), -2 (https://support.microsoft.com/en-us/office/set-up-and-manage-access-requests-94b26e0b-2822-49d4-929a-8455698654b3). -.SS Can not access \f[C]Shared\f[R] with me files -.PP -Shared with me files is not supported by rclone -currently (https://github.com/rclone/rclone/issues/4062), but there is a -workaround: -.IP "1." 3 -Visit https://onedrive.live.com (https://onedrive.live.com/) -.IP "2." 3 -Right click a item in \f[C]Shared\f[R], then click -\f[C]Add shortcut to My files\f[R] in the context -[IMAGE: make_shortcut (https://user-images.githubusercontent.com/60313789/206118040-7e762b3b-aa61-41a1-8649-cc18889f3572.png)] -.IP "3." 3 -The shortcut will appear in \f[C]My files\f[R], you can access it with -rclone, it behaves like a normal folder/file. -[IMAGE: in_my_files (https://i.imgur.com/0S8H3li.png)] -[IMAGE: rclone_mount (https://i.imgur.com/2Iq66sW.png)] -.SS Live Photos uploaded from iOS (small video clips in .heic files) -.PP -The iOS OneDrive app introduced upload and -storage (https://techcommunity.microsoft.com/t5/microsoft-onedrive-blog/live-photos-come-to-onedrive/ba-p/1953452) -of Live Photos (https://support.apple.com/en-gb/HT207310) in 2020. -The usage and download of these uploaded Live Photos is unfortunately -still work-in-progress and this introduces several issues when copying, -synchronising and mounting \[en] both in rclone and in the native -OneDrive client on Windows. 
-.PP -The root cause can easily be seen if you locate one of your Live Photos -in the OneDrive web interface. -Then download the photo from the web interface. -You will then see that the size of downloaded .heic file is smaller than -the size displayed in the web interface. -The downloaded file is smaller because it only contains a single frame -(still photo) extracted from the Live Photo (movie) stored in OneDrive. -.PP -The different sizes will cause \f[C]rclone copy/sync\f[R] to repeatedly -recopy unmodified photos something like this: +Error: invalid_grant Code: AADSTS50076 Description: Due to a +configuration change made by your administrator, or because you moved to +a new location, you must use multi-factor authentication to access +\[aq]...\[aq]. .IP .nf \f[C] -DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667) -DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK -INFO : 20230203_123826234_iOS.heic: Copied (replaced existing) -\f[R] -.fi -.PP -These recopies can be worked around by adding \f[C]--ignore-size\f[R]. -Please note that this workaround only syncs the still-picture not the -movie clip, and relies on modification dates being correctly updated on -all files in all situations. -.PP -The different sizes will also cause \f[C]rclone check\f[R] to report -size errors something like this: -.IP -.nf -\f[C] -ERROR : 20230203_123826234_iOS.heic: sizes differ -\f[R] -.fi -.PP -These check errors can be suppressed by adding \f[C]--ignore-size\f[R]. -.PP -The different sizes will also cause \f[C]rclone mount\f[R] to fail -downloading with an error something like this: -.IP -.nf -\f[C] -ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF -\f[R] -.fi -.PP -or like this when using \f[C]--cache-mode=full\f[R]: -.IP -.nf -\f[C] -INFO : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: -ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: -\f[R] -.fi -.SH OpenDrive -.PP -Paths are specified as \f[C]remote:path\f[R] -.PP -Paths may be as deep as required, e.g. -\f[C]remote:directory/subdirectory\f[R]. -.SS Configuration -.PP -Here is an example of how to make a remote called \f[C]remote\f[R]. -First run: -.IP -.nf -\f[C] - rclone config -\f[R] -.fi -.PP +If you see the error above after enabling multi-factor authentication for your account, you can fix it by refreshing your OAuth refresh token. To do that, run \[ga]rclone config\[ga], and choose to edit your OneDrive backend. Then, you don\[aq]t need to actually make any changes until you reach this question: \[ga]Already have a token - refresh?\[ga]. For this question, answer \[ga]y\[ga] and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend. + +### Invalid request when making public links #### + +On Sharepoint and OneDrive for Business, \[ga]rclone link\[ga] may return an \[dq]Invalid +request\[dq] error. A possible cause is that the organisation admin didn\[aq]t allow +public links to be made for the organisation/sharepoint library. 
To fix the
+permissions as an admin, take a look at the docs:
+[1](https://docs.microsoft.com/en-us/sharepoint/turn-external-sharing-on-or-off),
+[2](https://support.microsoft.com/en-us/office/set-up-and-manage-access-requests-94b26e0b-2822-49d4-929a-8455698654b3).
+
+### Can not access \[ga]Shared\[ga] with me files
+
+Shared with me files are not supported by rclone [currently](https://github.com/rclone/rclone/issues/4062), but there is a workaround:
+
+1. Visit [https://onedrive.live.com](https://onedrive.live.com/)
+2. Right click an item in \[ga]Shared\[ga], then click \[ga]Add shortcut to My files\[ga] in the context menu
+    ![make_shortcut](https://user-images.githubusercontent.com/60313789/206118040-7e762b3b-aa61-41a1-8649-cc18889f3572.png \[dq]Screenshot (Shared with me)\[dq])
+3. The shortcut will appear in \[ga]My files\[ga]; you can access it with rclone, and it behaves like a normal folder/file.
+    ![in_my_files](https://i.imgur.com/0S8H3li.png \[dq]Screenshot (My Files)\[dq])
+    ![rclone_mount](https://i.imgur.com/2Iq66sW.png \[dq]Screenshot (rclone mount)\[dq])
+
+### Live Photos uploaded from iOS (small video clips in .heic files)
+
+The iOS OneDrive app introduced [upload and storage](https://techcommunity.microsoft.com/t5/microsoft-onedrive-blog/live-photos-come-to-onedrive/ba-p/1953452)
+of [Live Photos](https://support.apple.com/en-gb/HT207310) in 2020.
+The usage and download of these uploaded Live Photos are unfortunately still work-in-progress
+and this introduces several issues when copying, synchronising and mounting \[en] both in rclone and in the native OneDrive client on Windows.
+
+The root cause can easily be seen if you locate one of your Live Photos in the OneDrive web interface.
+Then download the photo from the web interface. You will then see that the size of the downloaded .heic file is smaller than the size displayed in the web interface.
+The downloaded file is smaller because it only contains a single frame (still photo) extracted from the Live Photo (movie) stored in OneDrive.
+
+The different sizes will cause \[ga]rclone copy/sync\[ga] to repeatedly recopy unmodified photos something like this:
+
+    DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667)
+    DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK
+    INFO : 20230203_123826234_iOS.heic: Copied (replaced existing)
+
+These recopies can be worked around by adding \[ga]--ignore-size\[ga]. Please note that this workaround only syncs the still-picture not the movie clip,
+and relies on modification dates being correctly updated on all files in all situations.
+
+The different sizes will also cause \[ga]rclone check\[ga] to report size errors something like this:
+
+    ERROR : 20230203_123826234_iOS.heic: sizes differ
+
+These check errors can be suppressed by adding \[ga]--ignore-size\[ga].
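+For example, a check run that tolerates the Live Photo size mismatch might
+look like this (the local path and remote name are placeholders):
+
+    rclone check --ignore-size /home/Pictures onedrive:Pictures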
+
+The different sizes will also cause \[ga]rclone mount\[ga] to fail downloading with an error something like this:
+
+    ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF
+
+or like this when using \[ga]--cache-mode=full\[ga]:
+
+    INFO : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
+    ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
+
+# OpenDrive
+
+Paths are specified as \[ga]remote:path\[ga]
+
+Paths may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga].
+
+## Configuration
+
+Here is an example of how to make a remote called \[ga]remote\[ga]. First run:
+
+    rclone config
+
This will guide you through an interactive setup process:
\f[R]
.fi
.IP
.nf
\f[C]
n) New remote
d) Delete remote
q) Quit config
e/n/d/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / OpenDrive
   \[rs] \[dq]opendrive\[dq]
[snip]
Storage> opendrive
Username
username>
Password
y) Yes type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:
--------------------
[remote]
username =
password = *** ENCRYPTED ***
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
\f[R]
.fi
.IP
.nf
\f[C]
List directories in top level of your OpenDrive

    rclone lsd remote:

List all the files in your OpenDrive

    rclone ls remote:

To copy a local directory to an OpenDrive directory called backup

    rclone copy /home/source remote:backup

### Modified time and MD5SUMs

OpenDrive allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
not.
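
As an illustrative sketch (using the \[ga]remote:\[ga] configured above; the local
path is a placeholder), a plain sync decides what to transfer from size and
modification time, while \[ga]--checksum\[ga] compares MD5s instead:

    rclone sync /home/source remote:backup
    rclone sync --checksum /home/source remote:backup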
+ +### Restricted filename characters + +| Character | Value | Replacement | +| --------- |:-----:|:-----------:| +| NUL | 0x00 | \[u2400] | +| / | 0x2F | \[uFF0F] | +| \[dq] | 0x22 | \[uFF02] | +| * | 0x2A | \[uFF0A] | +| : | 0x3A | \[uFF1A] | +| < | 0x3C | \[uFF1C] | +| > | 0x3E | \[uFF1E] | +| ? | 0x3F | \[uFF1F] | +| \[rs] | 0x5C | \[uFF3C] | +| \[rs]| | 0x7C | \[uFF5C] | + File names can also not begin or end with the following characters. -These only get replaced if they are the first or last character in the -name: -.PP -.TS -tab(@); -l c c. -T{ -Character -T}@T{ -Value -T}@T{ -Replacement -T} -_ -T{ -SP -T}@T{ -0x20 -T}@T{ -\[u2420] -T} -T{ -HT -T}@T{ -0x09 -T}@T{ -\[u2409] -T} -T{ -LF -T}@T{ -0x0A -T}@T{ -\[u240A] -T} -T{ -VT -T}@T{ -0x0B -T}@T{ -\[u240B] -T} -T{ -CR -T}@T{ -0x0D -T}@T{ -\[u240D] -T} -.TE -.PP -Invalid UTF-8 bytes will also be -replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t -be used in JSON strings. -.SS Standard options -.PP +These only get replaced if they are the first or last character in the name: + +| Character | Value | Replacement | +| --------- |:-----:|:-----------:| +| SP | 0x20 | \[u2420] | +| HT | 0x09 | \[u2409] | +| LF | 0x0A | \[u240A] | +| VT | 0x0B | \[u240B] | +| CR | 0x0D | \[u240D] | + + +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), +as they can\[aq]t be used in JSON strings. + + +### Standard options + Here are the Standard options specific to opendrive (OpenDrive). -.SS --opendrive-username -.PP + +#### --opendrive-username + Username. -.PP + Properties: -.IP \[bu] 2 -Config: username -.IP \[bu] 2 -Env Var: RCLONE_OPENDRIVE_USERNAME -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.SS --opendrive-password -.PP + +- Config: username +- Env Var: RCLONE_OPENDRIVE_USERNAME +- Type: string +- Required: true + +#### --opendrive-password + Password. -.PP -\f[B]NB\f[R] Input to this must be obscured - see rclone -obscure (https://rclone.org/commands/rclone_obscure/). -.PP + +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + Properties: -.IP \[bu] 2 -Config: password -.IP \[bu] 2 -Env Var: RCLONE_OPENDRIVE_PASSWORD -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.SS Advanced options -.PP + +- Config: password +- Env Var: RCLONE_OPENDRIVE_PASSWORD +- Type: string +- Required: true + +### Advanced options + Here are the Advanced options specific to opendrive (OpenDrive). -.SS --opendrive-encoding -.PP + +#### --opendrive-encoding + The encoding for the backend. -.PP -See the encoding section in the -overview (https://rclone.org/overview/#encoding) for more info. -.PP + +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + Properties: -.IP \[bu] 2 -Config: encoding -.IP \[bu] 2 -Env Var: RCLONE_OPENDRIVE_ENCODING -.IP \[bu] 2 -Type: MultiEncoder -.IP \[bu] 2 -Default: -Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot -.SS --opendrive-chunk-size -.PP + +- Config: encoding +- Env Var: RCLONE_OPENDRIVE_ENCODING +- Type: MultiEncoder +- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot + +#### --opendrive-chunk-size + Files will be uploaded in chunks this size. -.PP + Note that these chunks are buffered in memory so increasing them will increase memory use. 
-.PP + Properties: -.IP \[bu] 2 -Config: chunk_size -.IP \[bu] 2 -Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 10Mi -.SS Limitations -.PP -Note that OpenDrive is case insensitive so you can\[aq]t have a file -called \[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. -.PP + +- Config: chunk_size +- Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE +- Type: SizeSuffix +- Default: 10Mi + + + +## Limitations + +Note that OpenDrive is case insensitive so you can\[aq]t have a +file called \[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. + There are quite a few characters that can\[aq]t be in OpenDrive file -names. -These can\[aq]t occur on Windows platforms, but on non-Windows platforms -they are common. -Rclone will map these names to and from an identical looking unicode -equivalent. -For example if a file has a \f[C]?\f[R] in it will be mapped to -\f[C]\[uFF1F]\f[R] instead. -.PP -\f[C]rclone about\f[R] is not supported by the OpenDrive backend. -Backends without this capability cannot determine free space for an -rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member -of an rclone union remote. -.PP -See List of backends that do not support rclone -about (https://rclone.org/overview/#optional-features) and rclone -about (https://rclone.org/commands/rclone_about/) -.SH Oracle Object Storage -.PP -Oracle Object Storage -Overview (https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/objectstorageoverview.htm) -.PP -Oracle Object Storage -FAQ (https://www.oracle.com/cloud/storage/object-storage/faq/) -.PP -Paths are specified as \f[C]remote:bucket\f[R] (or \f[C]remote:\f[R] for -the \f[C]lsd\f[R] command.) You may put subdirectories in too, e.g. -\f[C]remote:bucket/path/to/dir\f[R]. -.SS Configuration -.PP -Here is an example of making an oracle object storage configuration. -\f[C]rclone config\f[R] walks you through it. -.PP -Here is an example of how to make a remote called \f[C]remote\f[R]. -First run: -.IP -.nf -\f[C] - rclone config +names. These can\[aq]t occur on Windows platforms, but on non-Windows +platforms they are common. Rclone will map these names to and from an +identical looking unicode equivalent. For example if a file has a \[ga]?\[ga] +in it will be mapped to \[ga]\[uFF1F]\[ga] instead. + +\[ga]rclone about\[ga] is not supported by the OpenDrive backend. Backends without +this capability cannot determine free space for an rclone mount or +use policy \[ga]mfs\[ga] (most free space) as a member of an rclone union +remote. + +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) + +# Oracle Object Storage +- [Oracle Object Storage Overview](https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/objectstorageoverview.htm) +- [Oracle Object Storage FAQ](https://www.oracle.com/cloud/storage/object-storage/faq/) +- [Oracle Object Storage Limits](https://docs.oracle.com/en-us/iaas/Content/Resources/Assets/whitepapers/oci-object-storage-best-practices.pdf) + +Paths are specified as \[ga]remote:bucket\[ga] (or \[ga]remote:\[ga] for the \[ga]lsd\[ga] command.) You may put subdirectories in +too, e.g. \[ga]remote:bucket/path/to/dir\[ga]. 
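+
+For instance (the bucket name and path are placeholders):
+
+    rclone ls remote:bucket/path/to/dir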
+ +Sample command to transfer local artifacts to remote:bucket in oracle object storage: + +\[ga]rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv\[ga] + +## Configuration + +Here is an example of making an oracle object storage configuration. \[ga]rclone config\[ga] walks you +through it. + +Here is an example of how to make a remote called \[ga]remote\[ga]. First run: + + rclone config + +This will guide you through an interactive setup process: + \f[R] .fi +.IP "n)" 3 +New remote +.IP "o)" 3 +Delete remote +.IP "p)" 3 +Rename remote +.IP "q)" 3 +Copy remote +.IP "r)" 3 +Set configuration password +.IP "s)" 3 +Quit config e/n/d/r/c/s/q> n .PP -This will guide you through an interactive setup process: -.IP -.nf -\f[C] -n) New remote -d) Delete remote -r) Rename remote -c) Copy remote -s) Set configuration password -q) Quit config -e/n/d/r/c/s/q> n - Enter name for new remote. name> remote - +.PP Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. -[snip] -XX / Oracle Cloud Infrastructure Object Storage - \[rs] (oracleobjectstorage) -Storage> oracleobjectstorage - +[snip] XX / Oracle Cloud Infrastructure Object Storage +\ (oracleobjectstorage) Storage> oracleobjectstorage +.PP Option provider. -Choose your Auth Provider -Choose a number from below, or type in your own string value. +Choose your Auth Provider Choose a number from below, or type in your +own string value. Press Enter for the default (env_auth). - 1 / automatically pickup the credentials from runtime(env), first one to provide auth wins - \[rs] (env_auth) - / use an OCI user and an API key for authentication. - 2 | you\[cq]ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key. - | https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm - \[rs] (user_principal_auth) - / use instance principals to authorize an instance to make API calls. - 3 | each instance has its own identity, and authenticates using the certificates that are read from instance metadata. - | https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm - \[rs] (instance_principal_auth) - 4 / use resource principals to make API calls - \[rs] (resource_principal_auth) - 5 / no credentials needed, this is typically for reading public buckets - \[rs] (no_auth) -provider> 2 - +1 / automatically pickup the credentials from runtime(env), first one to +provide auth wins \ (env_auth) / use an OCI user and an API key for +authentication. +2 | you\[cq]ll need to put in a config file your tenancy OCID, user +OCID, region, the path, fingerprint to an API key. +| https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm +\ (user_principal_auth) / use instance principals to authorize an +instance to make API calls. +3 | each instance has its own identity, and authenticates using the +certificates that are read from instance metadata. 
+| +https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm +\ (instance_principal_auth) 4 / use resource principals to make API +calls \ (resource_principal_auth) 5 / no credentials needed, this is +typically for reading public buckets \ (no_auth) provider> 2 +.PP Option namespace. -Object storage namespace -Enter a value. +Object storage namespace Enter a value. namespace> idbamagbg734 - +.PP Option compartment. -Object storage compartment OCID -Enter a value. -compartment> ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba - +Object storage compartment OCID Enter a value. +compartment> +ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba +.PP Option region. -Object storage Region -Enter a value. +Object storage Region Enter a value. region> us-ashburn-1 - +.PP Option endpoint. Endpoint for Object storage API. Leave blank to use the default endpoint for the region. -Enter a value. Press Enter to leave empty. -endpoint> - +Enter a value. +Press Enter to leave empty. +endpoint> +.PP Option config_file. -Full Path to OCI config file -Choose a number from below, or type in your own string value. +Full Path to OCI config file Choose a number from below, or type in your +own string value. Press Enter for the default (\[ti]/.oci/config). - 1 / oci configuration file location - \[rs] (\[ti]/.oci/config) -config_file> /etc/oci/dev.conf - +1 / oci configuration file location \ (\[ti]/.oci/config) config_file> +/etc/oci/dev.conf +.PP Option config_profile. -Profile name inside OCI config file -Choose a number from below, or type in your own string value. +Profile name inside OCI config file Choose a number from below, or type +in your own string value. Press Enter for the default (Default). - 1 / Use the default profile - \[rs] (Default) -config_profile> Test - +1 / Use the default profile \ (Default) config_profile> Test +.PP Edit advanced config? -y) Yes -n) No (default) -y/n> n - +y) Yes n) No (default) y/n> n +.PP Configuration complete. -Options: -- type: oracleobjectstorage -- namespace: idbamagbg734 -- compartment: ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba -- region: us-ashburn-1 -- provider: user_principal_auth -- config_file: /etc/oci/dev.conf -- config_profile: Test -Keep this \[dq]remote\[dq] remote? -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote +Options: - type: oracleobjectstorage - namespace: idbamagbg734 - +compartment: +ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba +- region: us-ashburn-1 - provider: user_principal_auth - config_file: +/etc/oci/dev.conf - config_profile: Test Keep this \[dq]remote\[dq] +remote? +y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y -\f[R] -.fi -.PP +.IP +.nf +\f[C] See all buckets -.IP -.nf -\f[C] -rclone lsd remote: -\f[R] -.fi -.PP + + rclone lsd remote: + Create a new bucket -.IP -.nf -\f[C] -rclone mkdir remote:bucket -\f[R] -.fi -.PP + + rclone mkdir remote:bucket + List the contents of a bucket -.IP -.nf -\f[C] -rclone ls remote:bucket -rclone ls remote:bucket --max-depth 1 -\f[R] -.fi -.SS OCI Authentication Provider -.PP -OCI has various authentication methods. 
-To learn more about authentication methods please refer oci
-authentication
-methods (https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm)
+
+    rclone ls remote:bucket
+    rclone ls remote:bucket --max-depth 1
+
+## Authentication Providers
+
+OCI has various authentication methods. To learn more, please refer to [oci authentication
+methods](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm).
 These choices can be specified in the rclone config file.
-.PP
+
 Rclone supports the following OCI authentication providers.
-.IP
-.nf
-\f[C]
-User Principal
-Instance Principal
-Resource Principal
-No authentication
-\f[R]
-.fi
-.SS Authentication provider choice: User Principal
-.PP
+
+    User Principal
+    Instance Principal
+    Resource Principal
+    No authentication
+
+### User Principal
 Sample rclone config file for Authentication Provider User Principal:
-.IP
-.nf
-\f[C]
-[oos]
-type = oracleobjectstorage
-namespace = id34
-compartment = ocid1.compartment.oc1..aaba
-region = us-ashburn-1
-provider = user_principal_auth
-config_file = /home/opc/.oci/config
-config_profile = Default
-\f[R]
-.fi
-.PP
-Advantages: - One can use this method from any server within OCI or
-on-premises or from other cloud provider.
-.PP
-Considerations: - you need to configure user\[cq]s privileges / policy
-to allow access to object storage - Overhead of managing users and keys.
-- If the user is deleted, the config file will no longer work and may
-cause automation regressions that use the user\[aq]s credentials.
-.SS Authentication provider choice: Instance Principal
-.PP
-An OCI compute instance can be authorized to use rclone by using
-it\[aq]s identity and certificates as an instance principal.
-With this approach no credentials have to be stored and managed.
-.PP
-Sample rclone configuration file for Authentication Provider Instance
-Principal:
-.IP
-.nf
-\f[C]
-[opc\[at]rclone \[ti]]$ cat \[ti]/.config/rclone/rclone.conf
-[oos]
-type = oracleobjectstorage
-namespace = idfn
-compartment = ocid1.compartment.oc1..aak7a
-region = us-ashburn-1
-provider = instance_principal_auth
-\f[R]
-.fi
-.PP
+
+    [oos]
+    type = oracleobjectstorage
+    namespace = id34
+    compartment = ocid1.compartment.oc1..aaba
+    region = us-ashburn-1
+    provider = user_principal_auth
+    config_file = /home/opc/.oci/config
+    config_profile = Default
+
 Advantages:
-.IP \[bu] 2
-With instance principals, you don\[aq]t need to configure user
-credentials and transfer/ save it to disk in your compute instances or
-rotate the credentials.
-.IP \[bu] 2
-You don\[cq]t need to deal with users and keys.
-.IP \[bu] 2
-Greatly helps in automation as you don\[aq]t have to manage access keys,
-user private keys, storing them in vault, using kms etc.
-.PP
+- One can use this method from any server within OCI, on-premises, or from another cloud provider.
+
 Considerations:
-.IP \[bu] 2
-You need to configure a dynamic group having this instance as member and
-add policy to read object storage to that dynamic group.
-.IP \[bu] 2
-Everyone who has access to this machine can execute the CLI commands.
-.IP \[bu] 2
-It is applicable for oci compute instances only.
-It cannot be used on external instance or resources.
-.SS Authentication provider choice: Resource Principal
-.PP
-Resource principal auth is very similar to instance principal auth but
-used for resources that are not compute instances such as serverless
-functions (https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm).
-To use resource principal ensure Rclone process is started with these
-environment variables set in its process.
-.IP
-.nf
-\f[C]
-export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
-export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
-export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem
-export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token
-\f[R]
-.fi
-.PP
-Sample rclone configuration file for Authentication Provider Resource
-Principal:
-.IP
-.nf
-\f[C]
-[oos]
-type = oracleobjectstorage
-namespace = id34
-compartment = ocid1.compartment.oc1..aaba
-region = us-ashburn-1
-provider = resource_principal_auth
-\f[R]
-.fi
-.SS Authentication provider choice: No authentication
-.PP
-Public buckets do not require any authentication mechanism to read
-objects.
+- You need to configure the user\[cq]s privileges / policy to allow access to object storage.
+- There is an overhead of managing users and keys.
+- If the user is deleted, the config file will no longer work and may cause automation regressions that use the user\[aq]s credentials.
+
+### Instance Principal
+An OCI compute instance can be authorized to use rclone by using its identity and certificates as an instance principal.
+With this approach no credentials have to be stored and managed.
+
+Sample rclone configuration file for Authentication Provider Instance Principal:
+
+    [opc\[at]rclone \[ti]]$ cat \[ti]/.config/rclone/rclone.conf
+    [oos]
+    type = oracleobjectstorage
+    namespace = idfn
+    compartment = ocid1.compartment.oc1..aak7a
+    region = us-ashburn-1
+    provider = instance_principal_auth
+
+Advantages:
+
+- With instance principals, you don\[aq]t need to configure user credentials and transfer/save them to disk in your compute
+  instances or rotate the credentials.
+- You don\[cq]t need to deal with users and keys.
+- Greatly helps in automation as you don\[aq]t have to manage access keys, user private keys, storing them in a vault,
+  using KMS, etc.
+
+Considerations:
+
+- You need to configure a dynamic group having this instance as a member and add a policy granting that dynamic group
+  read access to object storage.
+- Everyone who has access to this machine can execute the CLI commands.
+- It is applicable to OCI compute instances only. It cannot be used on external instances or resources.
+
+### Resource Principal
+Resource principal auth is very similar to instance principal auth but is used for resources that are not
+compute instances, such as [serverless functions](https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm).
+To use resource principal auth, ensure the rclone process is started with these environment variables set in its environment:
+
+    export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
+    export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
+    export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem
+    export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token
+
+Sample rclone configuration file for Authentication Provider Resource Principal:
+
+    [oos]
+    type = oracleobjectstorage
+    namespace = id34
+    compartment = ocid1.compartment.oc1..aaba
+    region = us-ashburn-1
+    provider = resource_principal_auth
+
+### No authentication
+Public buckets do not require any authentication mechanism to read objects.
 Sample rclone configuration file for No authentication:
-.IP
-.nf
-\f[C]
-[oos]
-type = oracleobjectstorage
-namespace = id34
-compartment = ocid1.compartment.oc1..aaba
-region = us-ashburn-1
-provider = no_auth
-\f[R]
-.fi
-.SS Options
-.SS Modified time
-.PP
+
+    [oos]
+    type = oracleobjectstorage
+    namespace = id34
+    compartment = ocid1.compartment.oc1..aaba
+    region = us-ashburn-1
+    provider = no_auth
+
+## Options
+### Modified time
+
 The modified time is stored as metadata on the object as
-\f[C]opc-meta-mtime\f[R] as floating point since the epoch, accurate to
-1 ns.
-.PP
-If the modification time needs to be updated rclone will attempt to
-perform a server side copy to update the modification if the object can
-be copied in a single part.
-In the case the object is larger than 5Gb, the object will be uploaded
-rather than copied.
-.PP
-Note that reading this from the object takes an additional
-\f[C]HEAD\f[R] request as the metadata isn\[aq]t returned in object
-listings.
-.SS Multipart uploads
-.PP
+\[ga]opc-meta-mtime\[ga] as floating point since the epoch, accurate to 1 ns.
+
+If the modification time needs to be updated rclone will attempt to perform a server
+side copy to update the modification time if the object can be copied in a single part.
+In the case the object is larger than 5 GiB, the object will be uploaded rather than copied.
+
+Note that reading this from the object takes an additional \[ga]HEAD\[ga] request as the metadata
+isn\[aq]t returned in object listings.
+
+### Multipart uploads
+
 rclone supports multipart uploads with OOS which means that it can
 upload files bigger than 5 GiB.
-.PP
-Note that files uploaded \f[I]both\f[R] with multipart upload
-\f[I]and\f[R] through crypt remotes do not have MD5 sums.
-.PP
+
+Note that files uploaded *both* with multipart upload *and* through
+crypt remotes do not have MD5 sums.
+
 rclone switches from single part uploads to multipart uploads at the
-point specified by \f[C]--oos-upload-cutoff\f[R].
-This can be a maximum of 5 GiB and a minimum of 0 (ie always upload
-multipart files).
-.PP
+point specified by \[ga]--oos-upload-cutoff\[ga]. This can be a maximum of 5 GiB
+and a minimum of 0 (ie always upload multipart files).
+
 The chunk sizes used in the multipart upload are specified by
-\f[C]--oos-chunk-size\f[R] and the number of chunks uploaded
-concurrently is specified by \f[C]--oos-upload-concurrency\f[R].
-.PP
-Multipart uploads will use \f[C]--transfers\f[R] *
-\f[C]--oos-upload-concurrency\f[R] * \f[C]--oos-chunk-size\f[R] extra
+\[ga]--oos-chunk-size\[ga] and the number of chunks uploaded concurrently is
+specified by \[ga]--oos-upload-concurrency\[ga].
+
+Multipart uploads will use \[ga]--transfers\[ga] * \[ga]--oos-upload-concurrency\[ga] *
+\[ga]--oos-chunk-size\[ga] extra
 memory. Single part uploads do not use extra memory.
-Single part uploads to not use extra memory.
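+
+As a rough illustration (the figures below are hypothetical, not
+defaults): with \[ga]--transfers 4\[ga], \[ga]--oos-upload-concurrency 10\[ga] and
+\[ga]--oos-chunk-size 5Mi\[ga], multipart uploads could buffer up to about
+4 * 10 * 5 MiB = 200 MiB of extra memory:
+
+    # hypothetical sizing example, not a recommendation
+    rclone sync --transfers 4 --oos-upload-concurrency 10 --oos-chunk-size 5Mi ./data remote:bucket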
-.PP + Single part transfers can be faster than multipart transfers or slower depending on your latency from oos - the more latency, the more likely single part transfers will be faster. -.PP -Increasing \f[C]--oos-upload-concurrency\f[R] will increase throughput -(8 would be a sensible value) and increasing \f[C]--oos-chunk-size\f[R] -also increases throughput (16M would be sensible). -Increasing either of these will use more memory. -The default values are high enough to gain most of the possible -performance without using too much memory. -.SS Standard options -.PP -Here are the Standard options specific to oracleobjectstorage (Oracle -Cloud Infrastructure Object Storage). -.SS --oos-provider -.PP + +Increasing \[ga]--oos-upload-concurrency\[ga] will increase throughput (8 would +be a sensible value) and increasing \[ga]--oos-chunk-size\[ga] also increases +throughput (16M would be sensible). Increasing either of these will +use more memory. The default values are high enough to gain most of +the possible performance without using too much memory. + + +### Standard options + +Here are the Standard options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage). + +#### --oos-provider + Choose your Auth Provider -.PP + Properties: -.IP \[bu] 2 -Config: provider -.IP \[bu] 2 -Env Var: RCLONE_OOS_PROVIDER -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]env_auth\[dq] -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]env_auth\[dq] -.RS 2 -.IP \[bu] 2 -automatically pickup the credentials from runtime(env), first one to -provide auth wins -.RE -.IP \[bu] 2 -\[dq]user_principal_auth\[dq] -.RS 2 -.IP \[bu] 2 -use an OCI user and an API key for authentication. -.IP \[bu] 2 -you\[cq]ll need to put in a config file your tenancy OCID, user OCID, -region, the path, fingerprint to an API key. -.IP \[bu] 2 -https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm -.RE -.IP \[bu] 2 -\[dq]instance_principal_auth\[dq] -.RS 2 -.IP \[bu] 2 -use instance principals to authorize an instance to make API calls. -.IP \[bu] 2 -each instance has its own identity, and authenticates using the -certificates that are read from instance metadata. -.IP \[bu] 2 -https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm -.RE -.IP \[bu] 2 -\[dq]resource_principal_auth\[dq] -.RS 2 -.IP \[bu] 2 -use resource principals to make API calls -.RE -.IP \[bu] 2 -\[dq]no_auth\[dq] -.RS 2 -.IP \[bu] 2 -no credentials needed, this is typically for reading public buckets -.RE -.RE -.SS --oos-namespace -.PP + +- Config: provider +- Env Var: RCLONE_OOS_PROVIDER +- Type: string +- Default: \[dq]env_auth\[dq] +- Examples: + - \[dq]env_auth\[dq] + - automatically pickup the credentials from runtime(env), first one to provide auth wins + - \[dq]user_principal_auth\[dq] + - use an OCI user and an API key for authentication. + - you\[cq]ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key. + - https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm + - \[dq]instance_principal_auth\[dq] + - use instance principals to authorize an instance to make API calls. + - each instance has its own identity, and authenticates using the certificates that are read from instance metadata. 
+ - https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm + - \[dq]resource_principal_auth\[dq] + - use resource principals to make API calls + - \[dq]no_auth\[dq] + - no credentials needed, this is typically for reading public buckets + +#### --oos-namespace + Object storage namespace -.PP + Properties: -.IP \[bu] 2 -Config: namespace -.IP \[bu] 2 -Env Var: RCLONE_OOS_NAMESPACE -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.SS --oos-compartment -.PP + +- Config: namespace +- Env Var: RCLONE_OOS_NAMESPACE +- Type: string +- Required: true + +#### --oos-compartment + Object storage compartment OCID -.PP + Properties: -.IP \[bu] 2 -Config: compartment -.IP \[bu] 2 -Env Var: RCLONE_OOS_COMPARTMENT -.IP \[bu] 2 -Provider: !no_auth -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.SS --oos-region -.PP + +- Config: compartment +- Env Var: RCLONE_OOS_COMPARTMENT +- Provider: !no_auth +- Type: string +- Required: true + +#### --oos-region + Object storage Region -.PP + Properties: -.IP \[bu] 2 -Config: region -.IP \[bu] 2 -Env Var: RCLONE_OOS_REGION -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.SS --oos-endpoint -.PP + +- Config: region +- Env Var: RCLONE_OOS_REGION +- Type: string +- Required: true + +#### --oos-endpoint + Endpoint for Object storage API. -.PP + Leave blank to use the default endpoint for the region. -.PP + Properties: -.IP \[bu] 2 -Config: endpoint -.IP \[bu] 2 -Env Var: RCLONE_OOS_ENDPOINT -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --oos-config-file -.PP + +- Config: endpoint +- Env Var: RCLONE_OOS_ENDPOINT +- Type: string +- Required: false + +#### --oos-config-file + Path to OCI config file -.PP + Properties: -.IP \[bu] 2 -Config: config_file -.IP \[bu] 2 -Env Var: RCLONE_OOS_CONFIG_FILE -.IP \[bu] 2 -Provider: user_principal_auth -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]\[ti]/.oci/config\[dq] -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]\[ti]/.oci/config\[dq] -.RS 2 -.IP \[bu] 2 -oci configuration file location -.RE -.RE -.SS --oos-config-profile -.PP + +- Config: config_file +- Env Var: RCLONE_OOS_CONFIG_FILE +- Provider: user_principal_auth +- Type: string +- Default: \[dq]\[ti]/.oci/config\[dq] +- Examples: + - \[dq]\[ti]/.oci/config\[dq] + - oci configuration file location + +#### --oos-config-profile + Profile name inside the oci config file -.PP + Properties: -.IP \[bu] 2 -Config: config_profile -.IP \[bu] 2 -Env Var: RCLONE_OOS_CONFIG_PROFILE -.IP \[bu] 2 -Provider: user_principal_auth -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]Default\[dq] -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]Default\[dq] -.RS 2 -.IP \[bu] 2 -Use the default profile -.RE -.RE -.SS Advanced options -.PP -Here are the Advanced options specific to oracleobjectstorage (Oracle -Cloud Infrastructure Object Storage). -.SS --oos-storage-tier -.PP -The storage class to use when storing new objects in storage. -https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm -.PP + +- Config: config_profile +- Env Var: RCLONE_OOS_CONFIG_PROFILE +- Provider: user_principal_auth +- Type: string +- Default: \[dq]Default\[dq] +- Examples: + - \[dq]Default\[dq] + - Use the default profile + +### Advanced options + +Here are the Advanced options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage). + +#### --oos-storage-tier + +The storage class to use when storing new objects in storage. 
https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm + Properties: -.IP \[bu] 2 -Config: storage_tier -.IP \[bu] 2 -Env Var: RCLONE_OOS_STORAGE_TIER -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]Standard\[dq] -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]Standard\[dq] -.RS 2 -.IP \[bu] 2 -Standard storage tier, this is the default tier -.RE -.IP \[bu] 2 -\[dq]InfrequentAccess\[dq] -.RS 2 -.IP \[bu] 2 -InfrequentAccess storage tier -.RE -.IP \[bu] 2 -\[dq]Archive\[dq] -.RS 2 -.IP \[bu] 2 -Archive storage tier -.RE -.RE -.SS --oos-upload-cutoff -.PP + +- Config: storage_tier +- Env Var: RCLONE_OOS_STORAGE_TIER +- Type: string +- Default: \[dq]Standard\[dq] +- Examples: + - \[dq]Standard\[dq] + - Standard storage tier, this is the default tier + - \[dq]InfrequentAccess\[dq] + - InfrequentAccess storage tier + - \[dq]Archive\[dq] + - Archive storage tier + +#### --oos-upload-cutoff + Cutoff for switching to chunked upload. -.PP + Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5 GiB. -.PP + Properties: -.IP \[bu] 2 -Config: upload_cutoff -.IP \[bu] 2 -Env Var: RCLONE_OOS_UPLOAD_CUTOFF -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 200Mi -.SS --oos-chunk-size -.PP + +- Config: upload_cutoff +- Env Var: RCLONE_OOS_UPLOAD_CUTOFF +- Type: SizeSuffix +- Default: 200Mi + +#### --oos-chunk-size + Chunk size to use for uploading. -.PP + When uploading files larger than upload_cutoff or files with unknown -size (e.g. -from \[dq]rclone rcat\[dq] or uploaded with \[dq]rclone mount\[dq] or -google photos or google docs) they will be uploaded as multipart uploads -using this chunk size. -.PP +size (e.g. from \[dq]rclone rcat\[dq] or uploaded with \[dq]rclone mount\[dq] they will be uploaded +as multipart uploads using this chunk size. + Note that \[dq]upload_concurrency\[dq] chunks of this size are buffered in memory per transfer. -.PP + If you are transferring large files over high-speed links and you have enough memory, then increasing this will speed up the transfers. -.PP -Rclone will automatically increase the chunk size when uploading a large -file of known size to stay below the 10,000 chunks limit. -.PP -Files of unknown size are uploaded with the configured chunk_size. -Since the default chunk size is 5 MiB and there can be at most 10,000 -chunks, this means that by default the maximum size of a file you can -stream upload is 48 GiB. -If you wish to stream upload larger files then you will need to increase -chunk_size. -.PP + +Rclone will automatically increase the chunk size when uploading a +large file of known size to stay below the 10,000 chunks limit. + +Files of unknown size are uploaded with the configured +chunk_size. Since the default chunk size is 5 MiB and there can be at +most 10,000 chunks, this means that by default the maximum size of +a file you can stream upload is 48 GiB. If you wish to stream upload +larger files then you will need to increase chunk_size. + Increasing the chunk size decreases the accuracy of the progress statistics displayed with \[dq]-P\[dq] flag. -.PP + + Properties: -.IP \[bu] 2 -Config: chunk_size -.IP \[bu] 2 -Env Var: RCLONE_OOS_CHUNK_SIZE -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 5Mi -.SS --oos-upload-concurrency -.PP + +- Config: chunk_size +- Env Var: RCLONE_OOS_CHUNK_SIZE +- Type: SizeSuffix +- Default: 5Mi + +#### --oos-max-upload-parts + +Maximum number of parts in a multipart upload. 
+ +This option defines the maximum number of multipart chunks to use +when doing a multipart upload. + +OCI has max parts limit of 10,000 chunks. + +Rclone will automatically increase the chunk size when uploading a +large file of a known size to stay below this number of chunks limit. + + +Properties: + +- Config: max_upload_parts +- Env Var: RCLONE_OOS_MAX_UPLOAD_PARTS +- Type: int +- Default: 10000 + +#### --oos-upload-concurrency + Concurrency for multipart uploads. -.PP + This is the number of chunks of the same file that are uploaded concurrently. -.PP + If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers. -.PP + Properties: -.IP \[bu] 2 -Config: upload_concurrency -.IP \[bu] 2 -Env Var: RCLONE_OOS_UPLOAD_CONCURRENCY -.IP \[bu] 2 -Type: int -.IP \[bu] 2 -Default: 10 -.SS --oos-copy-cutoff -.PP + +- Config: upload_concurrency +- Env Var: RCLONE_OOS_UPLOAD_CONCURRENCY +- Type: int +- Default: 10 + +#### --oos-copy-cutoff + Cutoff for switching to multipart copy. -.PP + Any files larger than this that need to be server-side copied will be copied in chunks of this size. -.PP + The minimum is 0 and the maximum is 5 GiB. -.PP + Properties: -.IP \[bu] 2 -Config: copy_cutoff -.IP \[bu] 2 -Env Var: RCLONE_OOS_COPY_CUTOFF -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 4.656Gi -.SS --oos-copy-timeout -.PP + +- Config: copy_cutoff +- Env Var: RCLONE_OOS_COPY_CUTOFF +- Type: SizeSuffix +- Default: 4.656Gi + +#### --oos-copy-timeout + Timeout for copy. -.PP -Copy is an asynchronous operation, specify timeout to wait for copy to -succeed -.PP + +Copy is an asynchronous operation, specify timeout to wait for copy to succeed + + Properties: -.IP \[bu] 2 -Config: copy_timeout -.IP \[bu] 2 -Env Var: RCLONE_OOS_COPY_TIMEOUT -.IP \[bu] 2 -Type: Duration -.IP \[bu] 2 -Default: 1m0s -.SS --oos-disable-checksum -.PP + +- Config: copy_timeout +- Env Var: RCLONE_OOS_COPY_TIMEOUT +- Type: Duration +- Default: 1m0s + +#### --oos-disable-checksum + Don\[aq]t store MD5 checksum with object metadata. -.PP + Normally rclone will calculate the MD5 checksum of the input before -uploading it so it can add it to metadata on the object. -This is great for data integrity checking but can cause long delays for -large files to start uploading. -.PP +uploading it so it can add it to metadata on the object. This is great +for data integrity checking but can cause long delays for large files +to start uploading. + Properties: -.IP \[bu] 2 -Config: disable_checksum -.IP \[bu] 2 -Env Var: RCLONE_OOS_DISABLE_CHECKSUM -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --oos-encoding -.PP + +- Config: disable_checksum +- Env Var: RCLONE_OOS_DISABLE_CHECKSUM +- Type: bool +- Default: false + +#### --oos-encoding + The encoding for the backend. -.PP -See the encoding section in the -overview (https://rclone.org/overview/#encoding) for more info. -.PP + +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + Properties: -.IP \[bu] 2 -Config: encoding -.IP \[bu] 2 -Env Var: RCLONE_OOS_ENCODING -.IP \[bu] 2 -Type: MultiEncoder -.IP \[bu] 2 -Default: Slash,InvalidUtf8,Dot -.SS --oos-leave-parts-on-error -.PP -If true avoid calling abort upload on a failure, leaving all -successfully uploaded parts on S3 for manual recovery. 
-.PP + +- Config: encoding +- Env Var: RCLONE_OOS_ENCODING +- Type: MultiEncoder +- Default: Slash,InvalidUtf8,Dot + +#### --oos-leave-parts-on-error + +If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery. + It should be set to true for resuming uploads across different sessions. -.PP -WARNING: Storing parts of an incomplete multipart upload counts towards -space usage on object storage and will add additional costs if not -cleaned up. -.PP + +WARNING: Storing parts of an incomplete multipart upload counts towards space usage on object storage and will add +additional costs if not cleaned up. + + Properties: -.IP \[bu] 2 -Config: leave_parts_on_error -.IP \[bu] 2 -Env Var: RCLONE_OOS_LEAVE_PARTS_ON_ERROR -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --oos-no-check-bucket -.PP + +- Config: leave_parts_on_error +- Env Var: RCLONE_OOS_LEAVE_PARTS_ON_ERROR +- Type: bool +- Default: false + +#### --oos-attempt-resume-upload + +If true attempt to resume previously started multipart upload for the object. +This will be helpful to speed up multipart transfers by resuming uploads from past session. + +WARNING: If chunk size differs in resumed session from past incomplete session, then the resumed multipart upload is +aborted and a new multipart upload is started with the new chunk size. + +The flag leave_parts_on_error must be true to resume and optimize to skip parts that were already uploaded successfully. + + +Properties: + +- Config: attempt_resume_upload +- Env Var: RCLONE_OOS_ATTEMPT_RESUME_UPLOAD +- Type: bool +- Default: false + +#### --oos-no-check-bucket + If set, don\[aq]t attempt to check the bucket exists or create it. -.PP + This can be useful when trying to minimise the number of transactions rclone does if you know the bucket exists already. -.PP + It can also be needed if the user you are using does not have bucket creation permissions. -.PP + + Properties: -.IP \[bu] 2 -Config: no_check_bucket -.IP \[bu] 2 -Env Var: RCLONE_OOS_NO_CHECK_BUCKET -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --oos-sse-customer-key-file -.PP -To use SSE-C, a file containing the base64-encoded string of the AES-256 -encryption key associated with the object. -Please note only one of -sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.\[aq] -.PP + +- Config: no_check_bucket +- Env Var: RCLONE_OOS_NO_CHECK_BUCKET +- Type: bool +- Default: false + +#### --oos-sse-customer-key-file + +To use SSE-C, a file containing the base64-encoded string of the AES-256 encryption key associated +with the object. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.\[aq] + Properties: -.IP \[bu] 2 -Config: sse_customer_key_file -.IP \[bu] 2 -Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_FILE -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]\[dq] -.RS 2 -.IP \[bu] 2 -None -.RE -.RE -.SS --oos-sse-customer-key -.PP -To use SSE-C, the optional header that specifies the base64-encoded -256-bit encryption key to use to encrypt or decrypt the data. -Please note only one of -sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed. 
-For more information, see Using Your Own Keys for Server-Side Encryption + +- Config: sse_customer_key_file +- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_FILE +- Type: string +- Required: false +- Examples: + - \[dq]\[dq] + - None + +#### --oos-sse-customer-key + +To use SSE-C, the optional header that specifies the base64-encoded 256-bit encryption key to use to +encrypt or decrypt the data. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is +needed. For more information, see Using Your Own Keys for Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm) -.PP + Properties: -.IP \[bu] 2 -Config: sse_customer_key -.IP \[bu] 2 -Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]\[dq] -.RS 2 -.IP \[bu] 2 -None -.RE -.RE -.SS --oos-sse-customer-key-sha256 -.PP -If using SSE-C, The optional header that specifies the base64-encoded -SHA256 hash of the encryption key. -This value is used to check the integrity of the encryption key. -see Using Your Own Keys for Server-Side Encryption -(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm). -.PP + +- Config: sse_customer_key +- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY +- Type: string +- Required: false +- Examples: + - \[dq]\[dq] + - None + +#### --oos-sse-customer-key-sha256 + +If using SSE-C, The optional header that specifies the base64-encoded SHA256 hash of the encryption +key. This value is used to check the integrity of the encryption key. see Using Your Own Keys for +Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm). + Properties: -.IP \[bu] 2 -Config: sse_customer_key_sha256 -.IP \[bu] 2 -Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_SHA256 -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]\[dq] -.RS 2 -.IP \[bu] 2 -None -.RE -.RE -.SS --oos-sse-kms-key-id -.PP -if using your own master key in vault, this header specifies the OCID -(https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm) -of a master encryption key used to call the Key Management service to -generate a data encryption key or to encrypt or decrypt a data -encryption key. -Please note only one of -sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed. -.PP + +- Config: sse_customer_key_sha256 +- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_SHA256 +- Type: string +- Required: false +- Examples: + - \[dq]\[dq] + - None + +#### --oos-sse-kms-key-id + +if using your own master key in vault, this header specifies the +OCID (https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm) of a master encryption key used to call +the Key Management service to generate a data encryption key or to encrypt or decrypt a data encryption key. +Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed. + Properties: -.IP \[bu] 2 -Config: sse_kms_key_id -.IP \[bu] 2 -Env Var: RCLONE_OOS_SSE_KMS_KEY_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]\[dq] -.RS 2 -.IP \[bu] 2 -None -.RE -.RE -.SS --oos-sse-customer-algorithm -.PP -If using SSE-C, the optional header that specifies \[dq]AES256\[dq] as -the encryption algorithm. -Object Storage supports \[dq]AES256\[dq] as the encryption algorithm. 
-For more information, see Using Your Own Keys for Server-Side Encryption -(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm). -.PP + +- Config: sse_kms_key_id +- Env Var: RCLONE_OOS_SSE_KMS_KEY_ID +- Type: string +- Required: false +- Examples: + - \[dq]\[dq] + - None + +#### --oos-sse-customer-algorithm + +If using SSE-C, the optional header that specifies \[dq]AES256\[dq] as the encryption algorithm. +Object Storage supports \[dq]AES256\[dq] as the encryption algorithm. For more information, see +Using Your Own Keys for Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm). + Properties: -.IP \[bu] 2 -Config: sse_customer_algorithm -.IP \[bu] 2 -Env Var: RCLONE_OOS_SSE_CUSTOMER_ALGORITHM -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]\[dq] -.RS 2 -.IP \[bu] 2 -None -.RE -.IP \[bu] 2 -\[dq]AES256\[dq] -.RS 2 -.IP \[bu] 2 -AES256 -.RE -.RE -.SS Backend commands -.PP + +- Config: sse_customer_algorithm +- Env Var: RCLONE_OOS_SSE_CUSTOMER_ALGORITHM +- Type: string +- Required: false +- Examples: + - \[dq]\[dq] + - None + - \[dq]AES256\[dq] + - AES256 + +## Backend commands + Here are the commands specific to the oracleobjectstorage backend. -.PP + Run them with -.IP -.nf -\f[C] -rclone backend COMMAND remote: -\f[R] -.fi -.PP + + rclone backend COMMAND remote: + The help below will explain what arguments each command takes. -.PP -See the backend (https://rclone.org/commands/rclone_backend/) command -for more info on how to pass options and arguments. -.PP + +See the [backend](https://rclone.org/commands/rclone_backend/) command for more +info on how to pass options and arguments. + These can be run on a running backend using the rc command -backend/command (https://rclone.org/rc/#backend-command). -.SS rename -.PP +[backend/command](https://rclone.org/rc/#backend-command). + +### rename + change the name of an object -.IP -.nf -\f[C] -rclone backend rename remote: [options] [+] -\f[R] -.fi -.PP + + rclone backend rename remote: [options] [+] + This command can be used to rename a object. -.PP + Usage Examples: -.IP -.nf -\f[C] -rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name -\f[R] -.fi -.SS list-multipart-uploads -.PP + + rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name + + +### list-multipart-uploads + List the unfinished multipart uploads -.IP -.nf -\f[C] -rclone backend list-multipart-uploads remote: [options] [+] -\f[R] -.fi -.PP + + rclone backend list-multipart-uploads remote: [options] [+] + This command lists the unfinished multipart uploads in JSON format. -.IP -.nf -\f[C] -rclone backend list-multipart-uploads oos:bucket/path/to/object -\f[R] -.fi -.PP + + rclone backend list-multipart-uploads oos:bucket/path/to/object + It returns a dictionary of buckets with values as lists of unfinished multipart uploads. -.PP -You can call it with no bucket in which case it lists all bucket, with a -bucket or with a bucket and path. 
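+
+For example, any of these forms work (the bucket and path are
+placeholders):
+
+    rclone backend list-multipart-uploads oos:
+    rclone backend list-multipart-uploads oos:bucket
+    rclone backend list-multipart-uploads oos:bucket/path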
-.IP -.nf -\f[C] -{ - \[dq]test-bucket\[dq]: [ - { - \[dq]namespace\[dq]: \[dq]test-namespace\[dq], - \[dq]bucket\[dq]: \[dq]test-bucket\[dq], - \[dq]object\[dq]: \[dq]600m.bin\[dq], - \[dq]uploadId\[dq]: \[dq]51dd8114-52a4-b2f2-c42f-5291f05eb3c8\[dq], - \[dq]timeCreated\[dq]: \[dq]2022-07-29T06:21:16.595Z\[dq], - \[dq]storageTier\[dq]: \[dq]Standard\[dq] - } - ] -\f[R] -.fi -.SS cleanup -.PP + +You can call it with no bucket in which case it lists all bucket, with +a bucket or with a bucket and path. + + { + \[dq]test-bucket\[dq]: [ + { + \[dq]namespace\[dq]: \[dq]test-namespace\[dq], + \[dq]bucket\[dq]: \[dq]test-bucket\[dq], + \[dq]object\[dq]: \[dq]600m.bin\[dq], + \[dq]uploadId\[dq]: \[dq]51dd8114-52a4-b2f2-c42f-5291f05eb3c8\[dq], + \[dq]timeCreated\[dq]: \[dq]2022-07-29T06:21:16.595Z\[dq], + \[dq]storageTier\[dq]: \[dq]Standard\[dq] + } + ] + + +### cleanup + Remove unfinished multipart uploads. -.IP -.nf -\f[C] -rclone backend cleanup remote: [options] [+] -\f[R] -.fi -.PP + + rclone backend cleanup remote: [options] [+] + This command removes unfinished multipart uploads of age greater than max-age which defaults to 24 hours. -.PP -Note that you can use --interactive/-i or --dry-run with this command to -see what it would do. -.IP -.nf -\f[C] -rclone backend cleanup oos:bucket/path/to/object -rclone backend cleanup -o max-age=7w oos:bucket/path/to/object -\f[R] -.fi -.PP + +Note that you can use --interactive/-i or --dry-run with this command to see what +it would do. + + rclone backend cleanup oos:bucket/path/to/object + rclone backend cleanup -o max-age=7w oos:bucket/path/to/object + Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc. -.PP + + Options: -.IP \[bu] 2 -\[dq]max-age\[dq]: Max age of upload to delete -.SH QingStor -.PP -Paths are specified as \f[C]remote:bucket\f[R] (or \f[C]remote:\f[R] for -the \f[C]lsd\f[R] command.) You may put subdirectories in too, e.g. -\f[C]remote:bucket/path/to/dir\f[R]. -.SS Configuration -.PP -Here is an example of making an QingStor configuration. -First run -.IP -.nf -\f[C] -rclone config -\f[R] -.fi -.PP + +- \[dq]max-age\[dq]: Max age of upload to delete + + + +## Tutorials +### [Mounting Buckets](https://rclone.org/oracleobjectstorage/tutorial_mount/) + +# QingStor + +Paths are specified as \[ga]remote:bucket\[ga] (or \[ga]remote:\[ga] for the \[ga]lsd\[ga] +command.) You may put subdirectories in too, e.g. \[ga]remote:bucket/path/to/dir\[ga]. + +## Configuration + +Here is an example of making an QingStor configuration. First run + + rclone config + This will guide you through an interactive setup process. -.IP -.nf -\f[C] +\f[R] +.fi +.PP No remotes found, make a new one? -n) New remote -r) Rename remote -c) Copy remote -s) Set configuration password -q) Quit config -n/r/c/s/q> n -name> remote -Type of storage to configure. -Choose a number from below, or type in your own value -[snip] -XX / QingStor Object Storage - \[rs] \[dq]qingstor\[dq] -[snip] -Storage> qingstor -Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -Choose a number from below, or type in your own value - 1 / Enter QingStor credentials in the next step - \[rs] \[dq]false\[dq] - 2 / Get QingStor credentials from the environment (env vars or IAM) - \[rs] \[dq]true\[dq] -env_auth> 1 -QingStor Access Key ID - leave blank for anonymous access or runtime credentials. -access_key_id> access_key -QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials. 
-secret_access_key> secret_key -Enter an endpoint URL to connection QingStor API. -Leave blank will use the default value \[dq]https://qingstor.com:443\[dq] -endpoint> -Zone connect to. Default is \[dq]pek3a\[dq]. -Choose a number from below, or type in your own value - / The Beijing (China) Three Zone - 1 | Needs location constraint pek3a. - \[rs] \[dq]pek3a\[dq] - / The Shanghai (China) First Zone - 2 | Needs location constraint sh1a. - \[rs] \[dq]sh1a\[dq] -zone> 1 -Number of connection retry. -Leave blank will use the default value \[dq]3\[dq]. -connection_retries> -Remote config --------------------- -[remote] -env_auth = false -access_key_id = access_key -secret_access_key = secret_key -endpoint = -zone = pek3a -connection_retries = --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -\f[R] -.fi -.PP -This remote is called \f[C]remote\f[R] and can now be used like this -.PP -See all buckets -.IP -.nf -\f[C] -rclone lsd remote: -\f[R] -.fi -.PP -Make a new bucket -.IP -.nf -\f[C] -rclone mkdir remote:bucket -\f[R] -.fi -.PP -List the contents of a bucket -.IP -.nf -\f[C] -rclone ls remote:bucket -\f[R] -.fi -.PP -Sync \f[C]/home/local/directory\f[R] to the remote bucket, deleting any -excess files in the bucket. -.IP -.nf -\f[C] -rclone sync --interactive /home/local/directory remote:bucket -\f[R] -.fi -.SS --fast-list -.PP -This remote supports \f[C]--fast-list\f[R] which allows you to use fewer -transactions in exchange for more memory. -See the rclone docs (https://rclone.org/docs/#fast-list) for more -details. -.SS Multipart uploads -.PP -rclone supports multipart uploads with QingStor which means that it can -upload files bigger than 5 GiB. -Note that files uploaded with multipart upload don\[aq]t have an MD5SUM. -.PP -Note that incomplete multipart uploads older than 24 hours can be -removed with \f[C]rclone cleanup remote:bucket\f[R] just for one bucket -\f[C]rclone cleanup remote:\f[R] for all buckets. -QingStor does not ever remove incomplete multipart uploads so it may be -necessary to run this from time to time. -.SS Buckets and Zone -.PP -With QingStor you can list buckets (\f[C]rclone lsd\f[R]) using any -zone, but you can only access the content of a bucket from the zone it -was created in. -If you attempt to access a bucket from the wrong zone, you will get an -error, -\f[C]incorrect zone, the bucket is not in \[aq]XXX\[aq] zone\f[R]. -.SS Authentication -.PP -There are two ways to supply \f[C]rclone\f[R] with a set of QingStor -credentials. -In order of precedence: -.IP \[bu] 2 -Directly in the rclone configuration file (as configured by -\f[C]rclone config\f[R]) -.RS 2 -.IP \[bu] 2 -set \f[C]access_key_id\f[R] and \f[C]secret_access_key\f[R] -.RE -.IP \[bu] 2 -Runtime configuration: -.RS 2 -.IP \[bu] 2 -set \f[C]env_auth\f[R] to \f[C]true\f[R] in the config file -.IP \[bu] 2 -Exporting the following environment variables before running -\f[C]rclone\f[R] -.RS 2 -.IP \[bu] 2 -Access Key ID: \f[C]QS_ACCESS_KEY_ID\f[R] or \f[C]QS_ACCESS_KEY\f[R] -.IP \[bu] 2 -Secret Access Key: \f[C]QS_SECRET_ACCESS_KEY\f[R] or -\f[C]QS_SECRET_KEY\f[R] -.RE -.RE -.SS Restricted filename characters -.PP -The control characters 0x00-0x1F and / are replaced as in the default -restricted characters -set (https://rclone.org/overview/#restricted-characters). -Note that 0x7F is not replaced. -.PP -Invalid UTF-8 bytes will also be -replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t -be used in JSON strings. 
-.SS Standard options -.PP -Here are the Standard options specific to qingstor (QingCloud Object -Storage). -.SS --qingstor-env-auth -.PP +n) New remote r) Rename remote c) Copy remote s) Set configuration +password q) Quit config n/r/c/s/q> n name> remote Type of storage to +configure. +Choose a number from below, or type in your own value [snip] XX / +QingStor Object Storage \ \[dq]qingstor\[dq] [snip] Storage> qingstor Get QingStor credentials from runtime. -.PP Only applies if access_key_id and secret_access_key is blank. -.PP -Properties: -.IP \[bu] 2 -Config: env_auth -.IP \[bu] 2 -Env Var: RCLONE_QINGSTOR_ENV_AUTH -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]false\[dq] -.RS 2 -.IP \[bu] 2 -Enter QingStor credentials in the next step. -.RE -.IP \[bu] 2 -\[dq]true\[dq] -.RS 2 -.IP \[bu] 2 -Get QingStor credentials from the environment (env vars or IAM). -.RE -.RE -.SS --qingstor-access-key-id -.PP -QingStor Access Key ID. -.PP -Leave blank for anonymous access or runtime credentials. -.PP -Properties: -.IP \[bu] 2 -Config: access_key_id -.IP \[bu] 2 -Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --qingstor-secret-access-key -.PP -QingStor Secret Access Key (password). -.PP -Leave blank for anonymous access or runtime credentials. -.PP -Properties: -.IP \[bu] 2 -Config: secret_access_key -.IP \[bu] 2 -Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --qingstor-endpoint -.PP -Enter an endpoint URL to connection QingStor API. -.PP +Choose a number from below, or type in your own value 1 / Enter QingStor +credentials in the next step \ \[dq]false\[dq] 2 / Get QingStor +credentials from the environment (env vars or IAM) \ \[dq]true\[dq] +env_auth> 1 QingStor Access Key ID - leave blank for anonymous access or +runtime credentials. +access_key_id> access_key QingStor Secret Access Key (password) - leave +blank for anonymous access or runtime credentials. +secret_access_key> secret_key Enter an endpoint URL to connection +QingStor API. Leave blank will use the default value -\[dq]https://qingstor.com:443\[dq]. -.PP -Properties: -.IP \[bu] 2 -Config: endpoint -.IP \[bu] 2 -Env Var: RCLONE_QINGSTOR_ENDPOINT -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --qingstor-zone -.PP -Zone to connect to. -.PP +\[dq]https://qingstor.com:443\[dq] endpoint> Zone connect to. Default is \[dq]pek3a\[dq]. -.PP +Choose a number from below, or type in your own value / The Beijing +(China) Three Zone 1 | Needs location constraint pek3a. +\ \[dq]pek3a\[dq] / The Shanghai (China) First Zone 2 | Needs location +constraint sh1a. +\ \[dq]sh1a\[dq] zone> 1 Number of connection retry. +Leave blank will use the default value \[dq]3\[dq]. +connection_retries> Remote config -------------------- [remote] env_auth += false access_key_id = access_key secret_access_key = secret_key +endpoint = zone = pek3a connection_retries = -------------------- y) Yes +this is OK e) Edit this remote d) Delete this remote y/e/d> y +.IP +.nf +\f[C] +This remote is called \[ga]remote\[ga] and can now be used like this + +See all buckets + + rclone lsd remote: + +Make a new bucket + + rclone mkdir remote:bucket + +List the contents of a bucket + + rclone ls remote:bucket + +Sync \[ga]/home/local/directory\[ga] to the remote bucket, deleting any excess +files in the bucket. 
+ + rclone sync --interactive /home/local/directory remote:bucket + +### --fast-list + +This remote supports \[ga]--fast-list\[ga] which allows you to use fewer +transactions in exchange for more memory. See the [rclone +docs](https://rclone.org/docs/#fast-list) for more details. + +### Multipart uploads + +rclone supports multipart uploads with QingStor which means that it can +upload files bigger than 5 GiB. Note that files uploaded with multipart +upload don\[aq]t have an MD5SUM. + +Note that incomplete multipart uploads older than 24 hours can be +removed with \[ga]rclone cleanup remote:bucket\[ga] just for one bucket +\[ga]rclone cleanup remote:\[ga] for all buckets. QingStor does not ever +remove incomplete multipart uploads so it may be necessary to run this +from time to time. + +### Buckets and Zone + +With QingStor you can list buckets (\[ga]rclone lsd\[ga]) using any zone, +but you can only access the content of a bucket from the zone it was +created in. If you attempt to access a bucket from the wrong zone, +you will get an error, \[ga]incorrect zone, the bucket is not in \[aq]XXX\[aq] +zone\[ga]. + +### Authentication + +There are two ways to supply \[ga]rclone\[ga] with a set of QingStor +credentials. In order of precedence: + + - Directly in the rclone configuration file (as configured by \[ga]rclone config\[ga]) + - set \[ga]access_key_id\[ga] and \[ga]secret_access_key\[ga] + - Runtime configuration: + - set \[ga]env_auth\[ga] to \[ga]true\[ga] in the config file + - Exporting the following environment variables before running \[ga]rclone\[ga] + - Access Key ID: \[ga]QS_ACCESS_KEY_ID\[ga] or \[ga]QS_ACCESS_KEY\[ga] + - Secret Access Key: \[ga]QS_SECRET_ACCESS_KEY\[ga] or \[ga]QS_SECRET_KEY\[ga] + +### Restricted filename characters + +The control characters 0x00-0x1F and / are replaced as in the [default +restricted characters set](https://rclone.org/overview/#restricted-characters). Note +that 0x7F is not replaced. + +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), +as they can\[aq]t be used in JSON strings. + + +### Standard options + +Here are the Standard options specific to qingstor (QingCloud Object Storage). + +#### --qingstor-env-auth + +Get QingStor credentials from runtime. + +Only applies if access_key_id and secret_access_key is blank. + Properties: -.IP \[bu] 2 -Config: zone -.IP \[bu] 2 -Env Var: RCLONE_QINGSTOR_ZONE -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]pek3a\[dq] -.RS 2 -.IP \[bu] 2 -The Beijing (China) Three Zone. -.IP \[bu] 2 -Needs location constraint pek3a. -.RE -.IP \[bu] 2 -\[dq]sh1a\[dq] -.RS 2 -.IP \[bu] 2 -The Shanghai (China) First Zone. -.IP \[bu] 2 -Needs location constraint sh1a. -.RE -.IP \[bu] 2 -\[dq]gd2a\[dq] -.RS 2 -.IP \[bu] 2 -The Guangdong (China) Second Zone. -.IP \[bu] 2 -Needs location constraint gd2a. -.RE -.RE -.SS Advanced options -.PP -Here are the Advanced options specific to qingstor (QingCloud Object -Storage). -.SS --qingstor-connection-retries -.PP + +- Config: env_auth +- Env Var: RCLONE_QINGSTOR_ENV_AUTH +- Type: bool +- Default: false +- Examples: + - \[dq]false\[dq] + - Enter QingStor credentials in the next step. + - \[dq]true\[dq] + - Get QingStor credentials from the environment (env vars or IAM). + +#### --qingstor-access-key-id + +QingStor Access Key ID. + +Leave blank for anonymous access or runtime credentials. 
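+
+For example, a config section using fixed credentials might look like
+this (a sketch; the key values are placeholders):
+
+    [remote]
+    type = qingstor
+    env_auth = false
+    access_key_id = YOUR_ACCESS_KEY
+    secret_access_key = YOUR_SECRET_KEY
+    zone = pek3a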
+ +Properties: + +- Config: access_key_id +- Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID +- Type: string +- Required: false + +#### --qingstor-secret-access-key + +QingStor Secret Access Key (password). + +Leave blank for anonymous access or runtime credentials. + +Properties: + +- Config: secret_access_key +- Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY +- Type: string +- Required: false + +#### --qingstor-endpoint + +Enter an endpoint URL to connection QingStor API. + +Leave blank will use the default value \[dq]https://qingstor.com:443\[dq]. + +Properties: + +- Config: endpoint +- Env Var: RCLONE_QINGSTOR_ENDPOINT +- Type: string +- Required: false + +#### --qingstor-zone + +Zone to connect to. + +Default is \[dq]pek3a\[dq]. + +Properties: + +- Config: zone +- Env Var: RCLONE_QINGSTOR_ZONE +- Type: string +- Required: false +- Examples: + - \[dq]pek3a\[dq] + - The Beijing (China) Three Zone. + - Needs location constraint pek3a. + - \[dq]sh1a\[dq] + - The Shanghai (China) First Zone. + - Needs location constraint sh1a. + - \[dq]gd2a\[dq] + - The Guangdong (China) Second Zone. + - Needs location constraint gd2a. + +### Advanced options + +Here are the Advanced options specific to qingstor (QingCloud Object Storage). + +#### --qingstor-connection-retries + Number of connection retries. -.PP + Properties: -.IP \[bu] 2 -Config: connection_retries -.IP \[bu] 2 -Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES -.IP \[bu] 2 -Type: int -.IP \[bu] 2 -Default: 3 -.SS --qingstor-upload-cutoff -.PP + +- Config: connection_retries +- Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES +- Type: int +- Default: 3 + +#### --qingstor-upload-cutoff + Cutoff for switching to chunked upload. -.PP + Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5 GiB. -.PP + Properties: -.IP \[bu] 2 -Config: upload_cutoff -.IP \[bu] 2 -Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 200Mi -.SS --qingstor-chunk-size -.PP + +- Config: upload_cutoff +- Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF +- Type: SizeSuffix +- Default: 200Mi + +#### --qingstor-chunk-size + Chunk size to use for uploading. -.PP -When uploading files larger than upload_cutoff they will be uploaded as -multipart uploads using this chunk size. -.PP -Note that \[dq]--qingstor-upload-concurrency\[dq] chunks of this size -are buffered in memory per transfer. -.PP + +When uploading files larger than upload_cutoff they will be uploaded +as multipart uploads using this chunk size. + +Note that \[dq]--qingstor-upload-concurrency\[dq] chunks of this size are buffered +in memory per transfer. + If you are transferring large files over high-speed links and you have enough memory, then increasing this will speed up the transfers. -.PP + Properties: -.IP \[bu] 2 -Config: chunk_size -.IP \[bu] 2 -Env Var: RCLONE_QINGSTOR_CHUNK_SIZE -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 4Mi -.SS --qingstor-upload-concurrency -.PP + +- Config: chunk_size +- Env Var: RCLONE_QINGSTOR_CHUNK_SIZE +- Type: SizeSuffix +- Default: 4Mi + +#### --qingstor-upload-concurrency + Concurrency for multipart uploads. -.PP + This is the number of chunks of the same file that are uploaded concurrently. -.PP -NB if you set this to > 1 then the checksums of multipart uploads become -corrupted (the uploads themselves are not corrupted though). -.PP + +NB if you set this to > 1 then the checksums of multipart uploads +become corrupted (the uploads themselves are not corrupted though). 
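+
+For example, on a fast link you might raise the concurrency and chunk
+size together (a sketch only; note the checksum caveat above):
+
+    rclone copy --qingstor-upload-concurrency 4 --qingstor-chunk-size 16Mi /path/to/big-files remote:bucket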
+ If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers. -.PP + Properties: -.IP \[bu] 2 -Config: upload_concurrency -.IP \[bu] 2 -Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY -.IP \[bu] 2 -Type: int -.IP \[bu] 2 -Default: 1 -.SS --qingstor-encoding -.PP + +- Config: upload_concurrency +- Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY +- Type: int +- Default: 1 + +#### --qingstor-encoding + The encoding for the backend. + +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + +Properties: + +- Config: encoding +- Env Var: RCLONE_QINGSTOR_ENCODING +- Type: MultiEncoder +- Default: Slash,Ctl,InvalidUtf8 + + + +## Limitations + +\[ga]rclone about\[ga] is not supported by the qingstor backend. Backends without +this capability cannot determine free space for an rclone mount or +use policy \[ga]mfs\[ga] (most free space) as a member of an rclone union +remote. + +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) + +# Quatrix + +Quatrix by Maytech is [Quatrix Secure Compliant File Sharing | Maytech](https://www.maytech.net/products/quatrix-business). + +Paths are specified as \[ga]remote:path\[ga] + +Paths may be as deep as required, e.g., \[ga]remote:directory/subdirectory\[ga]. + +The initial setup for Quatrix involves getting an API Key from Quatrix. You can get the API key in the user\[aq]s profile at \[ga]https:///profile/api-keys\[ga] +or with the help of the API - https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create. + +See complete Swagger documentation for Quatrix - https://docs.maytech.net/quatrix/quatrix-api/api-explorer + +## Configuration + +Here is an example of how to make a remote called \[ga]remote\[ga]. First run: + + rclone config + +This will guide you through an interactive setup process: +\f[R] +.fi .PP +No remotes found, make a new one? +n) New remote s) Set configuration password q) Quit config n/s/q> n +name> remote Type of storage to configure. +Choose a number from below, or type in your own value [snip] XX / +Quatrix by Maytech \ \[dq]quatrix\[dq] [snip] Storage> quatrix API key +for accessing Quatrix account. +api_key> your_api_key Host name of Quatrix account. +host> example.quatrix.it +.PP +.TS +tab(@); +lw(20.4n). +T{ +[remote] api_key = your_api_key host = example.quatrix.it +T} +_ +T{ +y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y +\[ga]\[ga]\[ga] +T} +T{ +Once configured you can then use \f[C]rclone\f[R] like this, +T} +T{ +List directories in top level of your Quatrix +T} +T{ +rclone lsd remote: +T} +T{ +List all the files in your Quatrix +T} +T{ +rclone ls remote: +T} +T{ +To copy a local directory to an Quatrix directory called backup +T} +T{ +rclone copy /home/source remote:backup +T} +T{ +### API key validity +T} +T{ +API Key is created with no expiration date. +It will be valid until you delete or deactivate it in your account. +After disabling, the API Key can be enabled back. +If the API Key was deleted and a new key was created, you can update it +in rclone config. +The same happens if the hostname was changed. 
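
For example (a sketch; the remote name and key value are placeholders),
a new API key can be set non-interactively with:

    rclone config update remote api_key YOUR_NEW_API_KEY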
+T} +T{ +\[ga]\[ga]\[ga] $ rclone config Current remotes: +T} +T{ +Name Type ==== ==== remote quatrix +T} +T{ +e) Edit existing remote n) New remote d) Delete remote r) Rename remote +c) Copy remote s) Set configuration password q) Quit config +e/n/d/r/c/s/q> e Choose a number from below, or type in an existing +value 1 > remote remote> remote +T} +.TE +.PP +[remote] type = quatrix host = some_host.quatrix.it api_key = +your_api_key -------------------- Edit remote Option api_key. +API key for accessing Quatrix account Enter a string value. +Press Enter for the default (your_api_key) api_key> Option host. +Host name of Quatrix account Enter a string value. +Press Enter for the default (some_host.quatrix.it). +.PP +.TS +tab(@); +lw(20.4n). +T{ +[remote] type = quatrix host = some_host.quatrix.it api_key = +your_api_key +T} +_ +T{ +y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y +\[ga]\[ga]\[ga] +T} +T{ +### Modified time and hashes +T} +T{ +Quatrix allows modification times to be set on objects accurate to 1 +microsecond. +These will be used to detect whether objects need syncing or not. +T} +T{ +Quatrix does not support hashes, so you cannot use the +\f[C]--checksum\f[R] flag. +T} +T{ +### Restricted filename characters +T} +T{ +File names in Quatrix are case sensitive and have limitations like the +maximum length of a filename is 255, and the minimum length is 1. +A file name cannot be equal to \f[C].\f[R] or \f[C]..\f[R] nor contain +\f[C]/\f[R] , \f[C]\[rs]\f[R] or non-printable ascii. +T} +T{ +### Transfers +T} +T{ +For files above 50 MiB rclone will use a chunked transfer. +Rclone will upload up to \f[C]--transfers\f[R] chunks at the same time +(shared among all multipart uploads). +Chunks are buffered in memory, and the minimal chunk size is 10_000_000 +bytes by default, and it can be changed in the advanced configuration, +so increasing \f[C]--transfers\f[R] will increase the memory use. +The chunk size has a maximum size limit, which is set to 100_000_000 +bytes by default and can be changed in the advanced configuration. +The size of the uploaded chunk will dynamically change depending on the +upload speed. +The total memory use equals the number of transfers multiplied by the +minimal chunk size. +In case there\[aq]s free memory allocated for the upload (which equals +the difference of \f[C]maximal_summary_chunk_size\f[R] and +\f[C]minimal_chunk_size\f[R] * \f[C]transfers\f[R]), the chunk size may +increase in case of high upload speed. +As well as it can decrease in case of upload speed problems. +If no free memory is available, all chunks will equal +\f[C]minimal_chunk_size\f[R]. +T} +T{ +### Deleting files +T} +T{ +Files you delete with rclone will end up in Trash and be stored there +for 30 days. +Quatrix also provides an API to permanently delete files and an API to +empty the Trash so that you can remove files permanently from your +account. +T} +T{ +### Standard options +T} +T{ +Here are the Standard options specific to quatrix (Quatrix by Maytech). +T} +T{ +#### --quatrix-api-key +T} +T{ +API key for accessing Quatrix account +T} +T{ +Properties: +T} +T{ +- Config: api_key - Env Var: RCLONE_QUATRIX_API_KEY - Type: string - +Required: true +T} +T{ +#### --quatrix-host +T} +T{ +Host name of Quatrix account +T} +T{ +Properties: +T} +T{ +- Config: host - Env Var: RCLONE_QUATRIX_HOST - Type: string - Required: +true +T} +T{ +### Advanced options +T} +T{ +Here are the Advanced options specific to quatrix (Quatrix by Maytech). 
.SS Standard options
.PP
Here are the Standard options specific to quatrix (Quatrix by Maytech).
.SS --quatrix-api-key
.PP
API key for accessing Quatrix account.
.PP
Properties:
.IP \[bu] 2
Config: api_key
.IP \[bu] 2
Env Var: RCLONE_QUATRIX_API_KEY
.IP \[bu] 2
Type: string
.IP \[bu] 2
Required: true
.SS --quatrix-host
.PP
Host name of Quatrix account.
.PP
Properties:
.IP \[bu] 2
Config: host
.IP \[bu] 2
Env Var: RCLONE_QUATRIX_HOST
.IP \[bu] 2
Type: string
.IP \[bu] 2
Required: true
.SS Advanced options
.PP
Here are the Advanced options specific to quatrix (Quatrix by Maytech).
.SS --quatrix-encoding
.PP
The encoding for the backend.
.PP
See the encoding section in the
overview (https://rclone.org/overview/#encoding) for more info.
-.PP
-Properties:
-.IP \[bu] 2
-Config: encoding
-.IP \[bu] 2
-Env Var: RCLONE_QINGSTOR_ENCODING
-.IP \[bu] 2
-Type: MultiEncoder
-.IP \[bu] 2
-Default: Slash,Ctl,InvalidUtf8
-.SS Limitations
-.PP
-\f[C]rclone about\f[R] is not supported by the qingstor backend.
-Backends without this capability cannot determine free space for an
-rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member
-of an rclone union remote.
-.PP
-See List of backends that do not support rclone
-about (https://rclone.org/overview/#optional-features) and rclone
-about (https://rclone.org/commands/rclone_about/)
-.SH Sia
-.PP
.PP
Properties:
.IP \[bu] 2
Config: encoding
.IP \[bu] 2
Env Var: RCLONE_QUATRIX_ENCODING
.IP \[bu] 2
Type: MultiEncoder
.IP \[bu] 2
Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
.SS --quatrix-effective-upload-time
.PP
Target upload time for one chunk.
.PP
Properties:
.IP \[bu] 2
Config: effective_upload_time
.IP \[bu] 2
Env Var: RCLONE_QUATRIX_EFFECTIVE_UPLOAD_TIME
.IP \[bu] 2
Type: string
.IP \[bu] 2
Default: \[dq]4s\[dq]
.SS --quatrix-minimal-chunk-size
.PP
The minimal size for one chunk.
.PP
Properties:
.IP \[bu] 2
Config: minimal_chunk_size
.IP \[bu] 2
Env Var: RCLONE_QUATRIX_MINIMAL_CHUNK_SIZE
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
Default: 9.537Mi
.SS --quatrix-maximal-summary-chunk-size
.PP
The maximal summary size for all chunks.
It should not be less than
\[aq]transfers\[aq]*\[aq]minimal_chunk_size\[aq].
.PP
Properties:
.IP \[bu] 2
Config: maximal_summary_chunk_size
.IP \[bu] 2
Env Var: RCLONE_QUATRIX_MAXIMAL_SUMMARY_CHUNK_SIZE
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
Default: 95.367Mi
.SS --quatrix-hard-delete
.PP
Delete files permanently rather than putting them into the trash.
.PP
Properties:
.IP \[bu] 2
Config: hard_delete
.IP \[bu] 2
Env Var: RCLONE_QUATRIX_HARD_DELETE
.IP \[bu] 2
Type: bool
.IP \[bu] 2
Default: false
.SS Storage usage
.PP
Storage usage in Quatrix is limited by the quota purchased for the
account.
Individual users can be restricted to a smaller storage limit; the
account limit applies to any user without a custom limit.
Once the limit is reached, the upload of files will fail.
This can be fixed by freeing up space or increasing the quota.
.SS Server-side operations
.PP
Quatrix supports server-side operations (copy and move).
In case of a conflict, files are overwritten during the server-side
operation.
.SH Sia
.PP
Sia (sia.tech (https://sia.tech/)) is a decentralized cloud storage
platform based on the
blockchain (https://wikipedia.org/wiki/Blockchain) technology.
@@ -51524,22 +50481,27 @@
Siacoins and Wallet, Blockchain and Consensus, Renting and Hosting, and
so on.
If you are new to it, it is best to first familiarize yourself using
their excellent support documentation (https://support.sia.tech/).
.SS Introduction
.PP
Before you can use rclone with Sia, you will need to have a running copy
of \f[C]Sia-UI\f[R] or \f[C]siad\f[R] (the Sia daemon) locally on your
computer or on your local network (e.g. a NAS).
Please follow the Get started (https://sia.tech/get-started) guide and
install one.
.PP
rclone interacts with the Sia network by talking to the Sia daemon via
its HTTP API (https://sia.tech/docs/), which is usually available on
port \f[I]9980\f[R].
By default you will run the daemon locally on the same computer so
it\[aq]s safe to leave the API password blank (the API URL will be
\f[C]http://127.0.0.1:9980\f[R] making external access impossible).
-.PP
+.PP
However, if you want to access Sia daemon running on another node, for
example due to memory constraints or because you want to share single
daemon between several rclone and Sia-UI instances, you\[aq]ll need to
@@ -51555,7 +50517,8 @@
variable \f[C]SIA_API_PASSWORD\f[R] or text file named
\f[C]apipassword\f[R] in the daemon directory.
- Set rclone backend option \f[C]api_password\f[R] taking it from above
locations.
-.PP
+.PP
Notes: 1. If your wallet is locked, rclone cannot unlock it
automatically.
You should either unlock it in advance by using Sia-UI or via command
@@ -51577,2800 +50540,2839 @@
The only way to use \f[C]siad\f[R] without API password is to run it
\f[B]on localhost\f[R] with command line argument
\f[C]--authorize-api=false\f[R], but this is insecure and \f[B]strongly
discouraged\f[R].
-.SS Configuration
-.PP
+.SS Configuration
+.PP
Here is an example of how to make a \f[C]sia\f[R] remote called
\f[C]mySia\f[R].
First, run:
-.IP
-.nf
-\f[C]
- rclone config
-\f[R]
-.fi
-.PP
+.IP
+.nf
+\f[C]
+ rclone config
+\f[R]
+.fi
+.PP
This will guide you through an interactive setup process:
-.IP
-.nf
-\f[C]
-No remotes found, make a new one?
-n) New remote
-s) Set configuration password
-q) Quit config
-n/s/q> n
-name> mySia
-Type of storage to configure.
-Enter a string value. Press Enter for the default (\[dq]\[dq]).
-Choose a number from below, or type in your own value
-\&...
-29 / Sia Decentralized Cloud
- \[rs] \[dq]sia\[dq]
-\&...
-Storage> sia
-Sia daemon API URL, like http://sia.daemon.host:9980.
-Note that siad must run with --disable-api-security to open API port for other hosts (not recommended).
-Keep default if Sia daemon runs on localhost.
-Enter a string value. Press Enter for the default (\[dq]http://127.0.0.1:9980\[dq]).
-api_url> http://127.0.0.1:9980
-Sia Daemon API Password.
-Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory.
-y) Yes type in my own password
-g) Generate random password
-n) No leave this optional password blank (default)
-y/g/n> y
-Enter the password:
-password:
-Confirm the password:
-password:
-Edit advanced config?
-y) Yes
-n) No (default)
-y/n> n
--------------------
-[mySia]
-type = sia
-api_url = http://127.0.0.1:9980
-api_password = *** ENCRYPTED ***
--------------------
-y) Yes this is OK (default)
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-\f[R]
-.fi
-.PP
-Once configured, you can then use \f[C]rclone\f[R] like this:
-.IP \[bu] 2
-List directories in top level of your Sia storage
-.IP
-.nf
-\f[C]
-rclone lsd mySia:
-\f[R]
-.fi
-.IP \[bu] 2
-List all the files in your Sia storage
-.IP
-.nf
-\f[C]
-rclone ls mySia:
-\f[R]
-.fi
-.IP \[bu] 2
-Upload a local directory to the Sia directory called \f[I]backup\f[R]
-.IP
-.nf
-\f[C]
-rclone copy /home/source mySia:backup
-\f[R]
-.fi
-.SS Standard options
-.PP
-Here are the Standard options specific to sia (Sia Decentralized Cloud).
-.SS --sia-api-url
-.PP
-Sia daemon API URL, like http://sia.daemon.host:9980.
-.PP
+.PP
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> mySia
+Type of storage to configure.
+Enter a string value.
+Press Enter for the default (\[dq]\[dq]).
+Choose a number from below, or type in your own value
+...
+29 / Sia Decentralized Cloud \ \[dq]sia\[dq] ... +Storage> sia Sia daemon API URL, like http://sia.daemon.host:9980. Note that siad must run with --disable-api-security to open API port for other hosts (not recommended). Keep default if Sia daemon runs on localhost. -.PP -Properties: -.IP \[bu] 2 -Config: api_url -.IP \[bu] 2 -Env Var: RCLONE_SIA_API_URL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]http://127.0.0.1:9980\[dq] -.SS --sia-api-password -.PP -Sia Daemon API Password. -.PP +Enter a string value. +Press Enter for the default (\[dq]http://127.0.0.1:9980\[dq]). +api_url> http://127.0.0.1:9980 Sia Daemon API Password. Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory. -.PP -\f[B]NB\f[R] Input to this must be obscured - see rclone -obscure (https://rclone.org/commands/rclone_obscure/). -.PP -Properties: -.IP \[bu] 2 -Config: api_password -.IP \[bu] 2 -Env Var: RCLONE_SIA_API_PASSWORD -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS Advanced options -.PP -Here are the Advanced options specific to sia (Sia Decentralized Cloud). -.SS --sia-user-agent -.PP -Siad User Agent -.PP -Sia daemon requires the \[aq]Sia-Agent\[aq] user agent by default for -security -.PP -Properties: -.IP \[bu] 2 -Config: user_agent -.IP \[bu] 2 -Env Var: RCLONE_SIA_USER_AGENT -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]Sia-Agent\[dq] -.SS --sia-encoding -.PP -The encoding for the backend. -.PP -See the encoding section in the -overview (https://rclone.org/overview/#encoding) for more info. -.PP -Properties: -.IP \[bu] 2 -Config: encoding -.IP \[bu] 2 -Env Var: RCLONE_SIA_ENCODING -.IP \[bu] 2 -Type: MultiEncoder -.IP \[bu] 2 -Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot -.SS Limitations -.IP \[bu] 2 -Modification times not supported -.IP \[bu] 2 -Checksums not supported -.IP \[bu] 2 -\f[C]rclone about\f[R] not supported -.IP \[bu] 2 -rclone can work only with \f[I]Siad\f[R] or \f[I]Sia-UI\f[R] at the -moment, the \f[B]SkyNet daemon is not supported yet.\f[R] -.IP \[bu] 2 -Sia does not allow control characters or symbols like question and pound -signs in file names. -rclone will transparently encode (https://rclone.org/overview/#encoding) -them for you, but you\[aq]d better be aware -.SH Swift -.PP -Swift refers to OpenStack Object -Storage (https://docs.openstack.org/swift/latest/). -Commercial implementations of that being: -.IP \[bu] 2 -Rackspace Cloud Files (https://www.rackspace.com/cloud/files/) -.IP \[bu] 2 -Memset Memstore (https://www.memset.com/cloud/storage/) -.IP \[bu] 2 -OVH Object -Storage (https://www.ovh.co.uk/public-cloud/storage/object-storage/) -.IP \[bu] 2 -Oracle Cloud -Storage (https://docs.oracle.com/en-us/iaas/integration/doc/configure-object-storage.html) -.IP \[bu] 2 -Blomp Cloud Storage (https://www.blomp.com/cloud-storage/) -.IP \[bu] 2 -IBM Bluemix Cloud ObjectStorage -Swift (https://console.bluemix.net/docs/infrastructure/objectstorage-swift/index.html) -.PP -Paths are specified as \f[C]remote:container\f[R] (or \f[C]remote:\f[R] -for the \f[C]lsd\f[R] command.) You may put subdirectories in too, e.g. -\f[C]remote:container/path/to/dir\f[R]. -.SS Configuration -.PP -Here is an example of making a swift configuration. -First run -.IP -.nf -\f[C] -rclone config -\f[R] -.fi -.PP -This will guide you through an interactive setup process. -.IP -.nf -\f[C] -No remotes found, make a new one? 
-n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -name> remote -Type of storage to configure. -Choose a number from below, or type in your own value -[snip] -XX / OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH) - \[rs] \[dq]swift\[dq] -[snip] -Storage> swift -Get swift credentials from environment variables in standard OpenStack form. -Choose a number from below, or type in your own value - 1 / Enter swift credentials in the next step - \[rs] \[dq]false\[dq] - 2 / Get swift credentials from environment vars. Leave other fields blank if using this. - \[rs] \[dq]true\[dq] -env_auth> true -User name to log in (OS_USERNAME). -user> -API key or password (OS_PASSWORD). -key> -Authentication URL for server (OS_AUTH_URL). -Choose a number from below, or type in your own value - 1 / Rackspace US - \[rs] \[dq]https://auth.api.rackspacecloud.com/v1.0\[dq] - 2 / Rackspace UK - \[rs] \[dq]https://lon.auth.api.rackspacecloud.com/v1.0\[dq] - 3 / Rackspace v2 - \[rs] \[dq]https://identity.api.rackspacecloud.com/v2.0\[dq] - 4 / Memset Memstore UK - \[rs] \[dq]https://auth.storage.memset.com/v1.0\[dq] - 5 / Memset Memstore UK v2 - \[rs] \[dq]https://auth.storage.memset.com/v2.0\[dq] - 6 / OVH - \[rs] \[dq]https://auth.cloud.ovh.net/v3\[dq] - 7 / Blomp Cloud Storage - \[rs] \[dq]https://authenticate.ain.net\[dq] -auth> -User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). -user_id> -User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) -domain> -Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) -tenant> -Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) -tenant_id> -Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) -tenant_domain> -Region name - optional (OS_REGION_NAME) -region> -Storage URL - optional (OS_STORAGE_URL) -storage_url> -Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) -auth_token> -AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) -auth_version> -Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) -Choose a number from below, or type in your own value - 1 / Public (default, choose this if not sure) - \[rs] \[dq]public\[dq] - 2 / Internal (use internal service net) - \[rs] \[dq]internal\[dq] - 3 / Admin - \[rs] \[dq]admin\[dq] -endpoint_type> -Remote config --------------------- -[test] -env_auth = true -user = -key = -auth = -user_id = -domain = -tenant = -tenant_id = -tenant_domain = -region = -storage_url = -auth_token = -auth_version = -endpoint_type = --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -\f[R] -.fi -.PP -This remote is called \f[C]remote\f[R] and can now be used like this -.PP -See all containers -.IP -.nf -\f[C] -rclone lsd remote: -\f[R] -.fi -.PP -Make a new container -.IP -.nf -\f[C] -rclone mkdir remote:container -\f[R] -.fi -.PP -List the contents of a container -.IP -.nf -\f[C] -rclone ls remote:container -\f[R] -.fi -.PP -Sync \f[C]/home/local/directory\f[R] to the remote container, deleting -any excess files in the container. 
-.IP -.nf -\f[C] -rclone sync --interactive /home/local/directory remote:container -\f[R] -.fi -.SS Configuration from an OpenStack credentials file -.PP -An OpenStack credentials file typically looks something something like -this (without the comments) -.IP -.nf -\f[C] -export OS_AUTH_URL=https://a.provider.net/v2.0 -export OS_TENANT_ID=ffffffffffffffffffffffffffffffff -export OS_TENANT_NAME=\[dq]1234567890123456\[dq] -export OS_USERNAME=\[dq]123abc567xy\[dq] -echo \[dq]Please enter your OpenStack Password: \[dq] -read -sr OS_PASSWORD_INPUT -export OS_PASSWORD=$OS_PASSWORD_INPUT -export OS_REGION_NAME=\[dq]SBG1\[dq] -if [ -z \[dq]$OS_REGION_NAME\[dq] ]; then unset OS_REGION_NAME; fi -\f[R] -.fi -.PP -The config file needs to look something like this where -\f[C]$OS_USERNAME\f[R] represents the value of the \f[C]OS_USERNAME\f[R] -variable - \f[C]123abc567xy\f[R] in the example above. -.IP -.nf -\f[C] -[remote] -type = swift -user = $OS_USERNAME -key = $OS_PASSWORD -auth = $OS_AUTH_URL -tenant = $OS_TENANT_NAME -\f[R] -.fi -.PP -Note that you may (or may not) need to set \f[C]region\f[R] too - try -without first. -.SS Configuration from the environment -.PP -If you prefer you can configure rclone to use swift using a standard set -of OpenStack environment variables. -.PP -When you run through the config, make sure you choose \f[C]true\f[R] for -\f[C]env_auth\f[R] and leave everything else blank. -.PP -rclone will then set any empty config parameters from the environment -using standard OpenStack environment variables. -There is a list of the -variables (https://godoc.org/github.com/ncw/swift#Connection.ApplyEnvironment) -in the docs for the swift library. -.SS Using an alternate authentication method -.PP -If your OpenStack installation uses a non-standard authentication method -that might not be yet supported by rclone or the underlying swift -library, you can authenticate externally (e.g. -calling manually the \f[C]openstack\f[R] commands to get a token). -Then, you just need to pass the two configuration variables -\f[C]auth_token\f[R] and \f[C]storage_url\f[R]. -If they are both provided, the other variables are ignored. -rclone will not try to authenticate but instead assume it is already -authenticated and use these two variables to access the OpenStack -installation. -.SS Using rclone without a config file -.PP -You can use rclone with swift without a config file, if desired, like -this: -.IP -.nf -\f[C] -source openstack-credentials-file -export RCLONE_CONFIG_MYREMOTE_TYPE=swift -export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true -rclone lsd myremote: -\f[R] -.fi -.SS --fast-list -.PP -This remote supports \f[C]--fast-list\f[R] which allows you to use fewer -transactions in exchange for more memory. -See the rclone docs (https://rclone.org/docs/#fast-list) for more -details. -.SS --update and --use-server-modtime -.PP -As noted below, the modified time is stored on metadata on the object. -It is used by default for all operations that require checking the time -a file was last updated. -It allows rclone to treat the remote more like a true filesystem, but it -is inefficient because it requires an extra API call to retrieve the -metadata. -.PP -For many operations, the time the object was last uploaded to the remote -is sufficient to determine if it is \[dq]dirty\[dq]. -By using \f[C]--update\f[R] along with \f[C]--use-server-modtime\f[R], -you can avoid the extra API call and simply upload files whose local -modtime is newer than the time it was last uploaded. 
-.SS Modified time -.PP -The modified time is stored as metadata on the object as -\f[C]X-Object-Meta-Mtime\f[R] as floating point since the epoch accurate -to 1 ns. -.PP -This is a de facto standard (used in the official python-swiftclient -amongst others) for storing the modification time for an object. -.SS Restricted filename characters -.PP -.TS -tab(@); -l c c. -T{ -Character -T}@T{ -Value -T}@T{ -Replacement -T} -_ -T{ -NUL -T}@T{ -0x00 -T}@T{ -\[u2400] -T} -T{ -/ -T}@T{ -0x2F -T}@T{ -\[uFF0F] +y) Yes type in my own password g) Generate random password n) No leave +this optional password blank (default) y/g/n> y Enter the password: +password: Confirm the password: password: Edit advanced config? +y) Yes n) No (default) y/n> n T} .TE .PP -Invalid UTF-8 bytes will also be -replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t -be used in JSON strings. -.SS Standard options +[mySia] type = sia api_url = http://127.0.0.1:9980 api_password = *** +ENCRYPTED *** -------------------- y) Yes this is OK (default) e) Edit +this remote d) Delete this remote y/e/d> y +.IP +.nf +\f[C] +Once configured, you can then use \[ga]rclone\[ga] like this: + +- List directories in top level of your Sia storage +\f[R] +.fi .PP -Here are the Standard options specific to swift (OpenStack Swift -(Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)). -.SS --swift-env-auth +rclone lsd mySia: +.IP +.nf +\f[C] +- List all the files in your Sia storage +\f[R] +.fi .PP -Get swift credentials from environment variables in standard OpenStack -form. +rclone ls mySia: +.IP +.nf +\f[C] +- Upload a local directory to the Sia directory called _backup_ +\f[R] +.fi .PP +rclone copy /home/source mySia:backup +.IP +.nf +\f[C] + +### Standard options + +Here are the Standard options specific to sia (Sia Decentralized Cloud). + +#### --sia-api-url + +Sia daemon API URL, like http://sia.daemon.host:9980. + +Note that siad must run with --disable-api-security to open API port for other hosts (not recommended). +Keep default if Sia daemon runs on localhost. + Properties: -.IP \[bu] 2 -Config: env_auth -.IP \[bu] 2 -Env Var: RCLONE_SWIFT_ENV_AUTH -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]false\[dq] -.RS 2 -.IP \[bu] 2 -Enter swift credentials in the next step. -.RE -.IP \[bu] 2 -\[dq]true\[dq] -.RS 2 -.IP \[bu] 2 -Get swift credentials from environment vars. -.IP \[bu] 2 + +- Config: api_url +- Env Var: RCLONE_SIA_API_URL +- Type: string +- Default: \[dq]http://127.0.0.1:9980\[dq] + +#### --sia-api-password + +Sia Daemon API Password. + +Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory. + +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + +Properties: + +- Config: api_password +- Env Var: RCLONE_SIA_API_PASSWORD +- Type: string +- Required: false + +### Advanced options + +Here are the Advanced options specific to sia (Sia Decentralized Cloud). + +#### --sia-user-agent + +Siad User Agent + +Sia daemon requires the \[aq]Sia-Agent\[aq] user agent by default for security + +Properties: + +- Config: user_agent +- Env Var: RCLONE_SIA_USER_AGENT +- Type: string +- Default: \[dq]Sia-Agent\[dq] + +#### --sia-encoding + +The encoding for the backend. + +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. 
+ +Properties: + +- Config: encoding +- Env Var: RCLONE_SIA_ENCODING +- Type: MultiEncoder +- Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot + + + +## Limitations + +- Modification times not supported +- Checksums not supported +- \[ga]rclone about\[ga] not supported +- rclone can work only with _Siad_ or _Sia-UI_ at the moment, + the **SkyNet daemon is not supported yet.** +- Sia does not allow control characters or symbols like question and pound + signs in file names. rclone will transparently [encode](https://rclone.org/overview/#encoding) + them for you, but you\[aq]d better be aware + +# Swift + +Swift refers to [OpenStack Object Storage](https://docs.openstack.org/swift/latest/). +Commercial implementations of that being: + + * [Rackspace Cloud Files](https://www.rackspace.com/cloud/files/) + * [Memset Memstore](https://www.memset.com/cloud/storage/) + * [OVH Object Storage](https://www.ovh.co.uk/public-cloud/storage/object-storage/) + * [Oracle Cloud Storage](https://docs.oracle.com/en-us/iaas/integration/doc/configure-object-storage.html) + * [Blomp Cloud Storage](https://www.blomp.com/cloud-storage/) + * [IBM Bluemix Cloud ObjectStorage Swift](https://console.bluemix.net/docs/infrastructure/objectstorage-swift/index.html) + +Paths are specified as \[ga]remote:container\[ga] (or \[ga]remote:\[ga] for the \[ga]lsd\[ga] +command.) You may put subdirectories in too, e.g. \[ga]remote:container/path/to/dir\[ga]. + +## Configuration + +Here is an example of making a swift configuration. First run + + rclone config + +This will guide you through an interactive setup process. +\f[R] +.fi +.PP +No remotes found, make a new one? +n) New remote s) Set configuration password q) Quit config n/s/q> n +name> remote Type of storage to configure. +Choose a number from below, or type in your own value [snip] XX / +OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset +Memstore, OVH) \ \[dq]swift\[dq] [snip] Storage> swift Get swift +credentials from environment variables in standard OpenStack form. +Choose a number from below, or type in your own value 1 / Enter swift +credentials in the next step \ \[dq]false\[dq] 2 / Get swift credentials +from environment vars. Leave other fields blank if using this. -.RE -.RE -.SS --swift-user -.PP -User name to log in (OS_USERNAME). -.PP -Properties: -.IP \[bu] 2 -Config: user -.IP \[bu] 2 -Env Var: RCLONE_SWIFT_USER -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --swift-key -.PP -API key or password (OS_PASSWORD). -.PP -Properties: -.IP \[bu] 2 -Config: key -.IP \[bu] 2 -Env Var: RCLONE_SWIFT_KEY -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --swift-auth -.PP -Authentication URL for server (OS_AUTH_URL). 
-.PP -Properties: -.IP \[bu] 2 -Config: auth -.IP \[bu] 2 -Env Var: RCLONE_SWIFT_AUTH -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]https://auth.api.rackspacecloud.com/v1.0\[dq] -.RS 2 -.IP \[bu] 2 -Rackspace US -.RE -.IP \[bu] 2 -\[dq]https://lon.auth.api.rackspacecloud.com/v1.0\[dq] -.RS 2 -.IP \[bu] 2 -Rackspace UK -.RE -.IP \[bu] 2 -\[dq]https://identity.api.rackspacecloud.com/v2.0\[dq] -.RS 2 -.IP \[bu] 2 -Rackspace v2 -.RE -.IP \[bu] 2 -\[dq]https://auth.storage.memset.com/v1.0\[dq] -.RS 2 -.IP \[bu] 2 -Memset Memstore UK -.RE -.IP \[bu] 2 -\[dq]https://auth.storage.memset.com/v2.0\[dq] -.RS 2 -.IP \[bu] 2 -Memset Memstore UK v2 -.RE -.IP \[bu] 2 -\[dq]https://auth.cloud.ovh.net/v3\[dq] -.RS 2 -.IP \[bu] 2 -OVH -.RE -.IP \[bu] 2 -\[dq]https://authenticate.ain.net\[dq] -.RS 2 -.IP \[bu] 2 -Blomp Cloud Storage -.RE -.RE -.SS --swift-user-id -.PP -User ID to log in - optional - most swift systems use user and leave -this blank (v3 auth) (OS_USER_ID). -.PP -Properties: -.IP \[bu] 2 -Config: user_id -.IP \[bu] 2 -Env Var: RCLONE_SWIFT_USER_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --swift-domain -.PP -User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) -.PP -Properties: -.IP \[bu] 2 -Config: domain -.IP \[bu] 2 -Env Var: RCLONE_SWIFT_DOMAIN -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --swift-tenant -.PP +\ \[dq]true\[dq] env_auth> true User name to log in (OS_USERNAME). +user> API key or password (OS_PASSWORD). +key> Authentication URL for server (OS_AUTH_URL). +Choose a number from below, or type in your own value 1 / Rackspace US +\ \[dq]https://auth.api.rackspacecloud.com/v1.0\[dq] 2 / Rackspace UK +\ \[dq]https://lon.auth.api.rackspacecloud.com/v1.0\[dq] 3 / Rackspace +v2 \ \[dq]https://identity.api.rackspacecloud.com/v2.0\[dq] 4 / Memset +Memstore UK \ \[dq]https://auth.storage.memset.com/v1.0\[dq] 5 / Memset +Memstore UK v2 \ \[dq]https://auth.storage.memset.com/v2.0\[dq] 6 / OVH +\ \[dq]https://auth.cloud.ovh.net/v3\[dq] 7 / Blomp Cloud Storage +\ \[dq]https://authenticate.ain.net\[dq] auth> User ID to log in - +optional - most swift systems use user and leave this blank (v3 auth) +(OS_USER_ID). +user_id> User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) domain> Tenant name - optional for v1 auth, this or tenant_id required otherwise -(OS_TENANT_NAME or OS_PROJECT_NAME). 
(OS_TENANT_NAME or OS_PROJECT_NAME)
+tenant>
+Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+tenant_id>
+Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+tenant_domain>
+Region name - optional (OS_REGION_NAME)
+region>
+Storage URL - optional (OS_STORAGE_URL)
+storage_url>
+Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+auth_token>
+AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+auth_version>
+Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
+Choose a number from below, or type in your own value
+ 1 / Public (default, choose this if not sure)
+   \[rs] \[dq]public\[dq]
+ 2 / Internal (use internal service net)
+   \[rs] \[dq]internal\[dq]
+ 3 / Admin
+   \[rs] \[dq]admin\[dq]
+endpoint_type>
+Remote config
+--------------------
+[test]
+env_auth = true
+user =
+key =
+auth =
+user_id =
+domain =
+tenant =
+tenant_id =
+tenant_domain =
+region =
+storage_url =
+auth_token =
+auth_version =
+endpoint_type =
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+.IP
+.nf
+\f[C]
+This remote is called \[ga]remote\[ga] and can now be used like this
+
+See all containers
+
+    rclone lsd remote:
+
+Make a new container
+
+    rclone mkdir remote:container
+
+List the contents of a container
+
+    rclone ls remote:container
+
+Sync \[ga]/home/local/directory\[ga] to the remote container, deleting any
+excess files in the container.
+
+    rclone sync --interactive /home/local/directory remote:container
+
+### Configuration from an OpenStack credentials file
+
+An OpenStack credentials file typically looks something like this
+(without the comments)
+\f[R]
+.fi
.PP
export OS_AUTH_URL=https://a.provider.net/v2.0
export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
export OS_TENANT_NAME=\[dq]1234567890123456\[dq]
export OS_USERNAME=\[dq]123abc567xy\[dq]
echo \[dq]Please enter your OpenStack Password: \[dq]
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
export OS_REGION_NAME=\[dq]SBG1\[dq]
if [ -z \[dq]$OS_REGION_NAME\[dq] ]; then unset OS_REGION_NAME; fi
+.IP
+.nf
+\f[C]
+The config file needs to look something like this where \[ga]$OS_USERNAME\[ga]
+represents the value of the \[ga]OS_USERNAME\[ga] variable - \[ga]123abc567xy\[ga] in
+the example above.
+\f[R]
+.fi
.PP
[remote]
type = swift
user = $OS_USERNAME
key = $OS_PASSWORD
auth = $OS_AUTH_URL
tenant = $OS_TENANT_NAME
+.IP
+.nf
+\f[C]
+Note that you may (or may not) need to set \[ga]region\[ga] too - try without first.
+
+### Configuration from the environment
+
+If you prefer, you can configure rclone to use swift using a standard
+set of OpenStack environment variables.
+
+When you run through the config, make sure you choose \[ga]true\[ga] for
+\[ga]env_auth\[ga] and leave everything else blank.
+
+rclone will then set any empty config parameters from the environment
+using standard OpenStack environment variables. There is [a list of the
+variables](https://godoc.org/github.com/ncw/swift#Connection.ApplyEnvironment)
+in the docs for the swift library.
+
+### Using an alternate authentication method
+
+If your OpenStack installation uses a non-standard authentication method
+that might not yet be supported by rclone or the underlying swift library,
+you can authenticate externally (e.g. by manually calling the \[ga]openstack\[ga]
+commands to get a token). Then, you just need to pass the two
+configuration variables \[ga]\[ga]auth_token\[ga]\[ga] and \[ga]\[ga]storage_url\[ga]\[ga].
+If they are both provided, the other variables are ignored. rclone will +not try to authenticate but instead assume it is already authenticated +and use these two variables to access the OpenStack installation. + +#### Using rclone without a config file + +You can use rclone with swift without a config file, if desired, like +this: +\f[R] +.fi +.PP +source openstack-credentials-file export +RCLONE_CONFIG_MYREMOTE_TYPE=swift export +RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true rclone lsd myremote: +.IP +.nf +\f[C] +### --fast-list + +This remote supports \[ga]--fast-list\[ga] which allows you to use fewer +transactions in exchange for more memory. See the [rclone +docs](https://rclone.org/docs/#fast-list) for more details. + +### --update and --use-server-modtime + +As noted below, the modified time is stored on metadata on the object. It is +used by default for all operations that require checking the time a file was +last updated. It allows rclone to treat the remote more like a true filesystem, +but it is inefficient because it requires an extra API call to retrieve the +metadata. + +For many operations, the time the object was last uploaded to the remote is +sufficient to determine if it is \[dq]dirty\[dq]. By using \[ga]--update\[ga] along with +\[ga]--use-server-modtime\[ga], you can avoid the extra API call and simply upload +files whose local modtime is newer than the time it was last uploaded. + +### Modified time + +The modified time is stored as metadata on the object as +\[ga]X-Object-Meta-Mtime\[ga] as floating point since the epoch accurate to 1 +ns. + +This is a de facto standard (used in the official python-swiftclient +amongst others) for storing the modification time for an object. + +### Restricted filename characters + +| Character | Value | Replacement | +| --------- |:-----:|:-----------:| +| NUL | 0x00 | \[u2400] | +| / | 0x2F | \[uFF0F] | + +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), +as they can\[aq]t be used in JSON strings. + + +### Standard options + +Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)). + +#### --swift-env-auth + +Get swift credentials from environment variables in standard OpenStack form. + Properties: -.IP \[bu] 2 -Config: tenant -.IP \[bu] 2 -Env Var: RCLONE_SWIFT_TENANT -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --swift-tenant-id -.PP -Tenant ID - optional for v1 auth, this or tenant required otherwise -(OS_TENANT_ID). -.PP + +- Config: env_auth +- Env Var: RCLONE_SWIFT_ENV_AUTH +- Type: bool +- Default: false +- Examples: + - \[dq]false\[dq] + - Enter swift credentials in the next step. + - \[dq]true\[dq] + - Get swift credentials from environment vars. + - Leave other fields blank if using this. + +#### --swift-user + +User name to log in (OS_USERNAME). + Properties: -.IP \[bu] 2 -Config: tenant_id -.IP \[bu] 2 -Env Var: RCLONE_SWIFT_TENANT_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --swift-tenant-domain -.PP + +- Config: user +- Env Var: RCLONE_SWIFT_USER +- Type: string +- Required: false + +#### --swift-key + +API key or password (OS_PASSWORD). + +Properties: + +- Config: key +- Env Var: RCLONE_SWIFT_KEY +- Type: string +- Required: false + +#### --swift-auth + +Authentication URL for server (OS_AUTH_URL). 
+ +Properties: + +- Config: auth +- Env Var: RCLONE_SWIFT_AUTH +- Type: string +- Required: false +- Examples: + - \[dq]https://auth.api.rackspacecloud.com/v1.0\[dq] + - Rackspace US + - \[dq]https://lon.auth.api.rackspacecloud.com/v1.0\[dq] + - Rackspace UK + - \[dq]https://identity.api.rackspacecloud.com/v2.0\[dq] + - Rackspace v2 + - \[dq]https://auth.storage.memset.com/v1.0\[dq] + - Memset Memstore UK + - \[dq]https://auth.storage.memset.com/v2.0\[dq] + - Memset Memstore UK v2 + - \[dq]https://auth.cloud.ovh.net/v3\[dq] + - OVH + - \[dq]https://authenticate.ain.net\[dq] + - Blomp Cloud Storage + +#### --swift-user-id + +User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + +Properties: + +- Config: user_id +- Env Var: RCLONE_SWIFT_USER_ID +- Type: string +- Required: false + +#### --swift-domain + +User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + +Properties: + +- Config: domain +- Env Var: RCLONE_SWIFT_DOMAIN +- Type: string +- Required: false + +#### --swift-tenant + +Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME). + +Properties: + +- Config: tenant +- Env Var: RCLONE_SWIFT_TENANT +- Type: string +- Required: false + +#### --swift-tenant-id + +Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID). + +Properties: + +- Config: tenant_id +- Env Var: RCLONE_SWIFT_TENANT_ID +- Type: string +- Required: false + +#### --swift-tenant-domain + Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME). -.PP + Properties: -.IP \[bu] 2 -Config: tenant_domain -.IP \[bu] 2 -Env Var: RCLONE_SWIFT_TENANT_DOMAIN -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --swift-region -.PP + +- Config: tenant_domain +- Env Var: RCLONE_SWIFT_TENANT_DOMAIN +- Type: string +- Required: false + +#### --swift-region + Region name - optional (OS_REGION_NAME). -.PP + Properties: -.IP \[bu] 2 -Config: region -.IP \[bu] 2 -Env Var: RCLONE_SWIFT_REGION -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --swift-storage-url -.PP + +- Config: region +- Env Var: RCLONE_SWIFT_REGION +- Type: string +- Required: false + +#### --swift-storage-url + Storage URL - optional (OS_STORAGE_URL). -.PP + Properties: -.IP \[bu] 2 -Config: storage_url -.IP \[bu] 2 -Env Var: RCLONE_SWIFT_STORAGE_URL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --swift-auth-token -.PP + +- Config: storage_url +- Env Var: RCLONE_SWIFT_STORAGE_URL +- Type: string +- Required: false + +#### --swift-auth-token + Auth Token from alternate authentication - optional (OS_AUTH_TOKEN). -.PP + Properties: -.IP \[bu] 2 -Config: auth_token -.IP \[bu] 2 -Env Var: RCLONE_SWIFT_AUTH_TOKEN -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --swift-application-credential-id -.PP + +- Config: auth_token +- Env Var: RCLONE_SWIFT_AUTH_TOKEN +- Type: string +- Required: false + +#### --swift-application-credential-id + Application Credential ID (OS_APPLICATION_CREDENTIAL_ID). -.PP + Properties: -.IP \[bu] 2 -Config: application_credential_id -.IP \[bu] 2 -Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --swift-application-credential-name -.PP + +- Config: application_credential_id +- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID +- Type: string +- Required: false + +#### --swift-application-credential-name + Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME). 
-.PP + Properties: -.IP \[bu] 2 -Config: application_credential_name -.IP \[bu] 2 -Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --swift-application-credential-secret -.PP + +- Config: application_credential_name +- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME +- Type: string +- Required: false + +#### --swift-application-credential-secret + Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET). -.PP + Properties: -.IP \[bu] 2 -Config: application_credential_secret -.IP \[bu] 2 -Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --swift-auth-version -.PP -AuthVersion - optional - set to (1,2,3) if your auth URL has no version -(ST_AUTH_VERSION). -.PP + +- Config: application_credential_secret +- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET +- Type: string +- Required: false + +#### --swift-auth-version + +AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION). + Properties: -.IP \[bu] 2 -Config: auth_version -.IP \[bu] 2 -Env Var: RCLONE_SWIFT_AUTH_VERSION -.IP \[bu] 2 -Type: int -.IP \[bu] 2 -Default: 0 -.SS --swift-endpoint-type -.PP + +- Config: auth_version +- Env Var: RCLONE_SWIFT_AUTH_VERSION +- Type: int +- Default: 0 + +#### --swift-endpoint-type + Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE). -.PP + Properties: -.IP \[bu] 2 -Config: endpoint_type -.IP \[bu] 2 -Env Var: RCLONE_SWIFT_ENDPOINT_TYPE -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]public\[dq] -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]public\[dq] -.RS 2 -.IP \[bu] 2 -Public (default, choose this if not sure) -.RE -.IP \[bu] 2 -\[dq]internal\[dq] -.RS 2 -.IP \[bu] 2 -Internal (use internal service net) -.RE -.IP \[bu] 2 -\[dq]admin\[dq] -.RS 2 -.IP \[bu] 2 -Admin -.RE -.RE -.SS --swift-storage-policy -.PP + +- Config: endpoint_type +- Env Var: RCLONE_SWIFT_ENDPOINT_TYPE +- Type: string +- Default: \[dq]public\[dq] +- Examples: + - \[dq]public\[dq] + - Public (default, choose this if not sure) + - \[dq]internal\[dq] + - Internal (use internal service net) + - \[dq]admin\[dq] + - Admin + +#### --swift-storage-policy + The storage policy to use when creating a new container. -.PP -This applies the specified storage policy when creating a new container. -The policy cannot be changed afterwards. -The allowed configuration values and their meaning depend on your Swift -storage provider. -.PP + +This applies the specified storage policy when creating a new +container. The policy cannot be changed afterwards. The allowed +configuration values and their meaning depend on your Swift storage +provider. + Properties: -.IP \[bu] 2 -Config: storage_policy -.IP \[bu] 2 -Env Var: RCLONE_SWIFT_STORAGE_POLICY -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]\[dq] -.RS 2 -.IP \[bu] 2 -Default -.RE -.IP \[bu] 2 -\[dq]pcs\[dq] -.RS 2 -.IP \[bu] 2 -OVH Public Cloud Storage -.RE -.IP \[bu] 2 -\[dq]pca\[dq] -.RS 2 -.IP \[bu] 2 -OVH Public Cloud Archive -.RE -.RE -.SS Advanced options -.PP -Here are the Advanced options specific to swift (OpenStack Swift -(Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)). 
-.SS --swift-leave-parts-on-error -.PP + +- Config: storage_policy +- Env Var: RCLONE_SWIFT_STORAGE_POLICY +- Type: string +- Required: false +- Examples: + - \[dq]\[dq] + - Default + - \[dq]pcs\[dq] + - OVH Public Cloud Storage + - \[dq]pca\[dq] + - OVH Public Cloud Archive + +### Advanced options + +Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)). + +#### --swift-leave-parts-on-error + If true avoid calling abort upload on a failure. -.PP + It should be set to true for resuming uploads across different sessions. -.PP + Properties: -.IP \[bu] 2 -Config: leave_parts_on_error -.IP \[bu] 2 -Env Var: RCLONE_SWIFT_LEAVE_PARTS_ON_ERROR -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --swift-chunk-size -.PP + +- Config: leave_parts_on_error +- Env Var: RCLONE_SWIFT_LEAVE_PARTS_ON_ERROR +- Type: bool +- Default: false + +#### --swift-chunk-size + Above this size files will be chunked into a _segments container. -.PP -Above this size files will be chunked into a _segments container. -The default for this is 5 GiB which is its maximum value. -.PP + +Above this size files will be chunked into a _segments container. The +default for this is 5 GiB which is its maximum value. + Properties: -.IP \[bu] 2 -Config: chunk_size -.IP \[bu] 2 -Env Var: RCLONE_SWIFT_CHUNK_SIZE -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 5Gi -.SS --swift-no-chunk -.PP + +- Config: chunk_size +- Env Var: RCLONE_SWIFT_CHUNK_SIZE +- Type: SizeSuffix +- Default: 5Gi + +#### --swift-no-chunk + Don\[aq]t chunk files during streaming upload. -.PP -When doing streaming uploads (e.g. -using rcat or mount) setting this flag will cause the swift backend to -not upload chunked files. -.PP -This will limit the maximum upload size to 5 GiB. -However non chunked files are easier to deal with and have an MD5SUM. -.PP + +When doing streaming uploads (e.g. using rcat or mount) setting this +flag will cause the swift backend to not upload chunked files. + +This will limit the maximum upload size to 5 GiB. However non chunked +files are easier to deal with and have an MD5SUM. + Rclone will still chunk files bigger than chunk_size when doing normal copy operations. -.PP + Properties: -.IP \[bu] 2 -Config: no_chunk -.IP \[bu] 2 -Env Var: RCLONE_SWIFT_NO_CHUNK -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --swift-no-large-objects -.PP + +- Config: no_chunk +- Env Var: RCLONE_SWIFT_NO_CHUNK +- Type: bool +- Default: false + +#### --swift-no-large-objects + Disable support for static and dynamic large objects -.PP -Swift cannot transparently store files bigger than 5 GiB. -There are two schemes for doing that, static or dynamic large objects, -and the API does not allow rclone to determine whether a file is a -static or dynamic large object without doing a HEAD on the object. -Since these need to be treated differently, this means rclone has to -issue HEAD requests for objects for example when reading checksums. -.PP -When \f[C]no_large_objects\f[R] is set, rclone will assume that there -are no static or dynamic large objects stored. -This means it can stop doing the extra HEAD calls which in turn -increases performance greatly especially when doing a swift to swift -transfer with \f[C]--checksum\f[R] set. -.PP -Setting this option implies \f[C]no_chunk\f[R] and also that no files -will be uploaded in chunks, so files bigger than 5 GiB will just fail on + +Swift cannot transparently store files bigger than 5 GiB. 
There are +two schemes for doing that, static or dynamic large objects, and the +API does not allow rclone to determine whether a file is a static or +dynamic large object without doing a HEAD on the object. Since these +need to be treated differently, this means rclone has to issue HEAD +requests for objects for example when reading checksums. + +When \[ga]no_large_objects\[ga] is set, rclone will assume that there are no +static or dynamic large objects stored. This means it can stop doing +the extra HEAD calls which in turn increases performance greatly +especially when doing a swift to swift transfer with \[ga]--checksum\[ga] set. + +Setting this option implies \[ga]no_chunk\[ga] and also that no files will be +uploaded in chunks, so files bigger than 5 GiB will just fail on upload. -.PP -If you set this option and there \f[I]are\f[R] static or dynamic large -objects, then this will give incorrect hashes for them. -Downloads will succeed, but other operations such as Remove and Copy -will fail. -.PP + +If you set this option and there *are* static or dynamic large objects, +then this will give incorrect hashes for them. Downloads will succeed, +but other operations such as Remove and Copy will fail. + + Properties: -.IP \[bu] 2 -Config: no_large_objects -.IP \[bu] 2 -Env Var: RCLONE_SWIFT_NO_LARGE_OBJECTS -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --swift-encoding -.PP + +- Config: no_large_objects +- Env Var: RCLONE_SWIFT_NO_LARGE_OBJECTS +- Type: bool +- Default: false + +#### --swift-encoding + The encoding for the backend. -.PP -See the encoding section in the -overview (https://rclone.org/overview/#encoding) for more info. -.PP + +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + Properties: -.IP \[bu] 2 -Config: encoding -.IP \[bu] 2 -Env Var: RCLONE_SWIFT_ENCODING -.IP \[bu] 2 -Type: MultiEncoder -.IP \[bu] 2 -Default: Slash,InvalidUtf8 -.SS Limitations -.PP + +- Config: encoding +- Env Var: RCLONE_SWIFT_ENCODING +- Type: MultiEncoder +- Default: Slash,InvalidUtf8 + + + +## Limitations + The Swift API doesn\[aq]t return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won\[aq]t check or use the MD5SUM for these. -.SS Troubleshooting -.SS Rclone gives Failed to create file system for \[dq]remote:\[dq]: Bad Request -.PP + +## Troubleshooting + +### Rclone gives Failed to create file system for \[dq]remote:\[dq]: Bad Request + Due to an oddity of the underlying swift library, it gives a \[dq]Bad Request\[dq] error rather than a more sensible error when the authentication fails for Swift. -.PP -So this most likely means your username / password is wrong. -You can investigate further with the \f[C]--dump-bodies\f[R] flag. -.PP + +So this most likely means your username / password is wrong. You can +investigate further with the \[ga]--dump-bodies\[ga] flag. + This may also be caused by specifying the region when you shouldn\[aq]t -have (e.g. -OVH). -.SS Rclone gives Failed to create file system: Response didn\[aq]t have storage url and auth token -.PP +have (e.g. OVH). + +### Rclone gives Failed to create file system: Response didn\[aq]t have storage url and auth token + This is most likely caused by forgetting to specify your tenant when setting up a swift remote. -.SS OVH Cloud Archive -.PP -To use rclone with OVH cloud archive, first use \f[C]rclone config\f[R] -to set up a \f[C]swift\f[R] backend with OVH, choosing \f[C]pca\f[R] as -the \f[C]storage_policy\f[R]. 
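+
+As a non-interactive alternative, such a remote can also be created with
+\[ga]rclone config create\[ga] - a minimal sketch, assuming credentials come from
+the standard OpenStack environment variables and using a hypothetical
+remote name \[ga]ovharchive\[ga]:
+
+    # env_auth pulls credentials from the OS_* environment variables
+    rclone config create ovharchive swift env_auth=true storage_policy=pca
+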
-.SS Uploading Objects -.PP -Uploading objects to OVH cloud archive is no different to object -storage, you just simply run the command you like (move, copy or sync) -to upload the objects. -Once uploaded the objects will show in a \[dq]Frozen\[dq] state within -the OVH control panel. -.SS Retrieving Objects -.PP -To retrieve objects use \f[C]rclone copy\f[R] as normal. -If the objects are in a frozen state then rclone will ask for them all -to be unfrozen and it will wait at the end of the output with a message -like the following: -.PP -\f[C]2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)\f[R] -.PP + +## OVH Cloud Archive + +To use rclone with OVH cloud archive, first use \[ga]rclone config\[ga] to set up a \[ga]swift\[ga] backend with OVH, choosing \[ga]pca\[ga] as the \[ga]storage_policy\[ga]. + +### Uploading Objects + +Uploading objects to OVH cloud archive is no different to object storage, you just simply run the command you like (move, copy or sync) to upload the objects. Once uploaded the objects will show in a \[dq]Frozen\[dq] state within the OVH control panel. + +### Retrieving Objects + +To retrieve objects use \[ga]rclone copy\[ga] as normal. If the objects are in a frozen state then rclone will ask for them all to be unfrozen and it will wait at the end of the output with a message like the following: + +\[ga]2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)\[ga] + Rclone will wait for the time specified then retry the copy. -.SH pCloud -.PP -Paths are specified as \f[C]remote:path\f[R] -.PP -Paths may be as deep as required, e.g. -\f[C]remote:directory/subdirectory\f[R]. -.SS Configuration -.PP -The initial setup for pCloud involves getting a token from pCloud which -you need to do in your browser. -\f[C]rclone config\f[R] walks you through it. -.PP -Here is an example of how to make a remote called \f[C]remote\f[R]. -First run: -.IP -.nf -\f[C] - rclone config -\f[R] -.fi -.PP + +# pCloud + +Paths are specified as \[ga]remote:path\[ga] + +Paths may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga]. + +## Configuration + +The initial setup for pCloud involves getting a token from pCloud which you +need to do in your browser. \[ga]rclone config\[ga] walks you through it. + +Here is an example of how to make a remote called \[ga]remote\[ga]. First run: + + rclone config + This will guide you through an interactive setup process: -.IP -.nf -\f[C] +\f[R] +.fi +.PP No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -name> remote -Type of storage to configure. -Choose a number from below, or type in your own value -[snip] -XX / Pcloud - \[rs] \[dq]pcloud\[dq] -[snip] -Storage> pcloud -Pcloud App Client Id - leave blank normally. -client_id> -Pcloud App Client Secret - leave blank normally. -client_secret> -Remote config -Use web browser to automatically authenticate rclone with remote? - * Say Y if the machine running rclone has a web browser you can use - * Say N if running rclone on a (remote) machine without web browser access -If not sure try Y. If Y failed, try N. -y) Yes -n) No -y/n> y -If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth -Log in and authorize rclone for access -Waiting for code... 
-Got code --------------------- -[remote] -client_id = -client_secret = -token = {\[dq]access_token\[dq]:\[dq]XXX\[dq],\[dq]token_type\[dq]:\[dq]bearer\[dq],\[dq]expiry\[dq]:\[dq]0001-01-01T00:00:00Z\[dq]} --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -\f[R] -.fi -.PP -See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. -.PP +n) New remote s) Set configuration password q) Quit config n/s/q> n +name> remote Type of storage to configure. +Choose a number from below, or type in your own value [snip] XX / Pcloud +\ \[dq]pcloud\[dq] [snip] Storage> pcloud Pcloud App Client Id - leave +blank normally. +client_id> Pcloud App Client Secret - leave blank normally. +client_secret> Remote config Use web browser to automatically +authenticate rclone with remote? +* Say Y if the machine running rclone has a web browser you can use * +Say N if running rclone on a (remote) machine without web browser access +If not sure try Y. +If Y failed, try N. +y) Yes n) No y/n> y If your browser doesn\[aq]t open automatically go to +the following link: http://127.0.0.1:53682/auth Log in and authorize +rclone for access Waiting for code... +Got code -------------------- [remote] client_id = client_secret = token += +{\[dq]access_token\[dq]:\[dq]XXX\[dq],\[dq]token_type\[dq]:\[dq]bearer\[dq],\[dq]expiry\[dq]:\[dq]0001-01-01T00:00:00Z\[dq]} +-------------------- y) Yes this is OK e) Edit this remote d) Delete +this remote y/e/d> y +.IP +.nf +\f[C] +See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a +machine with no Internet browser available. + Note that rclone runs a webserver on your local machine to collect the -token as returned from pCloud. -This only runs from the moment it opens your browser to the moment you -get back the verification code. -This is on \f[C]http://127.0.0.1:53682/\f[R] and this it may require you -to unblock it temporarily if you are running a host firewall. -.PP -Once configured you can then use \f[C]rclone\f[R] like this, -.PP +token as returned from pCloud. This only runs from the moment it opens +your browser to the moment you get back the verification code. This +is on \[ga]http://127.0.0.1:53682/\[ga] and this it may require you to unblock +it temporarily if you are running a host firewall. + +Once configured you can then use \[ga]rclone\[ga] like this, + List directories in top level of your pCloud -.IP -.nf -\f[C] -rclone lsd remote: -\f[R] -.fi -.PP + + rclone lsd remote: + List all the files in your pCloud -.IP -.nf -\f[C] -rclone ls remote: -\f[R] -.fi -.PP + + rclone ls remote: + To copy a local directory to a pCloud directory called backup -.IP -.nf -\f[C] -rclone copy /home/source remote:backup -\f[R] -.fi -.SS Modified time and hashes -.PP + + rclone copy /home/source remote:backup + +### Modified time and hashes ### + pCloud allows modification times to be set on objects accurate to 1 -second. -These will be used to detect whether objects need syncing or not. -In order to set a Modification time pCloud requires the object be -re-uploaded. -.PP -pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and -SHA256 hashes in the EU region, so you can use the \f[C]--checksum\f[R] -flag. -.SS Restricted filename characters -.PP -In addition to the default restricted characters -set (https://rclone.org/overview/#restricted-characters) the following -characters are also replaced: -.PP -.TS -tab(@); -l c c. 
-T{ -Character -T}@T{ -Value -T}@T{ -Replacement -T} -_ -T{ -\[rs] -T}@T{ -0x5C -T}@T{ -\[uFF3C] -T} -.TE -.PP -Invalid UTF-8 bytes will also be -replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t -be used in JSON strings. -.SS Deleting files -.PP -Deleted files will be moved to the trash. -Your subscription level will determine how long items stay in the trash. -\f[C]rclone cleanup\f[R] can be used to empty the trash. -.SS Emptying the trash -.PP -Due to an API limitation, the \f[C]rclone cleanup\f[R] command will only -work if you set your username and password in the advanced options for -this backend. -Since we generally want to avoid storing user passwords in the rclone -config file, we advise you to only set this up if you need the -\f[C]rclone cleanup\f[R] command to work. -.SS Root folder ID -.PP -You can set the \f[C]root_folder_id\f[R] for rclone. -This is the directory (identified by its \f[C]Folder ID\f[R]) that -rclone considers to be the root of your pCloud drive. -.PP -Normally you will leave this blank and rclone will determine the correct -root to use itself. -.PP +second. These will be used to detect whether objects need syncing or +not. In order to set a Modification time pCloud requires the object +be re-uploaded. + +pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and SHA256 +hashes in the EU region, so you can use the \[ga]--checksum\[ga] flag. + +### Restricted filename characters + +In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) +the following characters are also replaced: + +| Character | Value | Replacement | +| --------- |:-----:|:-----------:| +| \[rs] | 0x5C | \[uFF3C] | + +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), +as they can\[aq]t be used in JSON strings. + +### Deleting files + +Deleted files will be moved to the trash. Your subscription level +will determine how long items stay in the trash. \[ga]rclone cleanup\[ga] can +be used to empty the trash. + +### Emptying the trash + +Due to an API limitation, the \[ga]rclone cleanup\[ga] command will only work if you +set your username and password in the advanced options for this backend. +Since we generally want to avoid storing user passwords in the rclone config +file, we advise you to only set this up if you need the \[ga]rclone cleanup\[ga] command to work. + +### Root folder ID + +You can set the \[ga]root_folder_id\[ga] for rclone. This is the directory +(identified by its \[ga]Folder ID\[ga]) that rclone considers to be the root +of your pCloud drive. + +Normally you will leave this blank and rclone will determine the +correct root to use itself. + However you can set this to restrict rclone to a specific folder hierarchy. -.PP -In order to do this you will have to find the \f[C]Folder ID\f[R] of the -directory you wish rclone to display. -This will be the \f[C]folder\f[R] field of the URL when you open the -relevant folder in the pCloud web interface. -.PP + +In order to do this you will have to find the \[ga]Folder ID\[ga] of the +directory you wish rclone to display. This will be the \[ga]folder\[ga] field +of the URL when you open the relevant folder in the pCloud web +interface. + So if the folder you want rclone to use has a URL which looks like -\f[C]https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid\f[R] -in the browser, then you use \f[C]5xxxxxxxx8\f[R] as the -\f[C]root_folder_id\f[R] in the config. 
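+
+The resulting config section might then look like this (a sketch reusing
+the placeholder folder ID from above):
+
+    [remote]
+    type = pcloud
+    root_folder_id = 5xxxxxxxx8
+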
-.SS Standard options -.PP +\[ga]https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid\[ga] +in the browser, then you use \[ga]5xxxxxxxx8\[ga] as +the \[ga]root_folder_id\[ga] in the config. + + +### Standard options + Here are the Standard options specific to pcloud (Pcloud). -.SS --pcloud-client-id -.PP + +#### --pcloud-client-id + OAuth Client Id. -.PP + Leave blank normally. -.PP + Properties: -.IP \[bu] 2 -Config: client_id -.IP \[bu] 2 -Env Var: RCLONE_PCLOUD_CLIENT_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --pcloud-client-secret -.PP + +- Config: client_id +- Env Var: RCLONE_PCLOUD_CLIENT_ID +- Type: string +- Required: false + +#### --pcloud-client-secret + OAuth Client Secret. -.PP + Leave blank normally. -.PP + Properties: -.IP \[bu] 2 -Config: client_secret -.IP \[bu] 2 -Env Var: RCLONE_PCLOUD_CLIENT_SECRET -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS Advanced options -.PP + +- Config: client_secret +- Env Var: RCLONE_PCLOUD_CLIENT_SECRET +- Type: string +- Required: false + +### Advanced options + Here are the Advanced options specific to pcloud (Pcloud). -.SS --pcloud-token -.PP + +#### --pcloud-token + OAuth Access Token as a JSON blob. -.PP + Properties: -.IP \[bu] 2 -Config: token -.IP \[bu] 2 -Env Var: RCLONE_PCLOUD_TOKEN -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --pcloud-auth-url -.PP + +- Config: token +- Env Var: RCLONE_PCLOUD_TOKEN +- Type: string +- Required: false + +#### --pcloud-auth-url + Auth server URL. -.PP + Leave blank to use the provider defaults. -.PP + Properties: -.IP \[bu] 2 -Config: auth_url -.IP \[bu] 2 -Env Var: RCLONE_PCLOUD_AUTH_URL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --pcloud-token-url -.PP + +- Config: auth_url +- Env Var: RCLONE_PCLOUD_AUTH_URL +- Type: string +- Required: false + +#### --pcloud-token-url + Token server url. -.PP + Leave blank to use the provider defaults. -.PP + Properties: -.IP \[bu] 2 -Config: token_url -.IP \[bu] 2 -Env Var: RCLONE_PCLOUD_TOKEN_URL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --pcloud-encoding -.PP + +- Config: token_url +- Env Var: RCLONE_PCLOUD_TOKEN_URL +- Type: string +- Required: false + +#### --pcloud-encoding + The encoding for the backend. -.PP -See the encoding section in the -overview (https://rclone.org/overview/#encoding) for more info. -.PP + +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + Properties: -.IP \[bu] 2 -Config: encoding -.IP \[bu] 2 -Env Var: RCLONE_PCLOUD_ENCODING -.IP \[bu] 2 -Type: MultiEncoder -.IP \[bu] 2 -Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot -.SS --pcloud-root-folder-id -.PP + +- Config: encoding +- Env Var: RCLONE_PCLOUD_ENCODING +- Type: MultiEncoder +- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot + +#### --pcloud-root-folder-id + Fill in for rclone to use a non root folder as its starting point. -.PP + Properties: -.IP \[bu] 2 -Config: root_folder_id -.IP \[bu] 2 -Env Var: RCLONE_PCLOUD_ROOT_FOLDER_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]d0\[dq] -.SS --pcloud-hostname -.PP + +- Config: root_folder_id +- Env Var: RCLONE_PCLOUD_ROOT_FOLDER_ID +- Type: string +- Default: \[dq]d0\[dq] + +#### --pcloud-hostname + Hostname to connect to. -.PP + This is normally set when rclone initially does the oauth connection, however you will need to set it by hand if you are using remote config with rclone authorize. 
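+
+For example, to point an existing remote at the EU region by hand you
+could set the hostname in its config section (an illustrative sketch;
+the remote name is a placeholder):
+
+    [remote]
+    type = pcloud
+    hostname = eapi.pcloud.com
+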
-.PP + + Properties: -.IP \[bu] 2 -Config: hostname -.IP \[bu] 2 -Env Var: RCLONE_PCLOUD_HOSTNAME -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: \[dq]api.pcloud.com\[dq] -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]api.pcloud.com\[dq] -.RS 2 -.IP \[bu] 2 -Original/US region -.RE -.IP \[bu] 2 -\[dq]eapi.pcloud.com\[dq] -.RS 2 -.IP \[bu] 2 -EU region -.RE -.RE -.SS --pcloud-username -.PP + +- Config: hostname +- Env Var: RCLONE_PCLOUD_HOSTNAME +- Type: string +- Default: \[dq]api.pcloud.com\[dq] +- Examples: + - \[dq]api.pcloud.com\[dq] + - Original/US region + - \[dq]eapi.pcloud.com\[dq] + - EU region + +#### --pcloud-username + Your pcloud username. -.PP -This is only required when you want to use the cleanup command. -Due to a bug in the pcloud API the required API does not support OAuth -authentication so we have to rely on user password authentication for -it. -.PP + +This is only required when you want to use the cleanup command. Due to a bug +in the pcloud API the required API does not support OAuth authentication so +we have to rely on user password authentication for it. + Properties: -.IP \[bu] 2 -Config: username -.IP \[bu] 2 -Env Var: RCLONE_PCLOUD_USERNAME -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --pcloud-password -.PP + +- Config: username +- Env Var: RCLONE_PCLOUD_USERNAME +- Type: string +- Required: false + +#### --pcloud-password + Your pcloud password. -.PP -\f[B]NB\f[R] Input to this must be obscured - see rclone -obscure (https://rclone.org/commands/rclone_obscure/). -.PP + +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + Properties: -.IP \[bu] 2 -Config: password -.IP \[bu] 2 -Env Var: RCLONE_PCLOUD_PASSWORD -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SH PikPak -.PP -PikPak is a private cloud drive (https://mypikpak.com/). -.PP -Paths are specified as \f[C]remote:path\f[R], and may be as deep as -required, e.g. -\f[C]remote:directory/subdirectory\f[R]. -.SS Configuration -.PP + +- Config: password +- Env Var: RCLONE_PCLOUD_PASSWORD +- Type: string +- Required: false + + + +# PikPak + +PikPak is [a private cloud drive](https://mypikpak.com/). + +Paths are specified as \[ga]remote:path\[ga], and may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga]. + +## Configuration + Here is an example of making a remote for PikPak. -.PP + First run: -.IP -.nf -\f[C] - rclone config + + rclone config + +This will guide you through an interactive setup process: \f[R] .fi .PP -This will guide you through an interactive setup process: -.IP -.nf -\f[C] No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n - +n) New remote s) Set configuration password q) Quit config n/s/q> n +.PP Enter name for new remote. name> remote - +.PP Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. -XX / PikPak - \[rs] (pikpak) -Storage> XX - +XX / PikPak \ (pikpak) Storage> XX +.PP Option user. Pikpak username. Enter a value. user> USERNAME - +.PP Option pass. Pikpak password. Choose an alternative below. -y) Yes, type in my own password -g) Generate random password -y/g> y -Enter the password: -password: -Confirm the password: -password: - +y) Yes, type in my own password g) Generate random password y/g> y Enter +the password: password: Confirm the password: password: +.PP Edit advanced config? -y) Yes -n) No (default) -y/n> - +y) Yes n) No (default) y/n> +.PP Configuration complete. 
-Options: -- type: pikpak -- user: USERNAME -- pass: *** ENCRYPTED *** -- token: {\[dq]access_token\[dq]:\[dq]eyJ...\[dq],\[dq]token_type\[dq]:\[dq]Bearer\[dq],\[dq]refresh_token\[dq]:\[dq]os...\[dq],\[dq]expiry\[dq]:\[dq]2023-01-26T18:54:32.170582647+09:00\[dq]} +Options: - type: pikpak - user: USERNAME - pass: *** ENCRYPTED *** - +token: +{\[dq]access_token\[dq]:\[dq]eyJ...\[dq],\[dq]token_type\[dq]:\[dq]Bearer\[dq],\[dq]refresh_token\[dq]:\[dq]os...\[dq],\[dq]expiry\[dq]:\[dq]2023-01-26T18:54:32.170582647+09:00\[dq]} Keep this \[dq]remote\[dq] remote? -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote +y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y -\f[R] -.fi -.SS Standard options -.PP +.IP +.nf +\f[C] + +### Standard options + Here are the Standard options specific to pikpak (PikPak). -.SS --pikpak-user -.PP + +#### --pikpak-user + Pikpak username. -.PP + Properties: -.IP \[bu] 2 -Config: user -.IP \[bu] 2 -Env Var: RCLONE_PIKPAK_USER -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.SS --pikpak-pass -.PP + +- Config: user +- Env Var: RCLONE_PIKPAK_USER +- Type: string +- Required: true + +#### --pikpak-pass + Pikpak password. -.PP -\f[B]NB\f[R] Input to this must be obscured - see rclone -obscure (https://rclone.org/commands/rclone_obscure/). -.PP + +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + Properties: -.IP \[bu] 2 -Config: pass -.IP \[bu] 2 -Env Var: RCLONE_PIKPAK_PASS -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.SS Advanced options -.PP + +- Config: pass +- Env Var: RCLONE_PIKPAK_PASS +- Type: string +- Required: true + +### Advanced options + Here are the Advanced options specific to pikpak (PikPak). -.SS --pikpak-client-id -.PP + +#### --pikpak-client-id + OAuth Client Id. -.PP + Leave blank normally. -.PP + Properties: -.IP \[bu] 2 -Config: client_id -.IP \[bu] 2 -Env Var: RCLONE_PIKPAK_CLIENT_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --pikpak-client-secret -.PP + +- Config: client_id +- Env Var: RCLONE_PIKPAK_CLIENT_ID +- Type: string +- Required: false + +#### --pikpak-client-secret + OAuth Client Secret. -.PP + Leave blank normally. -.PP + Properties: -.IP \[bu] 2 -Config: client_secret -.IP \[bu] 2 -Env Var: RCLONE_PIKPAK_CLIENT_SECRET -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --pikpak-token -.PP + +- Config: client_secret +- Env Var: RCLONE_PIKPAK_CLIENT_SECRET +- Type: string +- Required: false + +#### --pikpak-token + OAuth Access Token as a JSON blob. -.PP + Properties: -.IP \[bu] 2 -Config: token -.IP \[bu] 2 -Env Var: RCLONE_PIKPAK_TOKEN -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --pikpak-auth-url -.PP + +- Config: token +- Env Var: RCLONE_PIKPAK_TOKEN +- Type: string +- Required: false + +#### --pikpak-auth-url + Auth server URL. -.PP + Leave blank to use the provider defaults. -.PP + Properties: -.IP \[bu] 2 -Config: auth_url -.IP \[bu] 2 -Env Var: RCLONE_PIKPAK_AUTH_URL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --pikpak-token-url -.PP + +- Config: auth_url +- Env Var: RCLONE_PIKPAK_AUTH_URL +- Type: string +- Required: false + +#### --pikpak-token-url + Token server url. -.PP + Leave blank to use the provider defaults. 
-.PP + Properties: -.IP \[bu] 2 -Config: token_url -.IP \[bu] 2 -Env Var: RCLONE_PIKPAK_TOKEN_URL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --pikpak-root-folder-id -.PP + +- Config: token_url +- Env Var: RCLONE_PIKPAK_TOKEN_URL +- Type: string +- Required: false + +#### --pikpak-root-folder-id + ID of the root folder. Leave blank normally. -.PP + Fill in for rclone to use a non root folder as its starting point. -.PP + + Properties: -.IP \[bu] 2 -Config: root_folder_id -.IP \[bu] 2 -Env Var: RCLONE_PIKPAK_ROOT_FOLDER_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --pikpak-use-trash -.PP + +- Config: root_folder_id +- Env Var: RCLONE_PIKPAK_ROOT_FOLDER_ID +- Type: string +- Required: false + +#### --pikpak-use-trash + Send files to the trash instead of deleting permanently. -.PP + Defaults to true, namely sending files to the trash. -Use \f[C]--pikpak-use-trash=false\f[R] to delete files permanently -instead. -.PP +Use \[ga]--pikpak-use-trash=false\[ga] to delete files permanently instead. + Properties: -.IP \[bu] 2 -Config: use_trash -.IP \[bu] 2 -Env Var: RCLONE_PIKPAK_USE_TRASH -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: true -.SS --pikpak-trashed-only -.PP + +- Config: use_trash +- Env Var: RCLONE_PIKPAK_USE_TRASH +- Type: bool +- Default: true + +#### --pikpak-trashed-only + Only show files that are in the trash. -.PP + This will show trashed files in their original directory structure. -.PP + Properties: -.IP \[bu] 2 -Config: trashed_only -.IP \[bu] 2 -Env Var: RCLONE_PIKPAK_TRASHED_ONLY -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --pikpak-hash-memory-limit -.PP -Files bigger than this will be cached on disk to calculate hash if -required. -.PP + +- Config: trashed_only +- Env Var: RCLONE_PIKPAK_TRASHED_ONLY +- Type: bool +- Default: false + +#### --pikpak-hash-memory-limit + +Files bigger than this will be cached on disk to calculate hash if required. + Properties: -.IP \[bu] 2 -Config: hash_memory_limit -.IP \[bu] 2 -Env Var: RCLONE_PIKPAK_HASH_MEMORY_LIMIT -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 10Mi -.SS --pikpak-encoding -.PP + +- Config: hash_memory_limit +- Env Var: RCLONE_PIKPAK_HASH_MEMORY_LIMIT +- Type: SizeSuffix +- Default: 10Mi + +#### --pikpak-encoding + The encoding for the backend. -.PP -See the encoding section in the -overview (https://rclone.org/overview/#encoding) for more info. -.PP + +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + Properties: -.IP \[bu] 2 -Config: encoding -.IP \[bu] 2 -Env Var: RCLONE_PIKPAK_ENCODING -.IP \[bu] 2 -Type: MultiEncoder -.IP \[bu] 2 -Default: -Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot -.SS Backend commands -.PP + +- Config: encoding +- Env Var: RCLONE_PIKPAK_ENCODING +- Type: MultiEncoder +- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot + +## Backend commands + Here are the commands specific to the pikpak backend. -.PP + Run them with -.IP -.nf -\f[C] -rclone backend COMMAND remote: -\f[R] -.fi -.PP + + rclone backend COMMAND remote: + The help below will explain what arguments each command takes. -.PP -See the backend (https://rclone.org/commands/rclone_backend/) command -for more info on how to pass options and arguments. -.PP + +See the [backend](https://rclone.org/commands/rclone_backend/) command for more +info on how to pass options and arguments. 
+ These can be run on a running backend using the rc command -backend/command (https://rclone.org/rc/#backend-command). -.SS addurl -.PP +[backend/command](https://rclone.org/rc/#backend-command). + +### addurl + Add offline download task for url -.IP -.nf -\f[C] -rclone backend addurl remote: [options] [+] -\f[R] -.fi -.PP + + rclone backend addurl remote: [options] [+] + This command adds offline download task for url. -.PP + Usage: -.IP -.nf -\f[C] -rclone backend addurl pikpak:dirpath url -\f[R] -.fi -.PP -Downloads will be stored in \[aq]dirpath\[aq]. -If \[aq]dirpath\[aq] is invalid, download will fallback to default -\[aq]My Pack\[aq] folder. -.SS decompress -.PP + + rclone backend addurl pikpak:dirpath url + +Downloads will be stored in \[aq]dirpath\[aq]. If \[aq]dirpath\[aq] is invalid, +download will fallback to default \[aq]My Pack\[aq] folder. + + +### decompress + Request decompress of a file/files in a folder -.IP -.nf -\f[C] -rclone backend decompress remote: [options] [+] -\f[R] -.fi -.PP + + rclone backend decompress remote: [options] [+] + This command requests decompress of file/files in a folder. -.PP + Usage: -.IP -.nf -\f[C] -rclone backend decompress pikpak:dirpath {filename} -o password=password -rclone backend decompress pikpak:dirpath {filename} -o delete-src-file -\f[R] -.fi -.PP -An optional argument \[aq]filename\[aq] can be specified for a file -located in \[aq]pikpak:dirpath\[aq]. -You may want to pass \[aq]-o password=password\[aq] for a -password-protected files. -Also, pass \[aq]-o delete-src-file\[aq] to delete source files after -decompression finished. -.PP + + rclone backend decompress pikpak:dirpath {filename} -o password=password + rclone backend decompress pikpak:dirpath {filename} -o delete-src-file + +An optional argument \[aq]filename\[aq] can be specified for a file located in +\[aq]pikpak:dirpath\[aq]. You may want to pass \[aq]-o password=password\[aq] for a +password-protected files. Also, pass \[aq]-o delete-src-file\[aq] to delete +source files after decompression finished. + Result: -.IP -.nf -\f[C] -{ - \[dq]Decompressed\[dq]: 17, - \[dq]SourceDeleted\[dq]: 0, - \[dq]Errors\[dq]: 0 -} -\f[R] -.fi -.SS Limitations -.SS Hashes -.PP -PikPak supports MD5 hash, but sometimes given empty especially for -user-uploaded files. -.SS Deleted files -.PP -Deleted files will still be visible with \f[C]--pikpak-trashed-only\f[R] -even after the trash emptied. -This goes away after few days. -.SH premiumize.me -.PP -Paths are specified as \f[C]remote:path\f[R] -.PP -Paths may be as deep as required, e.g. -\f[C]remote:directory/subdirectory\f[R]. -.SS Configuration -.PP -The initial setup for premiumize.me (https://premiumize.me/) involves -getting a token from premiumize.me which you need to do in your browser. -\f[C]rclone config\f[R] walks you through it. -.PP -Here is an example of how to make a remote called \f[C]remote\f[R]. -First run: -.IP -.nf -\f[C] - rclone config -\f[R] -.fi -.PP -This will guide you through an interactive setup process: -.IP -.nf -\f[C] -No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -name> remote -Type of storage to configure. -Enter a string value. Press Enter for the default (\[dq]\[dq]). 
-Choose a number from below, or type in your own value -[snip] -XX / premiumize.me - \[rs] \[dq]premiumizeme\[dq] -[snip] -Storage> premiumizeme -** See help for premiumizeme backend at: https://rclone.org/premiumizeme/ ** -Remote config -Use web browser to automatically authenticate rclone with remote? - * Say Y if the machine running rclone has a web browser you can use - * Say N if running rclone on a (remote) machine without web browser access -If not sure try Y. If Y failed, try N. -y) Yes -n) No -y/n> y -If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth -Log in and authorize rclone for access -Waiting for code... -Got code --------------------- -[remote] -type = premiumizeme -token = {\[dq]access_token\[dq]:\[dq]XXX\[dq],\[dq]token_type\[dq]:\[dq]Bearer\[dq],\[dq]refresh_token\[dq]:\[dq]XXX\[dq],\[dq]expiry\[dq]:\[dq]2029-08-07T18:44:15.548915378+01:00\[dq]} --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> + { + \[dq]Decompressed\[dq]: 17, + \[dq]SourceDeleted\[dq]: 0, + \[dq]Errors\[dq]: 0 + } + + + + +## Limitations ## + +### Hashes ### + +PikPak supports MD5 hash, but sometimes given empty especially for user-uploaded files. + +### Deleted files ### + +Deleted files will still be visible with \[ga]--pikpak-trashed-only\[ga] even after the trash emptied. This goes away after few days. + +# premiumize.me + +Paths are specified as \[ga]remote:path\[ga] + +Paths may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga]. + +## Configuration + +The initial setup for [premiumize.me](https://premiumize.me/) involves getting a token from premiumize.me which you +need to do in your browser. \[ga]rclone config\[ga] walks you through it. + +Here is an example of how to make a remote called \[ga]remote\[ga]. First run: + + rclone config + +This will guide you through an interactive setup process: \f[R] .fi .PP -See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. +No remotes found, make a new one? +n) New remote s) Set configuration password q) Quit config n/s/q> n +name> remote Type of storage to configure. +Enter a string value. +Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value [snip] XX / +premiumize.me \ \[dq]premiumizeme\[dq] [snip] Storage> premiumizeme ** +See help for premiumizeme backend at: https://rclone.org/premiumizeme/ +** .PP +Remote config Use web browser to automatically authenticate rclone with +remote? +* Say Y if the machine running rclone has a web browser you can use * +Say N if running rclone on a (remote) machine without web browser access +If not sure try Y. +If Y failed, try N. +y) Yes n) No y/n> y If your browser doesn\[aq]t open automatically go to +the following link: http://127.0.0.1:53682/auth Log in and authorize +rclone for access Waiting for code... +Got code -------------------- [remote] type = premiumizeme token = +{\[dq]access_token\[dq]:\[dq]XXX\[dq],\[dq]token_type\[dq]:\[dq]Bearer\[dq],\[dq]refresh_token\[dq]:\[dq]XXX\[dq],\[dq]expiry\[dq]:\[dq]2029-08-07T18:44:15.548915378+01:00\[dq]} +-------------------- y) Yes this is OK e) Edit this remote d) Delete +this remote y/e/d> +.IP +.nf +\f[C] +See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a +machine with no Internet browser available. 
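+
+In that case, on a machine which does have a browser, you can run the
+flow described in those docs (a sketch, following the remote setup
+docs):
+
+    rclone authorize \[dq]premiumizeme\[dq]
+
+and paste the token it prints into the config prompt on the headless
+machine.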
+
 Note that rclone runs a webserver on your local machine to collect the
-token as returned from premiumize.me.
-This only runs from the moment it opens your browser to the moment you
-get back the verification code.
-This is on \f[C]http://127.0.0.1:53682/\f[R] and this it may require you
-to unblock it temporarily if you are running a host firewall.
-.PP
-Once configured you can then use \f[C]rclone\f[R] like this,
-.PP
+token as returned from premiumize.me. This only runs from the moment it opens
+your browser to the moment you get back the verification code. This
+is on \[ga]http://127.0.0.1:53682/\[ga] and it may require you to unblock
+it temporarily if you are running a host firewall.
+
+Once configured you can then use \[ga]rclone\[ga] like this,
+
 List directories in top level of your premiumize.me
-.IP
-.nf
-\f[C]
-rclone lsd remote:
-\f[R]
-.fi
-.PP
+
+    rclone lsd remote:
+
 List all the files in your premiumize.me
-.IP
-.nf
-\f[C]
-rclone ls remote:
-\f[R]
-.fi
-.PP
+
+    rclone ls remote:
+
 To copy a local directory to a premiumize.me directory called backup
-.IP
-.nf
-\f[C]
-rclone copy /home/source remote:backup
-\f[R]
-.fi
-.SS Modified time and hashes
-.PP
+
+    rclone copy /home/source remote:backup
+
+### Modified time and hashes
+
 premiumize.me does not support modification times or hashes, therefore
-syncing will default to \f[C]--size-only\f[R] checking.
-Note that using \f[C]--update\f[R] will work.
-.SS Restricted filename characters
-.PP
-In addition to the default restricted characters
-set (https://rclone.org/overview/#restricted-characters) the following
-characters are also replaced:
-.PP
-.TS
-tab(@);
-l c c.
-T{
-Character
-T}@T{
-Value
-T}@T{
-Replacement
-T}
-_
-T{
-\[rs]
-T}@T{
-0x5C
-T}@T{
-\[uFF3C]
-T}
-T{
-\[dq]
-T}@T{
-0x22
-T}@T{
-\[uFF02]
-T}
-.TE
-.PP
-Invalid UTF-8 bytes will also be
-replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t
-be used in JSON strings.
-.SS Standard options
-.PP
+syncing will default to \[ga]--size-only\[ga] checking. Note that using
+\[ga]--update\[ga] will work.
+
+### Restricted filename characters
+
+In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
+the following characters are also replaced:
+
+| Character | Value | Replacement |
+| --------- |:-----:|:-----------:|
+| \[rs] | 0x5C | \[uFF3C] |
+| \[dq] | 0x22 | \[uFF02] |
+
+Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+as they can\[aq]t be used in JSON strings.
+
+
+### Standard options
+
 Here are the Standard options specific to premiumizeme (premiumize.me).
-.SS --premiumizeme-api-key
-.PP
+
+#### --premiumizeme-client-id
+
+OAuth Client Id.
+
+Leave blank normally.
+
+Properties:
+
+- Config: client_id
+- Env Var: RCLONE_PREMIUMIZEME_CLIENT_ID
+- Type: string
+- Required: false
+
+#### --premiumizeme-client-secret
+
+OAuth Client Secret.
+
+Leave blank normally.
+
+Properties:
+
+- Config: client_secret
+- Env Var: RCLONE_PREMIUMIZEME_CLIENT_SECRET
+- Type: string
+- Required: false
+
+#### --premiumizeme-api-key
+
 API Key.
-.PP
+
 This is not normally used - use oauth instead.
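+
+If you do use an API key, it can be supplied like any other backend
+option (an illustrative sketch; the key value is a placeholder):
+
+    rclone lsd remote: --premiumizeme-api-key XXXXXXXX
+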
-.PP + + Properties: -.IP \[bu] 2 -Config: api_key -.IP \[bu] 2 -Env Var: RCLONE_PREMIUMIZEME_API_KEY -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS Advanced options -.PP + +- Config: api_key +- Env Var: RCLONE_PREMIUMIZEME_API_KEY +- Type: string +- Required: false + +### Advanced options + Here are the Advanced options specific to premiumizeme (premiumize.me). -.SS --premiumizeme-encoding -.PP -The encoding for the backend. -.PP -See the encoding section in the -overview (https://rclone.org/overview/#encoding) for more info. -.PP + +#### --premiumizeme-token + +OAuth Access Token as a JSON blob. + Properties: -.IP \[bu] 2 -Config: encoding -.IP \[bu] 2 -Env Var: RCLONE_PREMIUMIZEME_ENCODING -.IP \[bu] 2 -Type: MultiEncoder -.IP \[bu] 2 -Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot -.SS Limitations -.PP -Note that premiumize.me is case insensitive so you can\[aq]t have a file -called \[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. -.PP -premiumize.me file names can\[aq]t have the \f[C]\[rs]\f[R] or -\f[C]\[dq]\f[R] characters in. + +- Config: token +- Env Var: RCLONE_PREMIUMIZEME_TOKEN +- Type: string +- Required: false + +#### --premiumizeme-auth-url + +Auth server URL. + +Leave blank to use the provider defaults. + +Properties: + +- Config: auth_url +- Env Var: RCLONE_PREMIUMIZEME_AUTH_URL +- Type: string +- Required: false + +#### --premiumizeme-token-url + +Token server url. + +Leave blank to use the provider defaults. + +Properties: + +- Config: token_url +- Env Var: RCLONE_PREMIUMIZEME_TOKEN_URL +- Type: string +- Required: false + +#### --premiumizeme-encoding + +The encoding for the backend. + +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + +Properties: + +- Config: encoding +- Env Var: RCLONE_PREMIUMIZEME_ENCODING +- Type: MultiEncoder +- Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot + + + +## Limitations + +Note that premiumize.me is case insensitive so you can\[aq]t have a file called +\[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. + +premiumize.me file names can\[aq]t have the \[ga]\[rs]\[ga] or \[ga]\[dq]\[ga] characters in. rclone maps these to and from an identical looking unicode equivalents -\f[C]\[uFF3C]\f[R] and \f[C]\[uFF02]\f[R] -.PP +\[ga]\[uFF3C]\[ga] and \[ga]\[uFF02]\[ga] + premiumize.me only supports filenames up to 255 characters in length. -.SH put.io -.PP -Paths are specified as \f[C]remote:path\f[R] -.PP -put.io paths may be as deep as required, e.g. -\f[C]remote:directory/subdirectory\f[R]. -.SS Configuration -.PP -The initial setup for put.io involves getting a token from put.io which -you need to do in your browser. -\f[C]rclone config\f[R] walks you through it. -.PP -Here is an example of how to make a remote called \f[C]remote\f[R]. -First run: -.IP -.nf -\f[C] - rclone config -\f[R] -.fi -.PP + +# Proton Drive + +[Proton Drive](https://proton.me/drive) is an end-to-end encrypted Swiss vault + for your files that protects your data. + +This is an rclone backend for Proton Drive which supports the file transfer +features of Proton Drive using the same client-side encryption. + +Due to the fact that Proton Drive doesn\[aq]t publish its API documentation, this +backend is implemented with best efforts by reading the open-sourced client +source code and observing the Proton Drive traffic in the browser. + +**NB** This backend is currently in Beta. It is believed to be correct +and all the integration tests pass. 
However, as the Proton Drive protocol
+has evolved over time, there may be accounts it is not compatible
+with. Please [post on the rclone forum](https://forum.rclone.org/) if
+you find an incompatibility.
+
+Paths are specified as \[ga]remote:path\[ga]
+
+Paths may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga].
+
+## Configurations
+
+Here is an example of how to make a remote called \[ga]remote\[ga]. First run:
+
+    rclone config
+
+This will guide you through an interactive setup process:
\f[R]
.fi
.PP
No remotes found, make a new one?
n) New remote s) Set configuration password q) Quit config n/s/q> n
name> remote Type of storage to configure.
Choose a number from below, or type in your own value [snip] XX / Proton
Drive \ \[dq]Proton Drive\[dq] [snip] Storage> protondrive User name
user> you\[at]protonmail.com Password.
y) Yes type in my own password g) Generate random password n) No leave
this optional password blank y/g/n> y Enter the password: password:
Confirm the password: password: Option 2fa.
2FA code (if the account requires one) Enter a value.
Press Enter to leave empty.
+
2fa> 123456 Remote config -------------------- [remote] type =
protondrive user = you\[at]protonmail.com pass = *** ENCRYPTED ***
-------------------- y) Yes this is OK e) Edit this remote d) Delete
this remote y/e/d> y
.IP
.nf
\f[C]
**NOTE:** The Proton Drive encryption keys need to have been already generated
after a regular login via the browser, otherwise attempting to use the
credentials in \[ga]rclone\[ga] will fail.

Once configured you can then use \[ga]rclone\[ga] like this,

List directories in top level of your Proton Drive

    rclone lsd remote:

List all the files in your Proton Drive

    rclone ls remote:

To copy a local directory to a Proton Drive directory called backup

    rclone copy /home/source remote:backup

### Modified time

Proton Drive Bridge does not support updating modification times yet.

### Restricted filename characters

Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), also left and
right spaces will be removed ([code reference](https://github.com/ProtonMail/WebClients/blob/b4eba99d241af4fdae06ff7138bd651a40ef5d3c/applications/drive/src/app/store/_links/validation.ts#L51))

### Duplicated files

Proton Drive cannot have two files with exactly the same name and path. If the
conflict occurs, depending on the advanced config, the file might or might not
be overwritten.

### [Mailbox password](https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password)

Please set your mailbox password in the advanced config section.

### Caching

The cache is currently built for the case when rclone is the only instance
performing operations to the mount point. The event system, which is the proton
API system that provides visibility of what has changed on the drive, is yet
to be implemented, so updates from other clients won\[cq]t be reflected in the
cache. Thus, if there are concurrent clients accessing the same mount point,
then we might have a problem with caching stale data.


### Standard options

Here are the Standard options specific to protondrive (Proton Drive).
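
As with other backends, each option below can also be set with its
environment variable instead of in the config file, for example (an
illustrative sketch):

    RCLONE_PROTONDRIVE_USERNAME=you\[at]protonmail.com rclone lsd remote: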
+
+#### --protondrive-username
+
+The username of your proton account
+
+Properties:
+
+- Config: username
+- Env Var: RCLONE_PROTONDRIVE_USERNAME
+- Type: string
+- Required: true
+
+#### --protondrive-password
+
+The password of your proton account.
+
+**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+Properties:
+
+- Config: password
+- Env Var: RCLONE_PROTONDRIVE_PASSWORD
+- Type: string
+- Required: true
+
+#### --protondrive-2fa
+
+The 2FA code
+
+The value can also be provided with --protondrive-2fa=000000
+
+The 2FA code of your proton drive account if the account is set up with
+two-factor authentication
+
+Properties:
+
+- Config: 2fa
+- Env Var: RCLONE_PROTONDRIVE_2FA
+- Type: string
+- Required: false
+
+### Advanced options
+
+Here are the Advanced options specific to protondrive (Proton Drive).
+
+#### --protondrive-mailbox-password
+
+The mailbox password of your two-password proton account.
+
+For more information regarding the mailbox password, please check the
+following official knowledge base article:
+https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password
+
+
+**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+Properties:
+
+- Config: mailbox_password
+- Env Var: RCLONE_PROTONDRIVE_MAILBOX_PASSWORD
+- Type: string
+- Required: false
+
+#### --protondrive-client-uid
+
+Client uid key (internal use only)
+
+Properties:
+
+- Config: client_uid
+- Env Var: RCLONE_PROTONDRIVE_CLIENT_UID
+- Type: string
+- Required: false
+
+#### --protondrive-client-access-token
+
+Client access token key (internal use only)
+
+Properties:
+
+- Config: client_access_token
+- Env Var: RCLONE_PROTONDRIVE_CLIENT_ACCESS_TOKEN
+- Type: string
+- Required: false
+
+#### --protondrive-client-refresh-token
+
+Client refresh token key (internal use only)
+
+Properties:
+
+- Config: client_refresh_token
+- Env Var: RCLONE_PROTONDRIVE_CLIENT_REFRESH_TOKEN
+- Type: string
+- Required: false
+
+#### --protondrive-client-salted-key-pass
+
+Client salted key pass key (internal use only)
+
+Properties:
+
+- Config: client_salted_key_pass
+- Env Var: RCLONE_PROTONDRIVE_CLIENT_SALTED_KEY_PASS
+- Type: string
+- Required: false
+
+#### --protondrive-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_PROTONDRIVE_ENCODING
+- Type: MultiEncoder
+- Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot
+
+#### --protondrive-original-file-size
+
+Return the file size before encryption
+
+The size of the encrypted file will be different from (bigger than) the
+original file size. Unless there is a reason to return the file size
+after encryption is performed, leave this option set to true, as
+features like Open(), which need to be supplied with the original content
+size, will fail to operate properly otherwise.
+
+Properties:
+
+- Config: original_file_size
+- Env Var: RCLONE_PROTONDRIVE_ORIGINAL_FILE_SIZE
+- Type: bool
+- Default: true
+
+#### --protondrive-app-version
+
+The app version string
+
+The app version string indicates the client that is currently performing
+the API request.
This information is required and will be sent with every
+API request.
+
+Properties:
+
+- Config: app_version
+- Env Var: RCLONE_PROTONDRIVE_APP_VERSION
+- Type: string
+- Default: \[dq]macos-drive\[at]1.0.0-alpha.1+rclone\[dq]
+
+#### --protondrive-replace-existing-draft
+
+Create a new revision when filename conflict is detected
+
+When a file upload is cancelled or failed before completion, a draft will be
+created and the subsequent upload of the same file to the same location will be
+reported as a conflict.
+
+The value can also be set by --protondrive-replace-existing-draft=true
+
+If the option is set to true, the draft will be replaced and then the upload
+operation will restart. If there are other clients also uploading at the same
+file location at the same time, the behavior is currently unknown. This must
+be set to true for the integration tests.
+If the option is set to false, an error \[dq]a draft exist - usually this means a
+file is being uploaded at another client, or, there was a failed upload attempt\[dq]
+will be returned, and no upload will happen.
+
+Properties:
+
+- Config: replace_existing_draft
+- Env Var: RCLONE_PROTONDRIVE_REPLACE_EXISTING_DRAFT
+- Type: bool
+- Default: false
+
+#### --protondrive-enable-caching
+
+Caches the files and folders metadata to reduce API calls
+
+Notice: If you are mounting ProtonDrive as a VFS, please disable this feature,
+as the current implementation doesn\[aq]t update or clear the cache when there are
+external changes.
+
+The files and folders on ProtonDrive are represented as links with keyrings,
+which can be cached to improve performance and be friendly to the API server.
+
+The cache is currently built for the case when rclone is the only instance
+performing operations to the mount point. The event system, which is the proton
+API system that provides visibility of what has changed on the drive, is yet
+to be implemented, so updates from other clients won\[cq]t be reflected in the
+cache. Thus, if there are concurrent clients accessing the same mount point,
+then we might have a problem with caching stale data.
+
+Properties:
+
+- Config: enable_caching
+- Env Var: RCLONE_PROTONDRIVE_ENABLE_CACHING
+- Type: bool
+- Default: true
+
+
+
+## Limitations
+
+This backend uses the
+[Proton-API-Bridge](https://github.com/henrybear327/Proton-API-Bridge), which
+is based on [go-proton-api](https://github.com/henrybear327/go-proton-api), a
+fork of the [official repo](https://github.com/ProtonMail/go-proton-api).
+
+There is no official API documentation available from Proton Drive. But, thanks
+to Proton open sourcing [proton-go-api](https://github.com/ProtonMail/go-proton-api)
+and the web, iOS, and Android client codebases, we don\[aq]t need to completely
+reverse engineer the APIs by observing the web client traffic!
+
+[proton-go-api](https://github.com/ProtonMail/go-proton-api) provides the basic
+building blocks of API calls and error handling, such as 429 exponential
+back-off, but it is pretty much just a barebone interface to the Proton API.
+For example, the encryption and decryption of the Proton Drive file are not
+provided in this library.
+
+The Proton-API-Bridge attempts to bridge the gap, so rclone can be built on
+top of this quickly. This codebase handles the intricate tasks before and after
+calling Proton APIs, particularly the complex encryption scheme, allowing
+developers to implement features for other software on top of this codebase.
+There are likely quite a few errors in this library, as there isn\[aq]t official
+documentation available.
+
+# put.io
+
+Paths are specified as \[ga]remote:path\[ga]
+
+put.io paths may be as deep as required, e.g.
+\[ga]remote:directory/subdirectory\[ga].
+
+## Configuration
+
+The initial setup for put.io involves getting a token from put.io
+which you need to do in your browser. \[ga]rclone config\[ga] walks you
+through it.
+
+Here is an example of how to make a remote called \[ga]remote\[ga]. First run:
+
+    rclone config
+
+This will guide you through an interactive setup process:
\f[R]
.fi
.PP
No remotes found, make a new one?
n) New remote s) Set configuration password q) Quit config n/s/q> n
name> putio Type of storage to configure.
Enter a string value.
Press Enter for the default (\[dq]\[dq]).
Choose a number from below, or type in your own value [snip] XX / Put.io
\ \[dq]putio\[dq] [snip] Storage> putio ** See help for putio backend
at: https://rclone.org/putio/ **
.PP
Remote config Use web browser to automatically authenticate rclone with
remote?
* Say Y if the machine running rclone has a web browser you can use *
Say N if running rclone on a (remote) machine without web browser access
If not sure try Y.
If Y failed, try N.
y) Yes n) No y/n> y If your browser doesn\[aq]t open automatically go to
the following link: http://127.0.0.1:53682/auth Log in and authorize
rclone for access Waiting for code...
Got code -------------------- [putio] type = putio token =
{\[dq]access_token\[dq]:\[dq]XXXXXXXX\[dq],\[dq]expiry\[dq]:\[dq]0001-01-01T00:00:00Z\[dq]}
-------------------- y) Yes this is OK e) Edit this remote d) Delete
this remote y/e/d> y Current remotes:
.PP
Name Type ==== ==== putio putio
.IP "e)" 3
Edit existing remote
.IP "n)" 3
New remote
.IP "d)" 3
Delete remote
.IP "r)" 3
Rename remote
.IP "c)" 3
Copy remote
.IP "s)" 3
Set configuration password
.IP "q)" 3
Quit config e/n/d/r/c/s/q> q
.IP
.nf
\f[C]
See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the
token as returned from put.io if using web browser to automatically
authenticate. This only
runs from the moment it opens your browser to the moment you get back
the verification code. This is on \[ga]http://127.0.0.1:53682/\[ga] and
it may require you to unblock it temporarily if you are running a host
firewall, or use manual mode.

You can then use it like this,

List directories in top level of your put.io

    rclone lsd remote:

List all the files in your put.io

    rclone ls remote:

To copy a local directory to a put.io directory called backup

    rclone copy /home/source remote:backup

### Restricted filename characters

In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
the following characters are also replaced:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| \[rs] | 0x5C | \[uFF3C] |

Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
as they can\[aq]t be used in JSON strings.


### Standard options

Here are the Standard options specific to putio (Put.io).

#### --putio-client-id

OAuth Client Id.

Leave blank normally.
+
+Properties:
+
+- Config: client_id
+- Env Var: RCLONE_PUTIO_CLIENT_ID
+- Type: string
+- Required: false
+
+#### --putio-client-secret
+
+OAuth Client Secret.
+
+Leave blank normally.
+
+Properties:
+
+- Config: client_secret
+- Env Var: RCLONE_PUTIO_CLIENT_SECRET
+- Type: string
+- Required: false
+
+### Advanced options
+
+Here are the Advanced options specific to putio (Put.io).
+
+#### --putio-token
+
+OAuth Access Token as a JSON blob.
+
+Properties:
+
+- Config: token
+- Env Var: RCLONE_PUTIO_TOKEN
+- Type: string
+- Required: false
+
+#### --putio-auth-url
+
+Auth server URL.
+
+Leave blank to use the provider defaults.
+
+Properties:
+
+- Config: auth_url
+- Env Var: RCLONE_PUTIO_AUTH_URL
+- Type: string
+- Required: false
+
+#### --putio-token-url
+
+Token server url.
+
+Leave blank to use the provider defaults.
+
+Properties:
+
+- Config: token_url
+- Env Var: RCLONE_PUTIO_TOKEN_URL
+- Type: string
+- Required: false
+
+#### --putio-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_PUTIO_ENCODING
+- Type: MultiEncoder
+- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+
+
+
+## Limitations
+
+put.io has rate limiting. When you hit a limit, rclone automatically
+retries after waiting the amount of time requested by the server.
+
+If you want to avoid ever hitting these limits, you may use the
+\[ga]--tpslimit\[ga] flag with a low number. Note that the imposed limits
+may be different for different operations, and may change over time.
+
+# Seafile
+
+This is a backend for the [Seafile](https://www.seafile.com/) storage service:
+- It works with both the free community edition and the professional edition.
+- Seafile versions 6.x, 7.x, 8.x and 9.x are all supported.
+- Encrypted libraries are also supported.
+- It supports 2FA enabled users
+- Using a Library API Token is **not** supported
+
+## Configuration
+
+There are two distinct modes you can set up your remote:
+- you point your remote to the **root of the server**, meaning you don\[aq]t specify a library during the configuration:
+Paths are specified as \[ga]remote:library\[ga].
You may put subdirectories in too, e.g. \[ga]remote:library/path/to/dir\[ga]. - you point your remote to a specific library during the configuration: -Paths are specified as \f[C]remote:path/to/dir\f[R]. -\f[B]This is the recommended mode when using encrypted libraries\f[R]. -(\f[I]This mode is possibly slightly faster than the root mode\f[R]) -.SS Configuration in root mode -.PP -Here is an example of making a seafile configuration for a user with -\f[B]no\f[R] two-factor authentication. -First run -.IP -.nf -\f[C] -rclone config -\f[R] -.fi -.PP -This will guide you through an interactive setup process. -To authenticate you will need the URL of your server, your email (or -username) and your password. -.IP -.nf -\f[C] -No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -name> seafile -Type of storage to configure. -Enter a string value. Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value -[snip] -XX / Seafile - \[rs] \[dq]seafile\[dq] -[snip] -Storage> seafile -** See help for seafile backend at: https://rclone.org/seafile/ ** +Paths are specified as \[ga]remote:path/to/dir\[ga]. **This is the recommended mode when using encrypted libraries**. (_This mode is possibly slightly faster than the root mode_) -URL of seafile host to connect to -Enter a string value. Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value - 1 / Connect to cloud.seafile.com - \[rs] \[dq]https://cloud.seafile.com/\[dq] -url> http://my.seafile.server/ -User name (usually email address) -Enter a string value. Press Enter for the default (\[dq]\[dq]). -user> me\[at]example.com -Password -y) Yes type in my own password -g) Generate random password -n) No leave this optional password blank (default) -y/g> y -Enter the password: -password: -Confirm the password: -password: -Two-factor authentication (\[aq]true\[aq] if the account has 2FA enabled) -Enter a boolean value (true or false). Press Enter for the default (\[dq]false\[dq]). -2fa> false -Name of the library. Leave blank to access all non-encrypted libraries. -Enter a string value. Press Enter for the default (\[dq]\[dq]). -library> -Library password (for encrypted libraries only). Leave blank if you pass it through the command line. -y) Yes type in my own password -g) Generate random password -n) No leave this optional password blank (default) -y/g/n> n -Edit advanced config? (y/n) -y) Yes -n) No (default) -y/n> n -Remote config -Two-factor authentication is not enabled on this account. --------------------- -[seafile] -type = seafile -url = http://my.seafile.server/ -user = me\[at]example.com -pass = *** ENCRYPTED *** -2fa = false --------------------- -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y +### Configuration in root mode + +Here is an example of making a seafile configuration for a user with **no** two-factor authentication. First run + + rclone config + +This will guide you through an interactive setup process. To authenticate +you will need the URL of your server, your email (or username) and your password. \f[R] .fi .PP -This remote is called \f[C]seafile\f[R]. -It\[aq]s pointing to the root of your seafile server and can now be used -like this: +No remotes found, make a new one? +n) New remote s) Set configuration password q) Quit config n/s/q> n +name> seafile Type of storage to configure. +Enter a string value. +Press Enter for the default (\[dq]\[dq]). 
+Choose a number from below, or type in your own value [snip] XX / +Seafile \ \[dq]seafile\[dq] [snip] Storage> seafile ** See help for +seafile backend at: https://rclone.org/seafile/ ** .PP +URL of seafile host to connect to Enter a string value. +Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value 1 / Connect to +cloud.seafile.com \ \[dq]https://cloud.seafile.com/\[dq] url> +http://my.seafile.server/ User name (usually email address) Enter a +string value. +Press Enter for the default (\[dq]\[dq]). +user> me\[at]example.com Password y) Yes type in my own password g) +Generate random password n) No leave this optional password blank +(default) y/g> y Enter the password: password: Confirm the password: +password: Two-factor authentication (\[aq]true\[aq] if the account has +2FA enabled) Enter a boolean value (true or false). +Press Enter for the default (\[dq]false\[dq]). +2fa> false Name of the library. +Leave blank to access all non-encrypted libraries. +Enter a string value. +Press Enter for the default (\[dq]\[dq]). +library> Library password (for encrypted libraries only). +Leave blank if you pass it through the command line. +y) Yes type in my own password g) Generate random password n) No leave +this optional password blank (default) y/g/n> n Edit advanced config? +(y/n) y) Yes n) No (default) y/n> n Remote config Two-factor +authentication is not enabled on this account. +-------------------- [seafile] type = seafile url = +http://my.seafile.server/ user = me\[at]example.com pass = *** ENCRYPTED +*** 2fa = false -------------------- y) Yes this is OK (default) e) Edit +this remote d) Delete this remote y/e/d> y +.IP +.nf +\f[C] +This remote is called \[ga]seafile\[ga]. It\[aq]s pointing to the root of your seafile server and can now be used like this: + See all libraries -.IP -.nf -\f[C] -rclone lsd seafile: -\f[R] -.fi -.PP -Create a new library -.IP -.nf -\f[C] -rclone mkdir seafile:library -\f[R] -.fi -.PP -List the contents of a library -.IP -.nf -\f[C] -rclone ls seafile:library -\f[R] -.fi -.PP -Sync \f[C]/home/local/directory\f[R] to the remote library, deleting any -excess files in the library. -.IP -.nf -\f[C] -rclone sync --interactive /home/local/directory seafile:library -\f[R] -.fi -.SS Configuration in library mode -.PP -Here\[aq]s an example of a configuration in library mode with a user -that has the two-factor authentication enabled. -Your 2FA code will be asked at the end of the configuration, and will -attempt to authenticate you: -.IP -.nf -\f[C] -No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -name> seafile -Type of storage to configure. -Enter a string value. Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value -[snip] -XX / Seafile - \[rs] \[dq]seafile\[dq] -[snip] -Storage> seafile -** See help for seafile backend at: https://rclone.org/seafile/ ** -URL of seafile host to connect to -Enter a string value. Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value - 1 / Connect to cloud.seafile.com - \[rs] \[dq]https://cloud.seafile.com/\[dq] -url> http://my.seafile.server/ -User name (usually email address) -Enter a string value. Press Enter for the default (\[dq]\[dq]). 
-user> me\[at]example.com -Password -y) Yes type in my own password -g) Generate random password -n) No leave this optional password blank (default) -y/g> y -Enter the password: -password: -Confirm the password: -password: -Two-factor authentication (\[aq]true\[aq] if the account has 2FA enabled) -Enter a boolean value (true or false). Press Enter for the default (\[dq]false\[dq]). -2fa> true -Name of the library. Leave blank to access all non-encrypted libraries. -Enter a string value. Press Enter for the default (\[dq]\[dq]). -library> My Library -Library password (for encrypted libraries only). Leave blank if you pass it through the command line. -y) Yes type in my own password -g) Generate random password -n) No leave this optional password blank (default) -y/g/n> n -Edit advanced config? (y/n) -y) Yes -n) No (default) -y/n> n -Remote config -Two-factor authentication: please enter your 2FA code -2fa code> 123456 -Authenticating... -Success! --------------------- -[seafile] -type = seafile -url = http://my.seafile.server/ -user = me\[at]example.com -pass = -2fa = true -library = My Library --------------------- -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y -\f[R] -.fi -.PP -You\[aq]ll notice your password is blank in the configuration. -It\[aq]s because we only need the password to authenticate you once. -.PP -You specified \f[C]My Library\f[R] during the configuration. -The root of the remote is pointing at the root of the library -\f[C]My Library\f[R]: -.PP -See all files in the library: -.IP -.nf -\f[C] -rclone lsd seafile: -\f[R] -.fi -.PP -Create a new directory inside the library -.IP -.nf -\f[C] -rclone mkdir seafile:directory -\f[R] -.fi -.PP -List the contents of a directory -.IP -.nf -\f[C] -rclone ls seafile:directory -\f[R] -.fi -.PP -Sync \f[C]/home/local/directory\f[R] to the remote library, deleting any + rclone lsd seafile: + +Create a new library + + rclone mkdir seafile:library + +List the contents of a library + + rclone ls seafile:library + +Sync \[ga]/home/local/directory\[ga] to the remote library, deleting any excess files in the library. + + rclone sync --interactive /home/local/directory seafile:library + +### Configuration in library mode + +Here\[aq]s an example of a configuration in library mode with a user that has the two-factor authentication enabled. Your 2FA code will be asked at the end of the configuration, and will attempt to authenticate you: +\f[R] +.fi +.PP +No remotes found, make a new one? +n) New remote s) Set configuration password q) Quit config n/s/q> n +name> seafile Type of storage to configure. +Enter a string value. +Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value [snip] XX / +Seafile \ \[dq]seafile\[dq] [snip] Storage> seafile ** See help for +seafile backend at: https://rclone.org/seafile/ ** +.PP +URL of seafile host to connect to Enter a string value. +Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value 1 / Connect to +cloud.seafile.com \ \[dq]https://cloud.seafile.com/\[dq] url> +http://my.seafile.server/ User name (usually email address) Enter a +string value. +Press Enter for the default (\[dq]\[dq]). 
+user> me\[at]example.com Password y) Yes type in my own password g) +Generate random password n) No leave this optional password blank +(default) y/g> y Enter the password: password: Confirm the password: +password: Two-factor authentication (\[aq]true\[aq] if the account has +2FA enabled) Enter a boolean value (true or false). +Press Enter for the default (\[dq]false\[dq]). +2fa> true Name of the library. +Leave blank to access all non-encrypted libraries. +Enter a string value. +Press Enter for the default (\[dq]\[dq]). +library> My Library Library password (for encrypted libraries only). +Leave blank if you pass it through the command line. +y) Yes type in my own password g) Generate random password n) No leave +this optional password blank (default) y/g/n> n Edit advanced config? +(y/n) y) Yes n) No (default) y/n> n Remote config Two-factor +authentication: please enter your 2FA code 2fa code> 123456 +Authenticating... +Success! -------------------- [seafile] type = seafile url = +http://my.seafile.server/ user = me\[at]example.com pass = 2fa = true +library = My Library -------------------- y) Yes this is OK (default) e) +Edit this remote d) Delete this remote y/e/d> y .IP .nf \f[C] -rclone sync --interactive /home/local/directory seafile: -\f[R] -.fi -.SS --fast-list -.PP -Seafile version 7+ supports \f[C]--fast-list\f[R] which allows you to -use fewer transactions in exchange for more memory. -See the rclone docs (https://rclone.org/docs/#fast-list) for more -details. +You\[aq]ll notice your password is blank in the configuration. It\[aq]s because we only need the password to authenticate you once. + +You specified \[ga]My Library\[ga] during the configuration. The root of the remote is pointing at the +root of the library \[ga]My Library\[ga]: + +See all files in the library: + + rclone lsd seafile: + +Create a new directory inside the library + + rclone mkdir seafile:directory + +List the contents of a directory + + rclone ls seafile:directory + +Sync \[ga]/home/local/directory\[ga] to the remote library, deleting any +excess files in the library. + + rclone sync --interactive /home/local/directory seafile: + + +### --fast-list + +Seafile version 7+ supports \[ga]--fast-list\[ga] which allows you to use fewer +transactions in exchange for more memory. See the [rclone +docs](https://rclone.org/docs/#fast-list) for more details. Please note this is not supported on seafile server version 6.x -.SS Restricted filename characters -.PP -In addition to the default restricted characters -set (https://rclone.org/overview/#restricted-characters) the following -characters are also replaced: -.PP -.TS -tab(@); -l c c. -T{ -Character -T}@T{ -Value -T}@T{ -Replacement -T} -_ -T{ -/ -T}@T{ -0x2F -T}@T{ -\[uFF0F] -T} -T{ -\[dq] -T}@T{ -0x22 -T}@T{ -\[uFF02] -T} -T{ -\[rs] -T}@T{ -0x5C -T}@T{ -\[uFF3C] -T} -.TE -.PP -Invalid UTF-8 bytes will also be -replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t -be used in JSON strings. -.SS Seafile and rclone link -.PP + + +### Restricted filename characters + +In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) +the following characters are also replaced: + +| Character | Value | Replacement | +| --------- |:-----:|:-----------:| +| / | 0x2F | \[uFF0F] | +| \[dq] | 0x22 | \[uFF02] | +| \[rs] | 0x5C | \[uFF3C] | + +Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), +as they can\[aq]t be used in JSON strings. 
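+
+As an illustration of the \[ga]--fast-list\[ga] option described above, the
+following lists a library using fewer transactions (the remote name
+\[ga]seafile:\[ga] follows the root mode configuration example and is
+illustrative only):
+
+    rclone ls --fast-list seafile:library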
+ +### Seafile and rclone link + Rclone supports generating share links for non-encrypted libraries only. They can either be for a file or a directory: -.IP -.nf -\f[C] +\f[R] +.fi +.PP rclone link seafile:seafile-tutorial.doc http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/ -\f[R] -.fi -.PP +.IP +.nf +\f[C] or if run on a directory you will get: -.IP -.nf -\f[C] -rclone link seafile:dir -http://my.seafile.server/d/9ea2455f6f55478bbb0d/ \f[R] .fi .PP -Please note a share link is unique for each file or directory. -If you run a link command on a file/dir that has already been shared, -you will get the exact same link. -.SS Compatibility -.PP -It has been actively developed using the seafile docker -image (https://github.com/haiwen/seafile-docker) of these versions: - -6.3.4 community edition - 7.0.5 community edition - 7.1.3 community -edition - 9.0.10 community edition -.PP +rclone link seafile:dir http://my.seafile.server/d/9ea2455f6f55478bbb0d/ +.IP +.nf +\f[C] +Please note a share link is unique for each file or directory. If you run a link command on a file/dir +that has already been shared, you will get the exact same link. + +### Compatibility + +It has been actively developed using the [seafile docker image](https://github.com/haiwen/seafile-docker) of these versions: +- 6.3.4 community edition +- 7.0.5 community edition +- 7.1.3 community edition +- 9.0.10 community edition + Versions below 6.0 are not supported. -Versions between 6.0 and 6.3 haven\[aq]t been tested and might not work -properly. -.PP -Each new version of \f[C]rclone\f[R] is automatically tested against the -latest docker image (https://hub.docker.com/r/seafileltd/seafile-mc/) of -the seafile community server. -.SS Standard options -.PP +Versions between 6.0 and 6.3 haven\[aq]t been tested and might not work properly. + +Each new version of \[ga]rclone\[ga] is automatically tested against the [latest docker image](https://hub.docker.com/r/seafileltd/seafile-mc/) of the seafile community server. + + +### Standard options + Here are the Standard options specific to seafile (seafile). -.SS --seafile-url -.PP + +#### --seafile-url + URL of seafile host to connect to. -.PP + Properties: -.IP \[bu] 2 -Config: url -.IP \[bu] 2 -Env Var: RCLONE_SEAFILE_URL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.IP \[bu] 2 -Examples: -.RS 2 -.IP \[bu] 2 -\[dq]https://cloud.seafile.com/\[dq] -.RS 2 -.IP \[bu] 2 -Connect to cloud.seafile.com. -.RE -.RE -.SS --seafile-user -.PP + +- Config: url +- Env Var: RCLONE_SEAFILE_URL +- Type: string +- Required: true +- Examples: + - \[dq]https://cloud.seafile.com/\[dq] + - Connect to cloud.seafile.com. + +#### --seafile-user + User name (usually email address). -.PP + Properties: -.IP \[bu] 2 -Config: user -.IP \[bu] 2 -Env Var: RCLONE_SEAFILE_USER -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: true -.SS --seafile-pass -.PP + +- Config: user +- Env Var: RCLONE_SEAFILE_USER +- Type: string +- Required: true + +#### --seafile-pass + Password. -.PP -\f[B]NB\f[R] Input to this must be obscured - see rclone -obscure (https://rclone.org/commands/rclone_obscure/). -.PP + +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + Properties: -.IP \[bu] 2 -Config: pass -.IP \[bu] 2 -Env Var: RCLONE_SEAFILE_PASS -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --seafile-2fa -.PP -Two-factor authentication (\[aq]true\[aq] if the account has 2FA -enabled). 
-.PP + +- Config: pass +- Env Var: RCLONE_SEAFILE_PASS +- Type: string +- Required: false + +#### --seafile-2fa + +Two-factor authentication (\[aq]true\[aq] if the account has 2FA enabled). + Properties: -.IP \[bu] 2 -Config: 2fa -.IP \[bu] 2 -Env Var: RCLONE_SEAFILE_2FA -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --seafile-library -.PP + +- Config: 2fa +- Env Var: RCLONE_SEAFILE_2FA +- Type: bool +- Default: false + +#### --seafile-library + Name of the library. -.PP + Leave blank to access all non-encrypted libraries. -.PP + Properties: -.IP \[bu] 2 -Config: library -.IP \[bu] 2 -Env Var: RCLONE_SEAFILE_LIBRARY -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --seafile-library-key -.PP + +- Config: library +- Env Var: RCLONE_SEAFILE_LIBRARY +- Type: string +- Required: false + +#### --seafile-library-key + Library password (for encrypted libraries only). -.PP + Leave blank if you pass it through the command line. -.PP -\f[B]NB\f[R] Input to this must be obscured - see rclone -obscure (https://rclone.org/commands/rclone_obscure/). -.PP + +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + Properties: -.IP \[bu] 2 -Config: library_key -.IP \[bu] 2 -Env Var: RCLONE_SEAFILE_LIBRARY_KEY -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --seafile-auth-token -.PP + +- Config: library_key +- Env Var: RCLONE_SEAFILE_LIBRARY_KEY +- Type: string +- Required: false + +#### --seafile-auth-token + Authentication token. -.PP + Properties: -.IP \[bu] 2 -Config: auth_token -.IP \[bu] 2 -Env Var: RCLONE_SEAFILE_AUTH_TOKEN -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS Advanced options -.PP + +- Config: auth_token +- Env Var: RCLONE_SEAFILE_AUTH_TOKEN +- Type: string +- Required: false + +### Advanced options + Here are the Advanced options specific to seafile (seafile). -.SS --seafile-create-library -.PP + +#### --seafile-create-library + Should rclone create a library if it doesn\[aq]t exist. -.PP + Properties: -.IP \[bu] 2 -Config: create_library -.IP \[bu] 2 -Env Var: RCLONE_SEAFILE_CREATE_LIBRARY -.IP \[bu] 2 -Type: bool -.IP \[bu] 2 -Default: false -.SS --seafile-encoding -.PP + +- Config: create_library +- Env Var: RCLONE_SEAFILE_CREATE_LIBRARY +- Type: bool +- Default: false + +#### --seafile-encoding + The encoding for the backend. -.PP -See the encoding section in the -overview (https://rclone.org/overview/#encoding) for more info. -.PP + +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + Properties: -.IP \[bu] 2 -Config: encoding -.IP \[bu] 2 -Env Var: RCLONE_SEAFILE_ENCODING -.IP \[bu] 2 -Type: MultiEncoder -.IP \[bu] 2 -Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8 -.SH SFTP -.PP -SFTP is the Secure (or SSH) File Transfer -Protocol (https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol). -.PP + +- Config: encoding +- Env Var: RCLONE_SEAFILE_ENCODING +- Type: MultiEncoder +- Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8 + + + +# SFTP + +SFTP is the [Secure (or SSH) File Transfer +Protocol](https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol). + The SFTP backend can be used with a number of different providers: -.IP \[bu] 2 -Hetzner Storage Box -.IP \[bu] 2 -rsync.net -.PP -SFTP runs over SSH v2 and is installed as standard with most modern SSH -installations. -.PP -Paths are specified as \f[C]remote:path\f[R]. -If the path does not begin with a \f[C]/\f[R] it is relative to the home -directory of the user. 
-An empty path \f[C]remote:\f[R] refers to the user\[aq]s home directory.
-For example, \f[C]rclone lsd remote:\f[R] would list the home directory
-of the user configured in the rclone remote config
-(\f[C]i.e /home/sftpuser\f[R]).
-However, \f[C]rclone lsd remote:/\f[R] would list the root directory for
-remote machine (i.e.
-\f[C]/\f[R])
-.PP
-Note that some SFTP servers will need the leading / - Synology is a good
-example of this.
-rsync.net and Hetzner, on the other hand, requires users to OMIT the
-leading /.
-.PP
-Note that by default rclone will try to execute shell commands on the
-server, see shell access considerations.
-.SS Configuration
-.PP
-Here is an example of making an SFTP configuration.
-First run
-.IP
-.nf
-\f[C]
-rclone config
-\f[R]
-.fi
-.PP
+
+
+- Hetzner Storage Box
+- rsync.net
+
+
+SFTP runs over SSH v2 and is installed as standard with most modern
+SSH installations.
+
+Paths are specified as \[ga]remote:path\[ga]. If the path does not begin with
+a \[ga]/\[ga] it is relative to the home directory of the user. An empty path
+\[ga]remote:\[ga] refers to the user\[aq]s home directory. For example, \[ga]rclone lsd remote:\[ga]
+would list the home directory of the user configured in the rclone remote config
+(i.e. \[ga]/home/sftpuser\[ga]). However, \[ga]rclone lsd remote:/\[ga] would list the root
+directory of the remote machine (i.e. \[ga]/\[ga]).
+
+Note that some SFTP servers will need the leading / - Synology is a
+good example of this. rsync.net and Hetzner, on the other hand, require users to
+OMIT the leading /.
+
+Note that by default rclone will try to execute shell commands on
+the server, see [shell access considerations](#shell-access-considerations).
+
+## Configuration
+
+Here is an example of making an SFTP configuration. First run
+
+    rclone config
+
+This will guide you through an interactive setup process.
\f[R]
.fi
.PP
No remotes found, make a new one?
n) New remote s) Set configuration password q) Quit config n/s/q> n
name> remote Type of storage to configure.
Choose a number from below, or type in your own value [snip] XX /
SSH/SFTP \ \[dq]sftp\[dq] [snip] Storage> sftp SSH host to connect to
Choose a number from below, or type in your own value 1 / Connect to
example.com \ \[dq]example.com\[dq] host> example.com SSH username Enter
a string value.
Press Enter for the default (\[dq]$USER\[dq]).
+user> sftpuser SSH port number Enter a signed integer. +Press Enter for the default (22). +port> SSH password, leave blank to use ssh-agent. +y) Yes type in my own password g) Generate random password n) No leave +this optional password blank y/g/n> n Path to unencrypted PEM-encoded +private key file, leave blank to use ssh-agent. +key_file> Remote config -------------------- [remote] host = example.com +user = sftpuser port = pass = key_file = -------------------- y) Yes +this is OK e) Edit this remote d) Delete this remote y/e/d> y +.IP +.nf +\f[C] +This remote is called \[ga]remote\[ga] and can now be used like this: + See all directories in the home directory -.IP -.nf -\f[C] -rclone lsd remote: -\f[R] -.fi -.PP + + rclone lsd remote: + See all directories in the root directory -.IP -.nf -\f[C] -rclone lsd remote:/ -\f[R] -.fi -.PP + + rclone lsd remote:/ + Make a new directory -.IP -.nf -\f[C] -rclone mkdir remote:path/to/directory -\f[R] -.fi -.PP + + rclone mkdir remote:path/to/directory + List the contents of a directory -.IP -.nf -\f[C] -rclone ls remote:path/to/directory -\f[R] -.fi -.PP -Sync \f[C]/home/local/directory\f[R] to the remote directory, deleting -any excess files in the directory. -.IP -.nf -\f[C] -rclone sync --interactive /home/local/directory remote:directory -\f[R] -.fi -.PP -Mount the remote path \f[C]/srv/www-data/\f[R] to the local path -\f[C]/mnt/www-data\f[R] -.IP -.nf -\f[C] -rclone mount remote:/srv/www-data/ /mnt/www-data -\f[R] -.fi -.SS SSH Authentication -.PP + + rclone ls remote:path/to/directory + +Sync \[ga]/home/local/directory\[ga] to the remote directory, deleting any +excess files in the directory. + + rclone sync --interactive /home/local/directory remote:directory + +Mount the remote path \[ga]/srv/www-data/\[ga] to the local path +\[ga]/mnt/www-data\[ga] + + rclone mount remote:/srv/www-data/ /mnt/www-data + +### SSH Authentication + The SFTP remote supports three authentication methods: -.IP \[bu] 2 -Password -.IP \[bu] 2 -Key file, including certificate signed keys -.IP \[bu] 2 -ssh-agent -.PP -Key files should be PEM-encoded private key files. -For instance \f[C]/home/$USER/.ssh/id_rsa\f[R]. + + * Password + * Key file, including certificate signed keys + * ssh-agent + +Key files should be PEM-encoded private key files. For instance \[ga]/home/$USER/.ssh/id_rsa\[ga]. Only unencrypted OpenSSH or PEM encrypted files are supported. -.PP -The key file can be specified in either an external file (key_file) or -contained within the rclone config file (key_pem). -If using key_pem in the config file, the entry should be on a single -line with new line (\[aq]\[aq] or \[aq]\[aq]) separating lines. -i.e. -.IP -.nf -\f[C] -key_pem = -----BEGIN RSA PRIVATE KEY-----\[rs]nMaMbaIXtE\[rs]n0gAMbMbaSsd\[rs]nMbaass\[rs]n-----END RSA PRIVATE KEY----- -\f[R] -.fi -.PP + +The key file can be specified in either an external file (key_file) or contained within the +rclone config file (key_pem). If using key_pem in the config file, the entry should be on a +single line with new line (\[aq]\[rs]n\[aq] or \[aq]\[rs]r\[rs]n\[aq]) separating lines. i.e. 
+ + key_pem = -----BEGIN RSA PRIVATE KEY-----\[rs]nMaMbaIXtE\[rs]n0gAMbMbaSsd\[rs]nMbaass\[rs]n-----END RSA PRIVATE KEY----- + This will generate it correctly for key_pem for use in the config: -.IP -.nf -\f[C] -awk \[aq]{printf \[dq]%s\[rs]\[rs]n\[dq], $0}\[aq] < \[ti]/.ssh/id_rsa -\f[R] -.fi -.PP -If you don\[aq]t specify \f[C]pass\f[R], \f[C]key_file\f[R], or -\f[C]key_pem\f[R] or \f[C]ask_password\f[R] then rclone will attempt to -contact an ssh-agent. -You can also specify \f[C]key_use_agent\f[R] to force the usage of an -ssh-agent. -In this case \f[C]key_file\f[R] or \f[C]key_pem\f[R] can also be -specified to force the usage of a specific key in the ssh-agent. -.PP -Using an ssh-agent is the only way to load encrypted OpenSSH keys at the -moment. -.PP -If you set the \f[C]ask_password\f[R] option, rclone will prompt for a -password when needed and no password has been configured. -.SS Certificate-signed keys -.PP -With traditional key-based authentication, you configure your private -key only, and the public key built into it will be used during the -authentication process. -.PP -If you have a certificate you may use it to sign your public key, -creating a separate SSH user certificate that should be used instead of -the plain public key extracted from the private key. -Then you must provide the path to the user certificate public key file -in \f[C]pubkey_file\f[R]. -.PP -Note: This is not the traditional public key paired with your private -key, typically saved as \f[C]/home/$USER/.ssh/id_rsa.pub\f[R]. -Setting this path in \f[C]pubkey_file\f[R] will not work. -.PP + + awk \[aq]{printf \[dq]%s\[rs]\[rs]n\[dq], $0}\[aq] < \[ti]/.ssh/id_rsa + +If you don\[aq]t specify \[ga]pass\[ga], \[ga]key_file\[ga], or \[ga]key_pem\[ga] or \[ga]ask_password\[ga] then +rclone will attempt to contact an ssh-agent. You can also specify \[ga]key_use_agent\[ga] +to force the usage of an ssh-agent. In this case \[ga]key_file\[ga] or \[ga]key_pem\[ga] can +also be specified to force the usage of a specific key in the ssh-agent. + +Using an ssh-agent is the only way to load encrypted OpenSSH keys at the moment. + +If you set the \[ga]ask_password\[ga] option, rclone will prompt for a password when +needed and no password has been configured. + +#### Certificate-signed keys + +With traditional key-based authentication, you configure your private key only, +and the public key built into it will be used during the authentication process. + +If you have a certificate you may use it to sign your public key, creating a +separate SSH user certificate that should be used instead of the plain public key +extracted from the private key. Then you must provide the path to the +user certificate public key file in \[ga]pubkey_file\[ga]. + +Note: This is not the traditional public key paired with your private key, +typically saved as \[ga]/home/$USER/.ssh/id_rsa.pub\[ga]. Setting this path in +\[ga]pubkey_file\[ga] will not work. + Example: -.IP -.nf -\f[C] -[remote] -type = sftp -host = example.com -user = sftpuser -key_file = \[ti]/id_rsa -pubkey_file = \[ti]/id_rsa-cert.pub \f[R] .fi .PP +[remote] type = sftp host = example.com user = sftpuser key_file = +\[ti]/id_rsa pubkey_file = \[ti]/id_rsa-cert.pub +.IP +.nf +\f[C] If you concatenate a cert with a private key then you can specify the merged file in both places. -.PP -Note: the cert must come first in the file. -e.g. -.IP -.nf -\f[C] + +Note: the cert must come first in the file. e.g. 
+ +\[ga]\[ga]\[ga] cat id_rsa-cert.pub id_rsa > merged_key -\f[R] -.fi -.SS Host key validation -.PP -By default rclone will not check the server\[aq]s host key for -validation. -This can allow an attacker to replace a server with their own and if you -use password authentication then this can lead to that password being -exposed. -.PP -Host key matching, using standard \f[C]known_hosts\f[R] files can be -turned on by enabling the \f[C]known_hosts_file\f[R] option. -This can point to the file maintained by \f[C]OpenSSH\f[R] or can point -to a unique file. -.PP -e.g. -using the OpenSSH \f[C]known_hosts\f[R] file: -.IP -.nf -\f[C] +\[ga]\[ga]\[ga] + +### Host key validation + +By default rclone will not check the server\[aq]s host key for validation. This +can allow an attacker to replace a server with their own and if you use +password authentication then this can lead to that password being exposed. + +Host key matching, using standard \[ga]known_hosts\[ga] files can be turned on by +enabling the \[ga]known_hosts_file\[ga] option. This can point to the file maintained +by \[ga]OpenSSH\[ga] or can point to a unique file. + +e.g. using the OpenSSH \[ga]known_hosts\[ga] file: + +\[ga]\[ga]\[ga] [remote] type = sftp host = example.com @@ -54834,6 +53836,50 @@ Env Var: RCLONE_SFTP_DISABLE_HASHCHECK Type: bool .IP \[bu] 2 Default: false +.SS --sftp-ssh +.PP +Path and arguments to external ssh binary. +.PP +Normally rclone will use its internal ssh library to connect to the SFTP +server. +However it does not implement all possible ssh options so it may be +desirable to use an external ssh binary. +.PP +Rclone ignores all the internal config if you use this option and +expects you to configure the ssh binary with the user/host/port and any +other options you need. +.PP +\f[B]Important\f[R] The ssh command must log in without asking for a +password so needs to be configured with keys or certificates. +.PP +Rclone will run the command supplied either with the additional +arguments \[dq]-s sftp\[dq] to access the SFTP subsystem or with +commands such as \[dq]md5sum /path/to/file\[dq] appended to read +checksums. +.PP +Any arguments with spaces in should be surrounded by \[dq]double +quotes\[dq]. +.PP +An example setting might be: +.IP +.nf +\f[C] +ssh -o ServerAliveInterval=20 user\[at]example.com +\f[R] +.fi +.PP +Note that when using an external ssh binary rclone makes a new ssh +connection for every hash it calculates. +.PP +Properties: +.IP \[bu] 2 +Config: ssh +.IP \[bu] 2 +Env Var: RCLONE_SFTP_SSH +.IP \[bu] 2 +Type: SpaceSepList +.IP \[bu] 2 +Default: .SS Advanced options .PP Here are the Advanced options specific to sftp (SSH/SFTP). @@ -54906,6 +53952,33 @@ rclone sync /home/local/directory remote:/home/directory --sftp-path-override /v \f[R] .fi .PP +To specify only the path to the SFTP remote\[aq]s root, and allow rclone +to add any relative subpaths automatically (including +unwrapping/decrypting remotes as necessary), add the \[aq]\[at]\[aq] +character to the beginning of the path. +.PP +E.g. +the first example above could be rewritten as: +.IP +.nf +\f[C] +rclone sync /home/local/directory remote:/directory --sftp-path-override \[at]/volume2 +\f[R] +.fi +.PP +Note that when using this method with Synology \[dq]home\[dq] folders, +the full \[dq]/homes/USER\[dq] path should be specified instead of +\[dq]/home\[dq]. +.PP +E.g. 
+the second example above should be rewritten as:
+.IP
+.nf
+\f[C]
+rclone sync /home/local/directory remote:/homes/USER/directory --sftp-path-override \[at]/volume1
+\f[R]
+.fi
+.PP
Properties:
.IP \[bu] 2
Config: path_override
@@ -55033,6 +54106,19 @@ Specifies the path or command to run a sftp server on the remote host.
.PP
The subsystem option is ignored when server_command is defined.
.PP
+If adding server_command to the configuration file please note that it
+should not be enclosed in quotes, since that will make rclone fail.
+.PP
+A working example is:
+.IP
+.nf
+\f[C]
+[remote_name]
+type = sftp
+server_command = sudo /usr/libexec/openssh/sftp-server
+\f[R]
+.fi
+.PP
Properties:
.IP \[bu] 2
Config: server_command
@@ -55324,6 +54410,30 @@ Env Var: RCLONE_SFTP_HOST_KEY_ALGORITHMS
Type: SpaceSepList
.IP \[bu] 2
Default:
+.SS --sftp-socks-proxy
+.PP
+Socks 5 proxy host.
+.PP
+Supports the format user:pass\[at]host:port, user\[at]host:port,
+host:port.
+.PP
+Example:
+.IP
+.nf
+\f[C]
+myUser:myPass\[at]localhost:9005
+\f[R]
+.fi
+.PP
+Properties:
+.IP \[bu] 2
+Config: socks_proxy
+.IP \[bu] 2
+Env Var: RCLONE_SFTP_SOCKS_PROXY
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
.SS Limitations
.PP
On some SFTP servers (e.g.
@@ -56781,22 +55891,26 @@ days.
used space can however be seen in the uptobox web interface.
.SH Union
.PP
-The \f[C]union\f[R] remote provides a unification similar to UnionFS
-using other remotes.
-.PP
-Paths may be as deep as required or a local path, e.g.
-\f[C]remote:directory/subdirectory\f[R] or
-\f[C]/directory/subdirectory\f[R].
+The \f[C]union\f[R] backend joins several remotes together to make a
+single unified view of them.
.PP
During the initial setup with \f[C]rclone config\f[R] you will specify
the upstream remotes as a space separated list.
The upstream remotes can either be local paths or other remotes.
.PP
-Attribute \f[C]:ro\f[R] and \f[C]:nc\f[R] can be attach to the end of
-path to tag the remote as \f[B]read only\f[R] or \f[B]no create\f[R],
-e.g.
+The attributes \f[C]:ro\f[R], \f[C]:nc\f[R] and \f[C]:writeback\f[R] can be
+attached to the end of the remote to tag the remote as \f[B]read
+only\f[R], \f[B]no create\f[R] or \f[B]writeback\f[R], e.g.
\f[C]remote:directory/subdirectory:ro\f[R] or
\f[C]remote:directory/subdirectory:nc\f[R].
+.IP \[bu] 2
+\f[C]:ro\f[R] means files will only be read from here and never written
+.IP \[bu] 2
+\f[C]:nc\f[R] means new files or directories won\[aq]t be created here
+.IP \[bu] 2
+\f[C]:writeback\f[R] means files found in different remotes will be
+written back here.
+See the writeback section for more info.
.PP
Subfolders can be used in upstream remotes.
Assume a union remote named \f[C]backup\f[R] with the remotes
@@ -56804,8 +55918,7 @@ Assume a union remote named \f[C]backup\f[R] with the remotes
Invoking \f[C]rclone mkdir backup:desktop\f[R] is exactly the same as
invoking \f[C]rclone mkdir mydrive:private/backup/desktop\f[R].
.PP
-There will be no special handling of paths containing \f[C]..\f[R]
-segments.
+There is no special handling of paths containing \f[C]..\f[R] segments.
Invoking \f[C]rclone mkdir backup:../desktop\f[R] is exactly the same as
invoking \f[C]rclone mkdir mydrive:private/backup/../desktop\f[R].
.SS Configuration
@@ -57150,6 +56263,40 @@ Calls \f[B]all\f[R] and then randomizes.
Returns only one upstream.
T} .TE +.SS Writeback +.PP +The tag \f[C]:writeback\f[R] on an upstream remote can be used to make a +simple cache system like this: +.IP +.nf +\f[C] +[union] +type = union +action_policy = all +create_policy = all +search_policy = ff +upstreams = /local:writeback remote:dir +\f[R] +.fi +.PP +When files are opened for read, if the file is in \f[C]remote:dir\f[R] +but not \f[C]/local\f[R] then rclone will copy the file entirely into +\f[C]/local\f[R] before returning a reference to the file in +\f[C]/local\f[R]. +The copy will be done with the equivalent of \f[C]rclone copy\f[R] so +will use \f[C]--multi-thread-streams\f[R] if configured. +Any copies will be logged with an INFO log. +.PP +When files are written, they will be written to both +\f[C]remote:dir\f[R] and \f[C]/local\f[R]. +.PP +As many remotes as desired can be added to \f[C]upstreams\f[R] but there +should only be one \f[C]:writeback\f[R] tag. +.PP +Rclone does not manage the \f[C]:writeback\f[R] remote in any way other +than writing files back to it. +So if you need to expire old files or manage the size then you will have +to do this yourself. .SS Standard options .PP Here are the Standard options specific to union (Union merges the @@ -59481,6 +58628,443 @@ Options: .IP \[bu] 2 \[dq]error\[dq]: return an error based on option value .SH Changelog +.SS v1.64.0 - 2023-09-11 +.PP +See commits (https://github.com/rclone/rclone/compare/v1.63.0...v1.64.0) +.IP \[bu] 2 +New backends +.RS 2 +.IP \[bu] 2 +Proton Drive (https://rclone.org/protondrive/) (Chun-Hung Tseng) +.IP \[bu] 2 +Quatrix (https://rclone.org/quatrix/) (Oksana, Volodymyr Kit) +.IP \[bu] 2 +New S3 providers +.RS 2 +.IP \[bu] 2 +Synology C2 (https://rclone.org/s3/#synology-c2) (BakaWang) +.IP \[bu] 2 +Leviia (https://rclone.org/s3/#leviia) (Benjamin) +.RE +.IP \[bu] 2 +New Jottacloud providers +.RS 2 +.IP \[bu] 2 +Onlime (https://rclone.org/jottacloud/) (Fjodor42) +.IP \[bu] 2 +Telia Sky (https://rclone.org/jottacloud/) (NoLooseEnds) +.RE +.RE +.IP \[bu] 2 +Major changes +.RS 2 +.IP \[bu] 2 +Multi-thread transfers (Vitor Gomes, Nick Craig-Wood, Manoj Ghosh, Edwin +Mackenzie-Owen) +.RS 2 +.IP \[bu] 2 +Multi-thread transfers are now available when transferring to: +.RS 2 +.IP \[bu] 2 +\f[C]local\f[R], \f[C]s3\f[R], \f[C]azureblob\f[R], \f[C]b2\f[R], +\f[C]oracleobjectstorage\f[R] and \f[C]smb\f[R] +.RE +.IP \[bu] 2 +This greatly improves transfer speed between two network sources. +.IP \[bu] 2 +In memory buffering has been unified between all backends and should +share memory better. 
+.IP \[bu] 2 +See --multi-thread docs (https://rclone.org/docs/#multi-thread-cutoff) +for more info +.RE +.RE +.IP \[bu] 2 +New commands +.RS 2 +.IP \[bu] 2 +\f[C]rclone config redacted\f[R] support mechanism for showing redacted +config (Nick Craig-Wood) +.RE +.IP \[bu] 2 +New Features +.RS 2 +.IP \[bu] 2 +accounting +.RS 2 +.IP \[bu] 2 +Show server side stats in own lines and not as bytes transferred (Nick +Craig-Wood) +.RE +.IP \[bu] 2 +bisync +.RS 2 +.IP \[bu] 2 +Add new \f[C]--ignore-listing-checksum\f[R] flag to distinguish from +\f[C]--ignore-checksum\f[R] (nielash) +.IP \[bu] 2 +Add experimental \f[C]--resilient\f[R] mode to allow recovery from +self-correctable errors (nielash) +.IP \[bu] 2 +Add support for \f[C]--create-empty-src-dirs\f[R] (nielash) +.IP \[bu] 2 +Dry runs no longer commit filter changes (nielash) +.IP \[bu] 2 +Enforce \f[C]--check-access\f[R] during \f[C]--resync\f[R] (nielash) +.IP \[bu] 2 +Apply filters correctly during deletes (nielash) +.IP \[bu] 2 +Equality check before renaming (leave identical files alone) (nielash) +.IP \[bu] 2 +Fix \f[C]dryRun\f[R] rc parameter being ignored (nielash) +.RE +.IP \[bu] 2 +build +.RS 2 +.IP \[bu] 2 +Update to \f[C]go1.21\f[R] and make \f[C]go1.19\f[R] the minimum +required version (Anagh Kumar Baranwal, Nick Craig-Wood) +.IP \[bu] 2 +Update dependencies (Nick Craig-Wood) +.IP \[bu] 2 +Add snap installation (hideo aoyama) +.IP \[bu] 2 +Change Winget Releaser job to \f[C]ubuntu-latest\f[R] (sitiom) +.RE +.IP \[bu] 2 +cmd: Refactor and use sysdnotify in more commands (eNV25) +.IP \[bu] 2 +config: Add \f[C]--multi-thread-chunk-size\f[R] flag (Vitor Gomes) +.IP \[bu] 2 +doc updates (antoinetran, Benjamin, Bj\[/o]rn Smith, Dean Attali, +gabriel-suela, James Braza, Justin Hellings, kapitainsky, Mahad, +Masamune3210, Nick Craig-Wood, Nihaal Sangha, Niklas Hamb\[:u]chen, +Raymond Berger, r-ricci, Sawada Tsunayoshi, Tiago Boeing, Vladislav +Vorobev) +.IP \[bu] 2 +fs +.RS 2 +.IP \[bu] 2 +Use atomic types everywhere (Roberto Ricci) +.IP \[bu] 2 +When \f[C]--max-transfer\f[R] limit is reached exit with code (10) +(kapitainsky) +.IP \[bu] 2 +Add rclone completion powershell - basic implementation only (Nick +Craig-Wood) +.RE +.IP \[bu] 2 +http servers: Allow CORS to be set with \f[C]--allow-origin\f[R] flag +(yuudi) +.IP \[bu] 2 +lib/rest: Remove unnecessary \f[C]nil\f[R] check (Eng Zer Jun) +.IP \[bu] 2 +ncdu: Add keybinding to rescan filesystem (eNV25) +.IP \[bu] 2 +rc +.RS 2 +.IP \[bu] 2 +Add \f[C]executeId\f[R] to job listings (yuudi) +.IP \[bu] 2 +Add \f[C]core/du\f[R] to measure local disk usage (Nick Craig-Wood) +.IP \[bu] 2 +Add \f[C]operations/settier\f[R] to API (Drew Stinnett) +.RE +.IP \[bu] 2 +rclone test info: Add \f[C]--check-base32768\f[R] flag to check can +store all base32768 characters (Nick Craig-Wood) +.IP \[bu] 2 +rmdirs: Remove directories concurrently controlled by +\f[C]--checkers\f[R] (Nick Craig-Wood) +.RE +.IP \[bu] 2 +Bug Fixes +.RS 2 +.IP \[bu] 2 +accounting: Don\[aq]t stop calculating average transfer speed until the +operation is complete (Jacob Hands) +.IP \[bu] 2 +fs: Fix \f[C]transferTime\f[R] not being set in JSON logs (Jacob Hands) +.IP \[bu] 2 +fshttp: Fix \f[C]--bind 0.0.0.0\f[R] allowing IPv6 and +\f[C]--bind ::0\f[R] allowing IPv4 (Nick Craig-Wood) +.IP \[bu] 2 +operations: Fix overlapping check on case insensitive file systems (Nick +Craig-Wood) +.IP \[bu] 2 +serve dlna: Fix MIME type if backend can\[aq]t identify it (Nick +Craig-Wood) +.IP \[bu] 2 +serve ftp: Fix race condition when using the 
auth proxy (Nick +Craig-Wood) +.IP \[bu] 2 +serve sftp: Fix hash calculations with \f[C]--vfs-cache-mode full\f[R] +(Nick Craig-Wood) +.IP \[bu] 2 +serve webdav: Fix error: Expecting fs.Object or fs.Directory, got +\f[C]nil\f[R] (Nick Craig-Wood) +.IP \[bu] 2 +sync: Fix lockup with \f[C]--cutoff-mode=soft\f[R] and +\f[C]--max-duration\f[R] (Nick Craig-Wood) +.RE +.IP \[bu] 2 +Mount +.RS 2 +.IP \[bu] 2 +fix: Mount parsing for linux (Anagh Kumar Baranwal) +.RE +.IP \[bu] 2 +VFS +.RS 2 +.IP \[bu] 2 +Add \f[C]--vfs-cache-min-free-space\f[R] to control minimum free space +on the disk containing the cache (Nick Craig-Wood) +.IP \[bu] 2 +Added cache cleaner for directories to reduce memory usage (Anagh Kumar +Baranwal) +.IP \[bu] 2 +Update parent directory modtimes on vfs actions (David Pedersen) +.IP \[bu] 2 +Keep virtual directory status accurate and reduce deadlock potential +(Anagh Kumar Baranwal) +.IP \[bu] 2 +Make sure struct field is aligned for atomic access (Roberto Ricci) +.RE +.IP \[bu] 2 +Local +.RS 2 +.IP \[bu] 2 +Rmdir return an error if the path is not a dir (zjx20) +.RE +.IP \[bu] 2 +Azure Blob +.RS 2 +.IP \[bu] 2 +Implement \f[C]OpenChunkWriter\f[R] and multi-thread uploads (Nick +Craig-Wood) +.IP \[bu] 2 +Fix creation of directory markers (Nick Craig-Wood) +.IP \[bu] 2 +Fix purging with directory markers (Nick Craig-Wood) +.RE +.IP \[bu] 2 +B2 +.RS 2 +.IP \[bu] 2 +Implement \f[C]OpenChunkWriter\f[R] and multi-thread uploads (Nick +Craig-Wood) +.IP \[bu] 2 +Fix rclone link when object path contains special characters (Alishan +Ladhani) +.RE +.IP \[bu] 2 +Box +.RS 2 +.IP \[bu] 2 +Add polling support (David Sze) +.IP \[bu] 2 +Add \f[C]--box-impersonate\f[R] to impersonate a user ID (Nick +Craig-Wood) +.IP \[bu] 2 +Fix unhelpful decoding of error messages into decimal numbers (Nick +Craig-Wood) +.RE +.IP \[bu] 2 +Chunker +.RS 2 +.IP \[bu] 2 +Update documentation to mention issue with small files (Ricardo D\[aq]O. 
+Albanus)
+.RE
+.IP \[bu] 2
+Compress
+.RS 2
+.IP \[bu] 2
+Fix ChangeNotify (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Drive
+.RS 2
+.IP \[bu] 2
+Add \f[C]--drive-fast-list-bug-fix\f[R] to control ListR bug workaround
+(Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Fichier
+.RS 2
+.IP \[bu] 2
+Implement \f[C]DirMove\f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+Fix error code parsing (alexia)
+.RE
+.IP \[bu] 2
+FTP
+.RS 2
+.IP \[bu] 2
+Add socks_proxy support for SOCKS5 proxies (Zach)
+.IP \[bu] 2
+Fix 425 \[dq]TLS session of data connection not resumed\[dq] errors
+(Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Hdfs
+.RS 2
+.IP \[bu] 2
+Retry \[dq]replication in progress\[dq] errors when uploading (Nick
+Craig-Wood)
+.IP \[bu] 2
+Fix uploading to the wrong object on Update with overridden remote name
+(Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+HTTP
+.RS 2
+.IP \[bu] 2
+CORS should not be sent if not set (yuudi)
+.IP \[bu] 2
+Fix webdav OPTIONS response (yuudi)
+.RE
+.IP \[bu] 2
+Opendrive
+.RS 2
+.IP \[bu] 2
+Fix List on a just deleted and remade directory (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Oracleobjectstorage
+.RS 2
+.IP \[bu] 2
+Use rclone\[aq]s rate limiter in multipart transfers (Manoj Ghosh)
+.IP \[bu] 2
+Implement \f[C]OpenChunkWriter\f[R] and multi-thread uploads (Manoj
+Ghosh)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Refactor multipart upload to use \f[C]OpenChunkWriter\f[R] and
+\f[C]ChunkWriter\f[R] (Vitor Gomes)
+.IP \[bu] 2
+Factor generic multipart upload into \f[C]lib/multipart\f[R] (Nick
+Craig-Wood)
+.IP \[bu] 2
+Fix purging of root directory with \f[C]--s3-directory-markers\f[R]
+(Nick Craig-Wood)
+.IP \[bu] 2
+Add \f[C]rclone backend set\f[R] command to update the running config
+(Nick Craig-Wood)
+.IP \[bu] 2
+Add \f[C]rclone backend restore-status\f[R] command (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+SFTP
+.RS 2
+.IP \[bu] 2
+Stop uploads re-using the same ssh connection to improve performance
+(Nick Craig-Wood)
+.IP \[bu] 2
+Add \f[C]--sftp-ssh\f[R] to specify an external ssh binary to use (Nick
+Craig-Wood)
+.IP \[bu] 2
+Add socks_proxy support for SOCKS5 proxies (Zach)
+.IP \[bu] 2
+Support dynamic \f[C]--sftp-path-override\f[R] (nielash)
+.IP \[bu] 2
+Fix spurious warning when using \f[C]--sftp-ssh\f[R] (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Smb
+.RS 2
+.IP \[bu] 2
+Implement multi-threaded writes for copies to smb (Edwin Mackenzie-Owen)
+.RE
+.IP \[bu] 2
+Storj
+.RS 2
+.IP \[bu] 2
+Performance improvement for large file uploads (Kaloyan Raev)
+.RE
+.IP \[bu] 2
+Swift
+.RS 2
+.IP \[bu] 2
+Fix HEADing 0-length objects when \f[C]--swift-no-large-objects\f[R] set
+(Julian Lepinski)
+.RE
+.IP \[bu] 2
+Union
+.RS 2
+.IP \[bu] 2
+Add \f[C]:writeback\f[R] to act as a simple cache (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+WebDAV
+.RS 2
+.IP \[bu] 2
+Nextcloud: fix segment violation in low-level retry (Paul)
+.RE
+.IP \[bu] 2
+Zoho
+.RS 2
+.IP \[bu] 2
+Remove Range requests workarounds to fix integration tests (Nick
+Craig-Wood)
+.RE
+.SS v1.63.1 - 2023-07-17
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.63.0...v1.63.1)
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+build: Fix macos builds for versions < 12 (Anagh Kumar Baranwal)
+.IP \[bu] 2
+dirtree: Fix performance with large directories of directories and
+\f[C]--fast-list\f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+operations
+.RS 2
+.IP \[bu] 2
+Fix deadlock when using \f[C]lsd\f[R]/\f[C]ls\f[R] with
+\f[C]--progress\f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+Fix \f[C].rclonelink\f[R] files not being converted back to symlinks
+(Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+doc fixes (Dean Attali, Mahad, Nick Craig-Wood, Sawada Tsunayoshi, +Vladislav Vorobev) +.RE +.IP \[bu] 2 +Local +.RS 2 +.IP \[bu] 2 +Fix partial directory read for corrupted filesystem (Nick Craig-Wood) +.RE +.IP \[bu] 2 +Box +.RS 2 +.IP \[bu] 2 +Fix reconnect failing with HTTP 400 Bad Request (albertony) +.RE +.IP \[bu] 2 +Smb +.RS 2 +.IP \[bu] 2 +Fix \[dq]Statfs failed: bucket or container name is needed\[dq] when +mounting (Nick Craig-Wood) +.RE +.IP \[bu] 2 +WebDAV +.RS 2 +.IP \[bu] 2 +Nextcloud: fix must use /dav/files/USER endpoint not /webdav error +(Paul) +.IP \[bu] 2 +Nextcloud chunking: add more guidance for the user to check the config +(darix) +.RE .SS v1.63.0 - 2023-06-30 .PP See commits (https://github.com/rclone/rclone/compare/v1.62.0...v1.63.0) @@ -73015,8 +72599,6 @@ Felix Bu\[u0308]nemann .IP \[bu] 2 At\['i]lio Ant\[^o]nio .IP \[bu] 2 -Roberto Ricci -.IP \[bu] 2 Carlo Mion .IP \[bu] 2 Chris Lu @@ -73411,12 +72993,92 @@ Peter Fern zzq .IP \[bu] 2 mac-15 +.IP \[bu] 2 +Sawada Tsunayoshi <34431649+TsunayoshiSawada@users.noreply.github.com> +.IP \[bu] 2 +Dean Attali +.IP \[bu] 2 +Fjodor42 +.IP \[bu] 2 +BakaWang +.IP \[bu] 2 +Mahad <56235065+Mahad-lab@users.noreply.github.com> +.IP \[bu] 2 +Vladislav Vorobev +.IP \[bu] 2 +darix +.IP \[bu] 2 +Benjamin <36415086+bbenjamin-sys@users.noreply.github.com> +.IP \[bu] 2 +Chun-Hung Tseng +.IP \[bu] 2 +Ricardo D\[aq]O. +Albanus +.IP \[bu] 2 +gabriel-suela +.IP \[bu] 2 +Tiago Boeing +.IP \[bu] 2 +Edwin Mackenzie-Owen +.IP \[bu] 2 +Niklas Hamb\[:u]chen +.IP \[bu] 2 +yuudi +.IP \[bu] 2 +Zach +.IP \[bu] 2 +nielash <31582349+nielash@users.noreply.github.com> +.IP \[bu] 2 +Julian Lepinski +.IP \[bu] 2 +Raymond Berger +.IP \[bu] 2 +Nihaal Sangha +.IP \[bu] 2 +Masamune3210 <1053504+Masamune3210@users.noreply.github.com> +.IP \[bu] 2 +James Braza +.IP \[bu] 2 +antoinetran +.IP \[bu] 2 +alexia +.IP \[bu] 2 +nielash +.IP \[bu] 2 +Vitor Gomes +.IP \[bu] 2 +Jacob Hands +.IP \[bu] 2 +hideo aoyama <100831251+boukendesho@users.noreply.github.com> +.IP \[bu] 2 +Roberto Ricci +.IP \[bu] 2 +Bj\[/o]rn Smith +.IP \[bu] 2 +Alishan Ladhani <8869764+aladh@users.noreply.github.com> +.IP \[bu] 2 +zjx20 +.IP \[bu] 2 +Oksana <142890647+oks-maytech@users.noreply.github.com> +.IP \[bu] 2 +Volodymyr Kit +.IP \[bu] 2 +David Pedersen +.IP \[bu] 2 +Drew Stinnett .SH Contact the rclone project .SS Forum .PP Forum for questions and general discussion: .IP \[bu] 2 https://forum.rclone.org +.SS Business support +.PP +For business support or sponsorship enquiries please see: +.IP \[bu] 2 +https://rclone.com/ +.IP \[bu] 2 +sponsorship\[at]rclone.com .SS GitHub repository .PP The project\[aq]s repository is located at: @@ -73426,15 +73088,18 @@ https://github.com/rclone/rclone There you can file bug reports or contribute with pull requests. .SS Twitter .PP -You can also follow me on twitter for rclone announcements: +You can also follow Nick on twitter for rclone announcements: .IP \[bu] 2 [\[at]njcw](https://twitter.com/njcw) .SS Email .PP Or if all else fails or you want to ask something private or -confidential email Nick Craig-Wood (mailto:nick@craig-wood.com). -Please don\[aq]t email me requests for help - those are better directed -to the forum. -Thanks! +confidential +.IP \[bu] 2 +info\[at]rclone.com +.PP +Please don\[aq]t email requests for help to this address - those are +better directed to the forum unless you\[aq]d like to sign up for +business support. .SH AUTHORS Nick Craig-Wood.