diff --git a/MANUAL.html b/MANUAL.html
index eb2a7d197..4c0d99e61 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -13,75 +13,13 @@
    div.column{display: inline-block; vertical-align: top; width: 50%;}
    div.hanging-indent{margin-left: 1.5em; text-indent: -1.5em;}
    ul.task-list{list-style: none;}

rclone(1) User Manual

Nick Craig-Wood

-

Mar 18, 2022

+

Jul 09, 2022

Rclone syncs your files to cloud storage

rclone logo

@@ -130,7 +68,7 @@
  • Move files to cloud storage deleting the local after verification
  • Check hashes and for missing/extra files
  • Mount your cloud storage as a network disk
  • -
  • Serve local or remote files over HTTP/WebDav/FTP/SFTP/dlna
  • +
  • Serve local or remote files over HTTP/WebDav/FTP/SFTP/DLNA
  • Experimental Web based GUI
  • Supported providers

    @@ -144,8 +82,11 @@
  • Backblaze B2
  • Box
  • Ceph
  • +
  • China Mobile Ecloud Elastic Object Storage (EOS)
  • +
  • Arvan Cloud Object Storage (AOS)
  • Citrix ShareFile
  • C14
  • +
  • Cloudflare R2
  • DigitalOcean Spaces
  • Digi Storage
  • Dreamhost
  • @@ -156,10 +97,14 @@
  • Google Drive
  • Google Photos
  • HDFS
  • +
  • Hetzner Storage Box
  • +
  • HiDrive
  • HTTP
  • Hubic
  • +
  • Internet Archive
  • Jottacloud
  • IBM COS S3
  • +
  • IDrive e2
  • Koofr
  • Mail.ru Cloud
  • Memset Memstore
  • @@ -197,7 +142,19 @@
  • Zoho WorkDrive
  • The local filesystem
  • -

    Links

    +

    Virtual providers

    +

    These backends adapt or modify other storage providers:

    + +

    rclone version

    Show the version number.

    -

    Synopsis

    +

    Synopsis

    Show the rclone version number, the go version, the build target OS and architecture, the runtime OS and kernel version and bitness, build tags and the type of executable (static or dynamic).

    For example:

    $ rclone version
    @@ -807,7 +785,7 @@ beta:   1.42.0.5      (released 2018-06-17)
     
     

    rclone cleanup

    Clean up the remote if possible.

    -

    Synopsis

    +

    Synopsis

    Clean up the remote if possible. Empty the trash or delete old file versions. Not supported by all remotes.

    rclone cleanup remote:path [flags]

    Options

    @@ -819,10 +797,10 @@ beta: 1.42.0.5 (released 2018-06-17)

    rclone dedupe

    Interactively find duplicate filenames and delete/rename them.

    -

    Synopsis

    +

    Synopsis

    By default dedupe interactively finds files with duplicate names and offers to delete all but one or rename them to be different. This is known as deduping by name.

    Deduping by name is only useful with a small group of backends (e.g. Google Drive, Opendrive) that can have duplicate file names. It can be run on wrapping backends (e.g. crypt) if they wrap a backend which supports duplicate file names.

    -

    However if --by-hash is passed in then dedupe will find files with duplicate hashes instead which will work on any backend which supports at least one hash. This can be used to find files with duplicate content. This is known as deduping by hash.

    +

    However if --by-hash is passed in then dedupe will find files with duplicate hashes instead which will work on any backend which supports at least one hash. This can be used to find files with duplicate content. This is known as deduping by hash.

    If deduping by name, first rclone will merge directories with the same name. It will do this iteratively until all the identically named directories have been merged.

    Next, if deduping by name, for every group of duplicate file names / hashes, it will delete all but one identical file it finds without confirmation. This means that for most duplicated files the dedupe command will not be interactive.

    dedupe considers files to be identical if they have the same file path and the same hash. If the backend does not support hashes (e.g. crypt wrapping Google Drive) then they will never be found to be identical. If you use the --size-only flag then files will be considered identical if they have the same size (any hash will be ignored). This can be useful on crypt backends which do not support hashes.
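    As a sketch of typical invocations (assuming a hypothetical remote named drive: that allows duplicate names):

    ```shell
    # Interactive dedupe by name (the default mode)
    rclone dedupe drive:dupes

    # Non-interactive: keep only the newest copy in each duplicate set
    rclone dedupe --dedupe-mode newest drive:dupes

    # Dedupe by content hash rather than by name
    rclone dedupe --by-hash drive:dupes
    ```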

    @@ -898,7 +876,7 @@ two-3.txt: renamed from: two.txt

    rclone about

    Get quota information from the remote.

    -

    Synopsis

    +

    Synopsis

    rclone about prints quota information about a remote to standard output. The output is typically used, free, quota and trash contents.

    E.g. Typical output from rclone about remote: is:

    Total:   17 GiB
    @@ -944,7 +922,7 @@ Other:   8849156022

    rclone authorize

    Remote authorization.

    -

    Synopsis

    +

    Synopsis

    Remote authorization. Used to authorize a remote or headless rclone from a machine with a browser - use as instructed by rclone config.

    Use the --auth-no-open-browser to prevent rclone to open auth link in default browser automatically.

    rclone authorize [flags]
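    For example, when configuring a headless machine you might run the following on a machine with a browser (the backend name "drive" is illustrative):

    ```shell
    # Generate a token for the named backend, opening the default browser
    rclone authorize "drive"

    # Print the auth link instead of opening a browser automatically
    rclone authorize --auth-no-open-browser "drive"
    ```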
    @@ -958,7 +936,7 @@ Other: 8849156022

    rclone backend

    Run a backend-specific command.

    -

    Synopsis

    +

    Synopsis

    This runs a backend-specific command. The commands themselves (except for "help" and "features") are defined by the backends and you should see the backend docs for definitions.

    You can discover what commands a backend implements by using

    rclone backend help remote:
    @@ -982,7 +960,7 @@ rclone backend help <backendname>

    rclone bisync

    Perform bidirectional synchronization between two paths.

    -

    Synopsis

    +

    Synopsis

    Perform bidirectional synchronization between two paths.

    Bisync provides a bidirectional cloud sync solution in rclone. It retains the Path1 and Path2 filesystem listings from the prior run. On each successive run it will:

  • List files on Path1 and Path2, and check for changes on each side. Changes include New, Newer, Older, and Deleted files.
  • Propagate changes on Path1 to Path2, and vice-versa.

    See full bisync description for details.
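    A minimal sketch of a first run and subsequent runs (the paths are illustrative):

    ```shell
    # The first run must establish the baseline listings with --resync
    rclone bisync /path/to/local remote:path --resync

    # Later runs propagate changes in both directions
    rclone bisync /path/to/local remote:path
    ```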

    @@ -1006,7 +984,7 @@ rclone backend help <backendname>

    rclone cat

    Concatenates any files and sends them to stdout.

    -

    Synopsis

    +

    Synopsis

    rclone cat sends any files to standard output.

    You can use it like this to output a single file

    rclone cat remote:path/to/file
    @@ -1030,7 +1008,7 @@ rclone backend help <backendname>

    rclone checksum

    Checks the files in the source against a SUM file.

    -

    Synopsis

    +

    Synopsis

    Checks that hashsums of source files match the SUM file. It compares hashes (MD5, SHA1, etc) and logs a report of files which don't match. It doesn't alter the file system.

    If you supply the --download flag, it will download the data from remote and calculate the contents hash on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.

    Note that hash values in the SUM file are treated as case insensitive.
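    For instance, to verify a tree against a SUM file (the SUM file and remote names are illustrative):

    ```shell
    # Compare remote files against a local SHA1SUMS file
    rclone checksum sha1 SHA1SUMS remote:path

    # Force a full read of the data, useful when the remote has no native hashes
    rclone checksum md5 --download MD5SUMS remote:path
    ```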

    @@ -1061,8 +1039,8 @@ rclone backend help <backendname>
  • rclone - Show help for rclone commands, flags and backends.
  • rclone completion

    -

    generate the autocompletion script for the specified shell

    -

    Synopsis

    +

    Generate the autocompletion script for the specified shell

    +

    Synopsis

    Generate the autocompletion script for rclone for the specified shell. See each sub-command's help for details on how to use the generated script.

    Options

      -h, --help   help for completion
    @@ -1070,18 +1048,23 @@ rclone backend help <backendname>

    SEE ALSO

    rclone completion bash

    -

    generate the autocompletion script for bash

    -

    Synopsis

    +

    Generate the autocompletion script for bash

    +

    Synopsis

    Generate the autocompletion script for the bash shell.

    This script depends on the 'bash-completion' package. If it is not installed already, you can install it via your OS's package manager.

    -

    To load completions in your current shell session: $ source <(rclone completion bash)

    -

    To load completions for every new session, execute once: Linux: $ rclone completion bash > /etc/bash_completion.d/rclone MacOS: $ rclone completion bash > /usr/local/etc/bash_completion.d/rclone

    +

    To load completions in your current shell session:

    +
    source <(rclone completion bash)
    +

    To load completions for every new session, execute once:

    +

    Linux:

    +
    rclone completion bash > /etc/bash_completion.d/rclone
    +

    macOS:

    +
    rclone completion bash > /usr/local/etc/bash_completion.d/rclone

    You will need to start a new shell for this setup to take effect.

    rclone completion bash

    Options

    @@ -1090,14 +1073,16 @@ rclone backend help <backendname>

    See the global flags page for global options not listed here.

    SEE ALSO

    rclone completion fish

    -

    generate the autocompletion script for fish

    -

    Synopsis

    +

    Generate the autocompletion script for fish

    +

    Synopsis

    Generate the autocompletion script for the fish shell.

    -

    To load completions in your current shell session: $ rclone completion fish | source

    -

    To load completions for every new session, execute once: $ rclone completion fish > ~/.config/fish/completions/rclone.fish

    +

    To load completions in your current shell session:

    +
    rclone completion fish | source
    +

    To load completions for every new session, execute once:

    +
    rclone completion fish > ~/.config/fish/completions/rclone.fish

    You will need to start a new shell for this setup to take effect.

    rclone completion fish [flags]

    Options

    @@ -1106,13 +1091,14 @@ rclone backend help <backendname>

    See the global flags page for global options not listed here.

    SEE ALSO

    rclone completion powershell

    -

    generate the autocompletion script for powershell

    -

    Synopsis

    +

    Generate the autocompletion script for powershell

    +

    Synopsis

    Generate the autocompletion script for powershell.

    -

    To load completions in your current shell session: PS C:> rclone completion powershell | Out-String | Invoke-Expression

    +

    To load completions in your current shell session:

    +
    rclone completion powershell | Out-String | Invoke-Expression

    To load completions for every new session, add the output of the above command to your powershell profile.

    rclone completion powershell [flags]

    Options

    @@ -1121,15 +1107,19 @@ rclone backend help <backendname>

    See the global flags page for global options not listed here.

    SEE ALSO

    rclone completion zsh

    -

    generate the autocompletion script for zsh

    -

    Synopsis

    +

    Generate the autocompletion script for zsh

    +

    Synopsis

    Generate the autocompletion script for the zsh shell.

    If shell completion is not already enabled in your environment you will need to enable it. You can execute the following once:

    -

    $ echo "autoload -U compinit; compinit" >> ~/.zshrc

    -

    To load completions for every new session, execute once: # Linux: $ rclone completion zsh > "${fpath[1]}/_rclone" # macOS: $ rclone completion zsh > /usr/local/share/zsh/site-functions/_rclone

    +
    echo "autoload -U compinit; compinit" >> ~/.zshrc
    +

    To load completions for every new session, execute once:

    +

    Linux:

    +
    rclone completion zsh > "${fpath[1]}/_rclone"
    +

    macOS:

    +
    rclone completion zsh > /usr/local/share/zsh/site-functions/_rclone

    You will need to start a new shell for this setup to take effect.

    rclone completion zsh [flags]

    Options

    @@ -1138,11 +1128,11 @@ rclone backend help <backendname>

    See the global flags page for global options not listed here.

    SEE ALSO

    rclone config create

    Create a new remote with name, type and options.

    -

    Synopsis

    +

    Synopsis

    Create a new remote of name with type and options. The options should be passed in pairs of key value or as key=value.

    For example, to make a swift remote of name myremote using auto config you would do:

    rclone config create myremote swift env_auth true
    @@ -1223,7 +1213,7 @@ rclone config create myremote swift env_auth=true

    rclone config disconnect

    Disconnects user from remote

    -

    Synopsis

    +

    Synopsis

    This disconnects the remote: passed in to the cloud storage system.

    This normally means revoking the oauth token.

    To reconnect use "rclone config reconnect".

    @@ -1247,7 +1237,7 @@ rclone config create myremote swift env_auth=true

    rclone config edit

    Enter an interactive configuration session.

    -

    Synopsis

    +

    Synopsis

    Enter an interactive configuration session where you can setup new remotes and manage existing ones. You may also set or remove a password to protect your configuration.

    rclone config edit [flags]

    Options

    @@ -1269,7 +1259,7 @@ rclone config create myremote swift env_auth=true

    rclone config password

    Update password in an existing remote.

    -

    Synopsis

    +

    Synopsis

    Update an existing remote's password. The password should be passed in pairs of key password or as key=password. The password should be passed in clear (unobscured).

    For example, to set password of a remote of name myremote you would do:

    rclone config password myremote fieldname mypassword
    @@ -1305,7 +1295,7 @@ rclone config password myremote fieldname=mypassword

    rclone config reconnect

    Re-authenticates user with remote.

    -

    Synopsis

    +

    Synopsis

    This reconnects remote: passed in to the cloud storage system.

    To disconnect the remote use "rclone config disconnect".

    This normally means going through the interactive oauth flow again.

    @@ -1339,7 +1329,7 @@ rclone config password myremote fieldname=mypassword

    rclone config update

    Update options in an existing remote.

    -

    Synopsis

    +

    Synopsis

    Update an existing remote's options. The options should be passed in pairs of key value or as key=value.

    For example, to update the env_auth field of a remote of name myremote you would do:

    rclone config update myremote env_auth true
    @@ -1410,7 +1400,7 @@ rclone config update myremote env_auth=true

    rclone config userinfo

    Prints info about logged in user of remote.

    -

    Synopsis

    +

    Synopsis

    This prints the details of the person logged in to the cloud storage system.

    rclone config userinfo remote: [flags]

    Options

    @@ -1423,9 +1413,9 @@ rclone config update myremote env_auth=true

    rclone copyto

    Copy files from source to dest, skipping identical files.

    -

    Synopsis

    +

    Synopsis

    If source:path is a file or directory then it copies it to a file or directory named dest:path.

    -

    This can be used to upload single files to other than their current name. If the source is a directory then it acts exactly like the copy command.

    +

    This can be used to upload single files to other than their current name. If the source is a directory then it acts exactly like the copy command.

    So

    rclone copyto src dst

    where src and dst are rclone paths, either remote:path or /path/to/local or C:.

    @@ -1447,18 +1437,19 @@ if src is directory

    rclone copyurl

    Copy url content to dest.

    -

    Synopsis

    +

    Synopsis

    Download a URL's content and copy it to the destination without saving it in temporary storage.

    -

    Setting --auto-filename will cause the file name to be retrieved from the URL (after any redirections) and used in the destination path. With --print-filename in addition, the resulting file name will be printed.

    +

    Setting --auto-filename will attempt to determine the filename automatically from the URL (after any redirections) and use it in the destination path. Adding --header-filename will use a filename set in the HTTP Content-Disposition header, if the server provides one, instead of the name from the URL. Adding --print-filename as well will print the resulting file name.

    Setting --no-clobber will prevent overwriting file on the destination if there is one with the same name.

    Setting --stdout or making the output file name - will cause the output to be written to standard output.

    rclone copyurl https://example.com dest:path [flags]
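    A few variations on the flags described above (URLs and destination paths are illustrative):

    ```shell
    # Name the destination file after the URL (after redirects), printing the result
    rclone copyurl --auto-filename --print-filename https://example.com/file.zip remote:dir

    # Refuse to overwrite an existing file of the same name
    rclone copyurl --no-clobber https://example.com/file.zip remote:dir/file.zip

    # Stream the download to standard output instead of a file
    rclone copyurl --stdout https://example.com/robots.txt
    ```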

    Options

    -
      -a, --auto-filename    Get the file name from the URL and use it for destination file path
    -  -h, --help             help for copyurl
    -      --no-clobber       Prevent overwriting file with same name
    -  -p, --print-filename   Print the resulting name from --auto-filename
    -      --stdout           Write the output to stdout rather than a file
    +
      -a, --auto-filename     Get the file name from the URL and use it for destination file path
    +      --header-filename   Get the file name from the Content-Disposition header
    +  -h, --help              help for copyurl
    +      --no-clobber        Prevent overwriting file with same name
    +  -p, --print-filename    Print the resulting name from --auto-filename
    +      --stdout            Write the output to stdout rather than a file

    See the global flags page for global options not listed here.

    SEE ALSO

    rclone cryptcheck

    Cryptcheck checks the integrity of a crypted remote.

    -

    Synopsis

    -

    rclone cryptcheck checks a remote against a crypted remote. This is the equivalent of running rclone check, but able to check the checksums of the crypted remote.

    +

    Synopsis

    +

    rclone cryptcheck checks a remote against a crypted remote. This is the equivalent of running rclone check, but able to check the checksums of the crypted remote.

    For it to work the underlying remote of the cryptedremote must support some kind of checksum.

    It works by reading the nonce from each file on the cryptedremote: and using that to encrypt each file on the remote:. It then checks the checksum of the underlying file on the cryptedremote: against the checksum of the file it has just encrypted.

    Use it like this
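    For example (the plaintext path and crypt remote name are illustrative):

    ```shell
    # Check a local plaintext tree against its encrypted copy
    rclone cryptcheck /path/to/files encryptedremote:path
    ```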

    @@ -1502,14 +1493,14 @@ if src is directory

    rclone cryptdecode

    Cryptdecode returns unencrypted file names.

    -

    Synopsis

    +

    Synopsis

    rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items.

    -

    If you supply the --reverse flag, it will return encrypted file names.

    +

    If you supply the --reverse flag, it will return encrypted file names.

    Use it like this

    rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
     
     rclone cryptdecode --reverse encryptedremote: filename1 filename2
    -

    Another way to accomplish this is by using the rclone backend encode (or decode)command. See the documentation on the crypt overlay for more info.

    +

    Another way to accomplish this is by using the rclone backend encode (or decode) command. See the documentation on the crypt overlay for more info.

    rclone cryptdecode encryptedremote: encryptedfilename [flags]

    Options

      -h, --help      help for cryptdecode
    @@ -1521,7 +1512,7 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2

    rclone deletefile

    Remove a single file from remote.

    -

    Synopsis

    +

    Synopsis

    Remove a single file from remote. Unlike delete it cannot be used to remove a directory and it doesn't obey include/exclude filters - if the specified file exists, it will always be removed.

    rclone deletefile remote:path [flags]

    Options

    @@ -1533,8 +1524,8 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2

    rclone genautocomplete

    Output completion script for a given shell.

    -

    Synopsis

    -

    Generates a shell completion script for rclone. Run with --help to list the supported shells.

    +

    Synopsis

    +

    Generates a shell completion script for rclone. Run with --help to list the supported shells.

    Options

      -h, --help   help for genautocomplete

    See the global flags page for global options not listed here.

    @@ -1547,7 +1538,7 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2

    rclone genautocomplete bash

    Output bash completion script for rclone.

    -

    Synopsis

    +

    Synopsis

    Generates a bash shell autocompletion script for rclone.

    This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, e.g.

    sudo rclone genautocomplete bash
    @@ -1565,7 +1556,7 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2

    rclone genautocomplete fish

    Output fish completion script for rclone.

    -

    Synopsis

    +

    Synopsis

    Generates a fish autocompletion script for rclone.

    This writes to /etc/fish/completions/rclone.fish by default so will probably need to be run with sudo or as root, e.g.

    sudo rclone genautocomplete fish
    @@ -1583,7 +1574,7 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2

    rclone genautocomplete zsh

    Output zsh completion script for rclone.

    -

    Synopsis

    +

    Synopsis

    Generates a zsh autocompletion script for rclone.

    This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, e.g.

    sudo rclone genautocomplete zsh
    @@ -1601,7 +1592,7 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2

    rclone gendocs

    Output markdown docs for rclone to the directory supplied.

    -

    Synopsis

    +

    Synopsis

    This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.

    rclone gendocs output_directory [flags]

    Options

    @@ -1613,9 +1604,10 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2

    rclone hashsum

    Produces a hashsum file for all the objects in the path.

    -

    Synopsis

    +

    Synopsis

    Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool.

    By default, the hash is requested from the remote. If the hash is not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling any hash for any remote.

    +

    For the MD5 and SHA1 algorithms there are also dedicated commands, md5sum and sha1sum.

    This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a relative path).

    Run without a hash to see the list of all supported hashes, e.g.

    $ rclone hashsum
    @@ -1626,6 +1618,7 @@ Supported hashes are:
       * crc32
       * sha256
       * dropbox
    +  * hidrive
       * mailru
       * quickxor

    Then

    @@ -1645,7 +1638,7 @@ Supported hashes are:

    rclone link

    Generate public link to file/folder.

    -

    Synopsis

    +

    Synopsis

    rclone link will create, retrieve or remove a public link to the given file or folder.

    rclone link remote:path/to/file
     rclone link remote:path/to/folder/
    @@ -1666,9 +1659,9 @@ rclone link --expire 1d remote:path/to/file

    rclone listremotes

    List all the remotes in the config file.

    -

    Synopsis

    +

    Synopsis

    rclone listremotes lists all the available remotes from the config file.

    -

    When uses with the -l flag it lists the types too.

    +

    When used with the --long flag it lists the types too.

    rclone listremotes [flags]
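    For example:

    ```shell
    # Names only
    rclone listremotes

    # Names plus remote types
    rclone listremotes --long
    ```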

    Options

      -h, --help   help for listremotes
    @@ -1680,7 +1673,7 @@ rclone link --expire 1d remote:path/to/file

    rclone lsf

    List directories and objects in remote:path formatted for parsing.

    -

    Synopsis

    +

    Synopsis

    List the contents of the source path (directories and objects) to standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix.

    Eg

    $ rclone lsf swift:bucket
    @@ -1689,7 +1682,7 @@ canole
     diwogej7
     ferejej3gux/
     fubuwic
    -

    Use the --format option to control what gets listed. By default this is just the path, but you can use these parameters to control the output:

    +

    Use the --format option to control what gets listed. By default this is just the path, but you can use these parameters to control the output:

    p - path
     s - size
     t - modification time
    @@ -1698,8 +1691,9 @@ i - ID of object
     o - Original ID of underlying object
     m - MimeType of object if known
     e - encrypted name
    -T - tier of storage if known, e.g. "Hot" or "Cool"
    -

    So if you wanted the path, size and modification time, you would use --format "pst", or maybe --format "tsp" to put the path last.

    +T - tier of storage if known, e.g. "Hot" or "Cool"
    +M - Metadata of object in JSON blob format, eg {"key":"value"}
    +

    So if you wanted the path, size and modification time, you would use --format "pst", or maybe --format "tsp" to put the path last.

    Eg

    $ rclone lsf  --format "tsp" swift:bucket
     2016-06-25 18:55:41;60295;bevajer5jef
    @@ -1707,7 +1701,7 @@ T - tier of storage if known, e.g. "Hot" or "Cool"

    -

    If you specify "h" in the format you will get the MD5 hash by default, use the "--hash" flag to change which hash you want. Note that this can be returned as an empty string if it isn't available on the object (and for directories), "ERROR" if there was an error reading it from the object and "UNSUPPORTED" if that object does not support that hash type.

    +

    If you specify "h" in the format you will get the MD5 hash by default, use the --hash flag to change which hash you want. Note that this can be returned as an empty string if it isn't available on the object (and for directories), "ERROR" if there was an error reading it from the object and "UNSUPPORTED" if that object does not support that hash type.

    For example, to emulate the md5sum command you can use

    rclone lsf -R --hash MD5 --format hp --separator "  " --files-only .

    Eg

    @@ -1718,7 +1712,7 @@ cd65ac234e6fea5925974a51cdd865cc canole 8fd37c3810dd660778137ac3a66cc06d fubuwic 99713e14a4c4ff553acaf1930fad985b gixacuh7ku

    (Though "rclone md5sum ." is an easier way of typing this.)

    -

    By default the separator is ";" this can be changed with the --separator flag. Note that separators aren't escaped in the path so putting it last is a good strategy.

    +

    By default the separator is ";" this can be changed with the --separator flag. Note that separators aren't escaped in the path so putting it last is a good strategy.

    Eg

    $ rclone lsf  --separator "," --format "tshp" swift:bucket
     2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef
    @@ -1732,7 +1726,7 @@ cd65ac234e6fea5925974a51cdd865cc  canole
     test.log,22355
     test.sh,449
     "this file contains a comma, in the file name.txt",6
    -

    Note that the --absolute parameter is useful for making lists of files to pass to an rclone copy with the --files-from-raw flag.

    +

    Note that the --absolute parameter is useful for making lists of files to pass to an rclone copy with the --files-from-raw flag.

    For example, to find all the files modified within one day and copy those only (without traversing the whole directory structure):

    rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files
     rclone copy --files-from-raw new_files /path/to/local remote:path
    @@ -1768,18 +1762,37 @@ rclone copy --files-from-raw new_files /path/to/local remote:path

    rclone lsjson

    List directories and objects in the path in JSON format.

    -

    Synopsis

    +

    Synopsis

    List directories and objects in the path in JSON format.

    The output is an array of Items, where each Item looks like this

{
  "Hashes" : {
     "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f",
     "MD5" : "b1946ac92492d2347c6235b4d2611184",
     "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
  },
  "ID": "y2djkhiujf83u33",
  "OrigID": "UYOJVTUW00Q1RzTDA",
  "IsBucket" : false,
  "IsDir" : false,
  "MimeType" : "application/octet-stream",
  "ModTime" : "2017-05-31T16:15:57.034468261+01:00",
  "Name" : "file.txt",
  "Encrypted" : "v0qpsdq8anpci8n929v3uu9338",
  "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338",
  "Path" : "full/path/goes/here/file.txt",
  "Size" : 6,
  "Tier" : "hot"
}

If --hash is not specified the Hashes property won't be emitted. The types of hash can be specified with the --hash-type parameter (which may be repeated). If --hash-type is set then it implies --hash.

If --no-modtime is specified then ModTime will be blank. This can speed things up on remotes where reading the ModTime takes an extra request (e.g. s3, swift).

If --no-mimetype is specified then MimeType will be blank. This can speed things up on remotes where reading the MimeType takes an extra request (e.g. s3, swift).

If --encrypted is not specified the Encrypted property won't be emitted.

If --dirs-only is not specified files in addition to directories are returned.

If --files-only is not specified directories in addition to the files will be returned.

If --metadata is set then an additional Metadata key will be returned. This will have metadata in rclone standard format as a JSON object.

If --stat is set then a single JSON blob will be returned about the item pointed to. This will return an error if the item isn't found. However on bucket-based backends (like s3, gcs, b2, azureblob etc.) if the item isn't found it will return an empty directory, as it isn't possible to tell empty directories from missing directories there.

The Path field will only show folders below the remote path being listed. If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt" will be "subfolder/file.txt", not "remote:path/subfolder/file.txt". When used without --recursive the Path will always be the same as Name.

    If the directory is a bucket in a bucket-based backend, then "IsBucket" will be set to true. This key won't be present unless it is "true".

    The time is in RFC3339 format with up to nanosecond precision. The number of decimal digits in the seconds will depend on the precision that the remote can hold the times, so if times are accurate to the nearest millisecond (e.g. Google Drive) then 3 digits will always be shown ("2017-05-31T16:15:57.034+01:00") whereas if the times are accurate to the nearest second (Dropbox, Box, WebDav, etc.) no digits will be shown ("2017-05-31T16:15:57+01:00").

    The whole output can be processed as a JSON blob, or alternatively it can be processed line by line as each item is written one to a line.
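Since each item is written on its own line, the output lends itself to post-processing with standard JSON tools. The following is a small illustration using jq (not part of rclone; assumed installed). The echoed array is a hand-written sample Item standing in for real output; in actual use you would pipe `rclone lsjson -R remote:path` instead of echo:

```shell
# Filter lsjson-style output: print "Path,Size" for files only.
# The echoed array below is a hand-written sample standing in for
# the real output of: rclone lsjson -R remote:path
echo '[{"IsDir":false,"Name":"file.txt","Path":"subdir/file.txt","Size":6}]' |
  jq -r '.[] | select(.IsDir == false) | "\(.Path),\(.Size)"'
```

This prints `subdir/file.txt,6`.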

    @@ -1799,7 +1812,7 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
    rclone lsjson remote:path [flags]

    Options

          --dirs-only               Show only directories in the listing
      --encrypted               Show the encrypted names
           --files-only              Show only files in the listing
           --hash                    Include hashes in the output (may take longer)
           --hash-type stringArray   Show only this hash type (may be repeated)
    @@ -1816,7 +1829,7 @@ rclone copy --files-from-raw new_files /path/to/local remote:path

    rclone mount

    Mount the remote as file system on a mountpoint.

    Synopsis

    rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.

    First set up your remote using rclone config. Check it works with rclone ls etc.

    On Linux and macOS, you can run mount in either foreground or background (aka daemon) mode. Mount runs in foreground mode by default. Use the --daemon flag to force background mode. On Windows you can run mount in foreground only, the flag is ignored.
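For example, to mount in background mode on Linux and unmount again later (the paths here are illustrative and assume a configured remote):

```shell
# Start the mount in the background (Linux/macOS only).
mkdir -p /path/to/local/mount
rclone mount remote:path/to/files /path/to/local/mount --daemon

# When finished, unmount it:
fusermount -u /path/to/local/mount   # Linux
# umount /path/to/local/mount        # macOS/BSD
```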

    @@ -1839,7 +1852,7 @@ umount /path/to/local/mount

    The size of the mounted file system will be set according to information retrieved from the remote, the same as returned by the rclone about command. Remotes with unlimited storage may report the used size only, then an additional 1 PiB of free space is assumed. If the remote does not support the about feature at all, then 1 PiB is set as both the total and the free size.

    Installing on Windows

    To run rclone mount on Windows, you will need to download and install WinFsp.

WinFsp is an open-source Windows File System Proxy which makes it easy to write user space file systems for Windows. It provides a FUSE emulation layer which rclone uses in combination with cgofuse. Both of these packages are by Bill Zissimopoulos, who was very helpful during the implementation of rclone mount for Windows.

    Mounting modes on windows

    Unlike other operating systems, Microsoft Windows provides a different filesystem type for network and fixed drives. It optimises access on the assumption fixed disk drives are fast and reliable, while network drives have relatively high latency and less reliability. Some settings can also be differentiated between the two types, for example that Windows Explorer should just display icons and not create preview thumbnails for image and video files on network drives.

    In most cases, rclone will mount the remote as a normal, fixed disk drive by default. However, you can also choose to mount it as a remote network drive, often described as a network share. If you mount an rclone remote using the default, fixed drive mode and experience unexpected program errors, freezes or other issues, consider mounting as a network drive instead.

    @@ -1873,7 +1886,7 @@ rclone mount remote:path/to/files * --volname \\cloud\remote

    Drives created as Administrator are not visible to other accounts, not even an account that was elevated to Administrator with the User Account Control (UAC) feature. A result of this is that if you mount to a drive letter from a Command Prompt run as Administrator, and then try to access the same drive from Windows Explorer (which does not run as Administrator), you will not be able to see the mounted drive.

    If you don't need to access the drive from applications running with administrative privileges, the easiest way around this is to always create the mount from a non-elevated command prompt.

    To make mapped drives available to the user account that created them regardless if elevated or not, there is a special Windows setting called linked connections that can be enabled.

It is also possible to make a drive mount available to everyone on the system, by running the process creating it as the built-in SYSTEM account. There are several ways to do this: One is to use the command-line utility PsExec, from Microsoft's Sysinternals suite, which has option -s to start processes as the SYSTEM account. Another alternative is to run the mount command from a Windows Scheduled Task, or a Windows Service, configured to run as the SYSTEM account. A third alternative is to use the WinFsp.Launcher infrastructure. Note that when running rclone as another user, it will not use the configuration file from your profile unless you tell it to with the --config option. Read more in the install documentation.

    Note that mapping to a directory path, instead of a drive letter, does not suffer from the same limitations.

    Limitations

    Without the use of --vfs-cache-mode this can only write files sequentially, it can only seek when reading. This means that many applications won't work with their files on an rclone mount without --vfs-cache-mode writes or --vfs-cache-mode full. See the VFS File Caching section for more info.

    @@ -1936,7 +1949,7 @@ WantedBy=multi-user.target

    Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

    The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

    VFS Directory Cache

    Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

    --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
     --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

    However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
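If the mount is started with the remote control enabled, the directory cache can also be flushed on demand. This is a sketch assuming default --rc settings and illustrative paths:

```shell
# Start the mount with the remote control API enabled.
rclone mount remote:path /path/to/mount --rc --daemon

# Later, from another shell, refresh the directory cache of the
# running mount without waiting for --dir-cache-time to expire.
rclone rc vfs/refresh
rclone rc vfs/refresh dir=path/to/subdir recursive=true
```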

    @@ -1999,6 +2012,19 @@ WantedBy=multi-user.target

    When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

    When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.

IMPORTANT: not all file systems support sparse files. In particular, FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files, and it will log an ERROR message if one is detected.

Fingerprinting

Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file. Fingerprints are made from:

 - size
 - modification time
 - hash

where available on an object.

On some backends some of these attributes are slow to read (they take an extra API call per object, or extra work per object).

For example hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.

If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.

If you are running a vfs cache over local, s3 or swift backends then using this flag is recommended.

Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again.
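For instance, a sketch of a full VFS cache over an S3 remote, where reading modtime would otherwise cost an extra API call per object (the remote name is illustrative):

```shell
# Mount an S3 bucket with a full VFS cache, skipping the slow
# attributes (here, modtime) when fingerprinting cached files.
rclone mount s3remote:bucket /path/to/mount \
  --vfs-cache-mode full \
  --vfs-fast-fingerprint
```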

    VFS Chunked Reading

    When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.

    These flags control the chunking:

    @@ -2013,20 +2039,23 @@ WantedBy=multi-user.target
    --no-checksum     Don't compare checksums on up/download.
     --no-modtime      Don't read/write the modification time (can speed things up).
     --no-seek         Don't allow seeking in files.
--read-only       Only allow read-only access.

    Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
     --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    --transfers int  Number of file transfers to run in parallel (default 4)

    VFS Case Sensitivity

    Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

    File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

    Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

    The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.

    +

    The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote.

    +

    Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

    If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

VFS Disk Options

This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

--vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

    Alternate report of used bytes

    Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

    WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.

@@ -2058,7 +2087,7 @@ WantedBy=multi-user.target
      --noapplexattr                            Ignore all "com.apple.*" extended attributes (supported on OSX only)
  -o, --option stringArray                      Option for libfuse/WinFsp (repeat if required)
      --poll-interval duration                  Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
      --read-only                               Only allow read-only access
      --uid uint32                              Override the uid field set by the filesystem (not supported on Windows) (default 1000)
      --umask int                               Override the permission bits set by the filesystem (not supported on Windows) (default 2)
      --vfs-cache-max-age duration              Max age of objects in the cache (default 1h0m0s)
@@ -2066,6 +2095,8 @@ WantedBy=multi-user.target
      --vfs-cache-mode CacheMode                Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval duration        Interval to poll the cache for stale objects (default 1m0s)
      --vfs-case-insensitive                    If a file name not found, find a case insensitive match
      --vfs-disk-space-total-size SizeSuffix    Specify the total space of disk (default off)
      --vfs-fast-fingerprint                    Use fast (less accurate) fingerprints for change detection
      --vfs-read-ahead SizeSuffix               Extra read ahead over --buffer-size when using cache-mode full
      --vfs-read-chunk-size SizeSuffix          Read the source objects in chunks (default 128Mi)
      --vfs-read-chunk-size-limit SizeSuffix    If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -2082,9 +2113,9 @@ WantedBy=multi-user.target

    rclone moveto

    Move file or directory from source to dest.

    Synopsis

    If source:path is a file or directory then it moves it to a file or directory named dest:path.

    This can be used to rename files or upload single files to other than their existing name. If the source is a directory then it acts exactly like the move command.

    So

    rclone moveto src dst

    where src and dst are rclone paths, either remote:path or /path/to/local or C:.

    @@ -2107,10 +2138,10 @@ if src is directory

    rclone ncdu

    Explore a remote with a text based user interface.

    Synopsis

    This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?".

    To make the user interface it first scans the entire remote given and builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along.

    You can interact with the user interface using key presses, press '?' to toggle the help on and off. The supported keys are:

     ↑,↓ or k,j to Move
      →,l to enter
      ←,h to return
    @@ -2120,13 +2151,28 @@ if src is directory
      u toggle human-readable format
      n,s,C,A sort by name,size,count,average size
      d delete file/directory
 v select file/directory
 V enter visual select mode
 D delete selected files/directories
 y copy current path to clipboard
 Y display current path
 ^L refresh screen (fix screen corruption)
 ? to toggle help on and off
 q/ESC/^c to quit

Listed files/directories may be prefixed by a one-character flag, some of them combined with a description in brackets at the end of the line. These flags have the following meaning:

e means this is an empty directory, i.e. contains no files (but
  may contain empty subdirectories)
~ means this is a directory where some of the files (possibly in
  subdirectories) have unknown size, and therefore the directory
  size may be underestimated (and average size inaccurate, as it
  is average of the files with known sizes).
. means an error occurred while reading a subdirectory, and
  therefore the directory size may be underestimated (and average
  size inaccurate)
! means an error occurred while reading this directory

This is an homage to the ncdu tool, but for rclone remotes. It is missing lots of features at the moment but is useful as it stands.

    Note that it might take some time to delete big files/directories. The UI won't respond in the meantime since the deletion is done synchronously.

    For a non-interactive listing of the remote, see the tree command. To just get the total size of the remote you can also use the size command.

    rclone ncdu remote:path [flags]

    Options

      -h, --help   help for ncdu
    @@ -2137,11 +2183,11 @@ if src is directory

    rclone obscure

    Obscure password for use in the rclone config file.

    Synopsis

    In the rclone config file, human-readable passwords are obscured. Obscuring them is done by encrypting them and writing them out in base64. This is not a secure way of encrypting these passwords as rclone can decrypt them - it is to prevent "eyedropping" - namely someone seeing a password in the rclone config file by accident.

    Many equally important things (like access tokens) are not obscured in the config file. However it is very hard to shoulder surf a 64 character hex token.

    This command can also accept a password through STDIN instead of an argument by passing a hyphen as an argument. This will use the first line of STDIN as the password not including the trailing newline.

    echo "secretpassword" | rclone obscure -

    If there is no data on STDIN to read, rclone obscure will default to obfuscating the hyphen itself.

    If you want to encrypt the config file then please use config file encryption - see rclone config for more info.

    rclone obscure password [flags]
    @@ -2154,24 +2200,24 @@ if src is directory

    rclone rc

    Run a command against a running rclone.

Synopsis

This runs a command against a running rclone. Use the --url flag to specify a non-default URL to connect on. This can be either a ":port" which is taken to mean "http://localhost:port" or a "host:port" which is taken to mean "http://host:port".

A username and password can be passed in with --user and --pass.

Note that --rc-addr, --rc-user, --rc-pass will be read also for --url, --user, --pass.

    Arguments should be passed in as parameter=value.

    The result will be returned as a JSON object by default.

    The --json parameter can be used to pass in a JSON blob as an input instead of key=value arguments. This is the only way of passing in more complicated values.

    The -o/--opt option can be used to set a key "opt" with key, value options in the form -o key=value or -o key. It can be repeated as many times as required. This is useful for rc commands which take the "opt" parameter which by convention is a dictionary of strings.

    -o key=value -o key2

    Will place this in the "opt" value

{"key":"value", "key2":""}

The -a/--arg option can be used to set strings in the "arg" value. It can be repeated as many times as required. This is useful for rc commands which take the "arg" parameter which by convention is a list of strings.

    -a value -a value2

    Will place this in the "arg" value

    ["value", "value2"]

Use --loopback to connect to the rclone instance running rclone rc. This is very useful for testing commands without having to run an rclone rc server, e.g.:

    rclone rc --loopback operations/about fs=/

Use rclone rc to see a list of all possible commands.

    rclone rc commands parameter [flags]

    Options

      -a, --arg stringArray   Argument placed in the "arg" array
    @@ -2190,14 +2236,14 @@ if src is directory
     
     

    rclone rcat

    Copies standard input to file on remote.

    Synopsis

    rclone rcat reads from standard input (stdin) and copies it to a single remote file.

    echo "hello world" | rclone rcat remote:path/to/file
     ffmpeg - | rclone rcat remote:path/to/file

    If the remote file already exists, it will be overwritten.

rcat will try to upload small files in a single request, which is usually more efficient than the streaming/chunked upload endpoints, which use multiple requests. Exact behaviour depends on the remote. What is considered a small file may be set through --streaming-upload-cutoff. Uploading only starts after the cutoff is reached or if the file ends before that. The data must fit into RAM. The cutoff needs to be small enough to adhere to the limits of your remote, so please check there. Generally speaking, setting this cutoff too high will decrease your performance.

    Use the --size flag to preallocate the file in advance at the remote end and actually stream it, even if remote backend doesn't support streaming.

    --size should be the exact size of the input stream in bytes. If the size of the stream is different in length to the --size passed in then the transfer will likely fail.
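For example, to stream a fixed amount of data while telling rclone the exact size up front (the remote path is illustrative):

```shell
# Generate exactly 1048576 bytes and stream them to the remote.
# --size must match the stream length exactly or the transfer may fail.
head -c 1048576 /dev/urandom | rclone rcat --size 1048576 remote:path/to/file
```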

Note that the upload cannot be retried, because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you're better off caching it locally and then using rclone move to send it to the destination.

    rclone rcat remote:path [flags]

    Options

    @@ -2210,7 +2256,7 @@ ffmpeg - | rclone rcat remote:path/to/file

    rclone rcd

    Run rclone listening to remote control commands only.

    Synopsis

    This runs rclone so that it only listens to remote control commands.

    This is useful if you are controlling rclone via the rc API.

    If you pass in a path to a directory, rclone will serve that directory for GET requests on the URL passed in. It will also open the URL in the browser when rclone is run.

    @@ -2225,11 +2271,11 @@ ffmpeg - | rclone rcat remote:path/to/file

    rclone rmdirs

    Remove empty directories under the path.

Synopsis

    This recursively removes any empty directories (including directories that only contain empty directories), that it finds under the path. The root path itself will also be removed if it is empty, unless you supply the --leave-root flag.

Use command rmdir to delete just the empty directory given by path, not recurse.

This is useful for tidying up remotes that rclone has left a lot of empty directories in. For example the delete command will delete files but leave the directory structure (unless used with option --rmdirs).

To delete a path and any objects in it, use the purge command.

    rclone rmdirs remote:path [flags]

    Options

      -h, --help         help for rmdirs

    rclone selfupdate

    Update the rclone binary.

Synopsis

    This command downloads the latest release of rclone and replaces the currently running binary. The download is verified with a hashsum and cryptographically signed signature.

    If used without flags (or with implied --stable flag), this command will install the latest stable release. However, some issues may be fixed (or features added) only in the latest beta release. In such cases you should run the command with the --beta flag, i.e. rclone selfupdate --beta. You can check in advance what version would be installed by adding the --check flag, then repeat the command without it when you are satisfied.

Sometimes the rclone team may recommend a specific beta or stable rclone release to troubleshoot your issue or to try a bleeding edge feature. The --version VER flag, if given, will update to that concrete version instead of the latest one. If you omit the micro version from VER (for example 1.53), the latest matching micro version will be used.
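For example, a cautious update flow using the flags described above might look like this (guarded so the commands are skipped where rclone is not installed):

```shell
# Preview the version that would be installed, then update for real.
if command -v rclone >/dev/null 2>&1; then
  rclone selfupdate --check           # show current and latest version
  # rclone selfupdate                 # install the latest stable release
  # rclone selfupdate --beta          # or the latest beta
  # rclone selfupdate --version 1.53  # or pin a specific version
else
  echo "rclone not installed; skipping selfupdate preview"
fi
```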


    rclone serve

    Serve a remote over a protocol.

Synopsis

    Serve a remote over a given protocol. Requires the use of a subcommand to specify the protocol, e.g.

    rclone serve http remote:

    Each subcommand has its own options which you can see in their help.

    rclone serve <protocol> [opts] <remote> [flags]
  • rclone serve http - Serve the remote over HTTP.
  • rclone serve restic - Serve the remote for restic's REST API.
  • rclone serve sftp - Serve the remote over SFTP.
  • rclone serve webdav - Serve remote:path over WebDAV.
  • rclone serve dlna

    Serve remote:path over DLNA

Synopsis

    Run a DLNA media server for media stored in an rclone remote. Many devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast packets (SSDP) and will thus only work on LANs.

    Rclone will list all files present in the remote, without filtering based on media formats or file extensions. Additionally, there is no media transcoding support. This means that some players might show files that they are not able to play back correctly.

    Server options

    Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs.


    Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

    The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

    VFS Directory Cache


    Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

    --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
     --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

    However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.


    When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

    When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.

    IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

Fingerprinting

Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file. Fingerprints are made from:

  • size of the object
  • modification time of the object
  • hash of the object

where available on an object.

On some backends some of these attributes are slow to read (they take an extra API call per object, or extra work per object).

For example hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.

If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.

If you are running a VFS cache over the local, s3 or swift backends then using this flag is recommended.

Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again.
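As an illustrative sketch (not rclone's actual code), a full fingerprint combines all three attributes, while a fast fingerprint drops the ones that are slow on the backend in use:

```shell
tmp=$(mktemp)                          # sample file so the sketch is self-contained
printf 'hello' > "$tmp"
size=$(wc -c < "$tmp")                 # size of the object
mtime=$(stat -c %Y "$tmp")             # modification time of the object
hash=$(md5sum "$tmp" | cut -d' ' -f1)  # hash of the object (slow on local/sftp)
echo "full fingerprint: $size,$mtime,$hash"
echo "fast fingerprint: $size,$mtime"  # slow attribute omitted
rm "$tmp"
```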

    VFS Chunked Reading

    When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.
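The growth of sequential chunk requests can be sketched as follows, assuming the 128Mi default of --vfs-read-chunk-size and a hypothetical 1Gi --vfs-read-chunk-size-limit (the size doubles after each chunk read until the limit is reached):

```shell
chunk=$((128 * 1024 * 1024))   # --vfs-read-chunk-size (128Mi default)
limit=$((1024 * 1024 * 1024))  # assumed --vfs-read-chunk-size-limit (1Gi)
offset=0
for request in 1 2 3 4 5; do
  echo "request $request: offset=$offset size=$((chunk / 1024 / 1024))Mi"
  offset=$((offset + chunk))
  doubled=$((chunk * 2))
  if [ "$doubled" -le "$limit" ]; then chunk=$doubled; else chunk=$limit; fi
done
```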

    These flags control the chunking:

    --no-checksum     Don't compare checksums on up/download.
     --no-modtime      Don't read/write the modification time (can speed things up).
     --no-seek         Don't allow seeking in files.
--read-only       Only allow read-only access.

    Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
     --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    --transfers int  Number of file transfers to run in parallel (default 4)

    VFS Case Sensitivity

    Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

    File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

    Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

    If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".
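The fixup can be illustrated with a small sketch (plain shell, not rclone's actual code): an exact-name lookup first, then a fallback that matches the name case-insensitively:

```shell
dir=$(mktemp -d)
touch "$dir/MyFile.txt"          # name as stored on the remote
want="myfile.txt"                # name the user asked for
if [ -e "$dir/$want" ]; then
  echo "exact match: $want"
else
  # fall back to a whole-name, case-insensitive match
  match=$(ls "$dir" | grep -ix "$want" | head -n 1)
  if [ -n "$match" ]; then
    echo "fixup: $want -> $match"
  else
    echo "not found: $want"
  fi
fi
rm -r "$dir"
```

On a case-sensitive file system this prints the fixup line; on a case-insensitive one the exact-match branch triggers instead.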

VFS Disk Options

This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

    --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

    Alternate report of used bytes

    Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

    WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
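What --vfs-used-is-size computes can be sketched over a local directory (an illustration of summing every object's size, as rclone size does; not rclone's code):

```shell
dir=$(mktemp -d)
printf '12345' > "$dir/a.txt"    # 5 bytes
printf '67'    > "$dir/b.txt"    # 2 bytes
# Sum the size of every file, ignoring filters, as --vfs-used-is-size does.
used=$(find "$dir" -type f -exec wc -c {} + | awk 'END { print $1 }')
echo "used bytes: $used"
rm -r "$dir"
```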

--no-modtime                            Don't read/write the modification time (can speed things up)
--no-seek                               Don't allow seeking in files
--poll-interval duration                Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only                             Only allow read-only access
--uid uint32                            Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int                             Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--vfs-cache-max-age duration            Max age of objects in the cache (default 1h0m0s)
--vfs-cache-mode CacheMode              Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration      Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive                  If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix  Specify the total space of disk (default off)
--vfs-fast-fingerprint                  Use fast (less accurate) fingerprints for change detection
--vfs-read-ahead SizeSuffix             Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix  If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)

    rclone serve docker

    Serve any remote on docker's volume plugin API.

Synopsis

This command implements the Docker volume plugin API allowing docker to use rclone as a data storage mechanism for various cloud providers. rclone provides a Docker volume plugin based on it.

To create a docker plugin, one must create a Unix or TCP socket that Docker will look for when you use the plugin, and then it listens for commands from the docker daemon and runs the corresponding code when necessary. Docker plugins can run as a managed plugin under control of the docker daemon or as an independent native service. For testing, you can just run it directly from the command line, for example:

    sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv

    Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

    The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

    VFS Directory Cache


    Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

    --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
     --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

    However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.


    When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

    When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.

    IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

Fingerprinting

Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file. Fingerprints are made from:

  • size of the object
  • modification time of the object
  • hash of the object

where available on an object.

On some backends some of these attributes are slow to read (they take an extra API call per object, or extra work per object).

For example hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.

If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.

If you are running a VFS cache over the local, s3 or swift backends then using this flag is recommended.

Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again.

    VFS Chunked Reading

    When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.

    These flags control the chunking:

    --no-checksum     Don't compare checksums on up/download.
     --no-modtime      Don't read/write the modification time (can speed things up).
     --no-seek         Don't allow seeking in files.
--read-only       Only allow read-only access.

    Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
     --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    --transfers int  Number of file transfers to run in parallel (default 4)

    VFS Case Sensitivity

    Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

    File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

    Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

    If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

VFS Disk Options

This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

    --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

    Alternate report of used bytes

    Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

    WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.

--noapplexattr                          Ignore all "com.apple.*" extended attributes (supported on OSX only)
-o, --option stringArray                Option for libfuse/WinFsp (repeat if required)
--poll-interval duration                Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only                             Only allow read-only access
--socket-addr string                    Address <host:port> or absolute path (default: /run/docker/plugins/rclone.sock)
--socket-gid int                        GID for unix socket (default: current process GID) (default 1000)
--uid uint32                            Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--vfs-cache-mode CacheMode              Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration      Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive                  If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix  Specify the total space of disk (default off)
--vfs-fast-fingerprint                  Use fast (less accurate) fingerprints for change detection
--vfs-read-ahead SizeSuffix             Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix  If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)

    rclone serve ftp

    Serve remote:path over FTP.

Synopsis

Run a basic FTP server to serve a remote over the FTP protocol. This can be viewed with an FTP client or you can make a remote of type FTP to read and write it.

    Server options

    Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    @@ -2606,7 +2688,7 @@ ffmpeg - | rclone rcat remote:path/to/file

    Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

    The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

    VFS Directory Cache


    Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

    --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
     --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

    However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.


    When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

    When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.

    IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

Fingerprinting

Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file. Fingerprints are made from:

  • size of the object
  • modification time of the object
  • hash of the object

where available on an object.

On some backends some of these attributes are slow to read (they take an extra API call per object, or extra work per object).

For example hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.

If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.

If you are running a VFS cache over the local, s3 or swift backends then using this flag is recommended.

Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again.

    VFS Chunked Reading

    When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.

    These flags control the chunking:

    --no-checksum     Don't compare checksums on up/download.
     --no-modtime      Don't read/write the modification time (can speed things up).
     --no-seek         Don't allow seeking in files.
--read-only       Only allow read-only access.

    Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
     --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    --transfers int  Number of file transfers to run in parallel (default 4)
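    For example, to raise the upload parallelism of a write-caching server (shown here for the FTP server these options belong to; the remote path and value are illustrative):

```shell
# Illustrative: write-back cache with up to 8 parallel uploads of
# modified files from the cache
rclone serve ftp remote:path --vfs-cache-mode writes --transfers 8
```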

    VFS Case Sensitivity

    Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

    File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

    Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

    -

    The --vfs-case-insensitive mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.

    -

    The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.

    -

    Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

    +

    The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.

    +

    The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote.
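    As a rough sketch of the fixup rule (pure illustration, not rclone code): an exact match wins, otherwise a name differing only by case is substituted:

```shell
# Hypothetical example: the remote stores "Photo.JPG" and the user asks
# for "photo.jpg"; with --vfs-case-insensitive=true the stored name wins.
stored="Photo.JPG"
requested="photo.jpg"
if [ "$requested" = "$stored" ]; then
    resolved="$requested"    # exact match: use the name as given
elif [ "$(printf '%s' "$requested" | tr '[:upper:]' '[:lower:]')" = \
       "$(printf '%s' "$stored" | tr '[:upper:]' '[:lower:]')" ]; then
    resolved="$stored"       # fixup: adopt the case stored on the remote
else
    resolved=""              # no match at all
fi
echo "$resolved"             # → Photo.JPG
```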

    +

    Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

    If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

    +

    VFS Disk Options

    +

    This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

    +
    --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)
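    For instance, to advertise a fixed 256G of total space (shown on a mount, where df will display the advertised size; the remote and mount point are illustrative):

```shell
# Illustrative: report 256G total disk space instead of the autodetected value
rclone mount remote: /mnt/remote --vfs-disk-space-total-size 256G
```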

    Alternate report of used bytes

    Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

    WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
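    A sketch of how this might be used on a mount (remote name and mount point are illustrative):

```shell
# Illustrative: compute real usage for df at the cost of a full remote scan
rclone mount s3remote:bucket /mnt/bucket --vfs-used-is-size --vfs-cache-mode full
df -h /mnt/bucket
```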

    @@ -2748,7 +2846,7 @@ ffmpeg - | rclone rcat remote:path/to/file
           --passive-port string                    Passive port range to use (default "30000-32000")
           --poll-interval duration                 Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
           --public-ip string                       Public IP address to advertise for passive connections
    -      --read-only                              Mount read-only
    +      --read-only                              Only allow read-only access
           --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
           --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
           --user string                            User name for authentication (default "anonymous")
    @@ -2757,6 +2855,8 @@ ffmpeg - | rclone rcat remote:path/to/file
           --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
           --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects (default 1m0s)
           --vfs-case-insensitive                   If a file name not found, find a case insensitive match
    +      --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
    +      --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
           --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full
           --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
           --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
    @@ -2771,22 +2871,22 @@ ffmpeg - | rclone rcat remote:path/to/file

    rclone serve http

    Serve the remote over HTTP.

    -

    Synopsis

    -

    rclone serve http implements a basic web server to serve the remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it.

    -

    You can use the filter flags (e.g. --include, --exclude) to control what is served.

    -

    The server will log errors. Use -v to see access logs.

    -

    --bwlimit will be respected for file transfers. Use --stats to control the stats printing.

    +

    Synopsis

    +

    Run a basic web server to serve a remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it.

    +

    You can use the filter flags (e.g. --include, --exclude) to control what is served.

    +

    The server will log errors. Use -v to see access logs.

    +

    --bwlimit will be respected for file transfers. Use --stats to control the stats printing.
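    A minimal invocation putting these pieces together might look like this (the remote path and values are illustrative):

```shell
# Illustrative: serve a remote over HTTP with access logging, a bandwidth
# cap, and a filter limiting what is exposed
rclone serve http remote:path --addr :8080 --include "*.pdf" --bwlimit 10M -v
```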

    Server options

    -

    Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    -

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    -

    --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

    -

    --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

    -

    --baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

    +

    Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    +

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    +

    --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

    +

    --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

    +

    --baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.
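    For example, to serve behind a reverse proxy under a prefix (values are illustrative):

```shell
# Illustrative: serve under the /rclone/ prefix on port 8080
rclone serve http remote:path --addr :8080 --baseurl /rclone
# files are then available under http://localhost:8080/rclone/
```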

    SSL/TLS

    -

    By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

    -

    --cert should be a either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

    +

    By default this will serve over HTTP. If you want you can serve over HTTPS. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

    +

    --cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
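    For testing, a self-signed certificate can be generated and used like this (file names and subject are illustrative):

```shell
# Illustrative: create a self-signed certificate, then serve over HTTPS
openssl req -x509 -newkey rsa:4096 -nodes -subj "/CN=localhost" \
    -keyout key.pem -out cert.pem -days 365
rclone serve http remote:path --cert cert.pem --key key.pem
```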

    Template

    -

    --template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to server pages:

    +

    --template allows a user to specify a custom markup template for HTTP and WebDAV serve functions. The server exports the following markup to be used within the template to serve pages:

    @@ -2867,21 +2967,21 @@ ffmpeg - | rclone rcat remote:path/to/file

    Authentication

    By default this will serve files without needing a login.

    -

    You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.

    -

    Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

    +

    You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.

    +

    Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

    To create an htpasswd file:

    touch htpasswd
     htpasswd -B htpasswd user
     htpasswd -B htpasswd anotherUser

    The password file can be updated while rclone is running.

    -

    Use --realm to set the authentication realm.

    -

    Use --salt to change the password hashing salt from the default.

    +

    Use --realm to set the authentication realm.

    +

    Use --salt to change the password hashing salt from the default.
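    A single-user setup with a custom realm might look like this (user name, password and realm are illustrative):

```shell
# Illustrative: single username/password with a custom authentication realm
rclone serve http remote:path --user alice --pass secret --realm "my files"
```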

    VFS - Virtual File System

    This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

    Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

    The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

    VFS Directory Cache

    -

    Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.

    +

    Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

    --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
     --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

    However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
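    For example, on a remote that supports polling, a longer directory cache with more frequent change polling might be configured like this (values are illustrative):

```shell
# Illustrative: cache directory listings for 30m, poll for remote changes
# every 30s (poll interval must be smaller than dir-cache-time)
rclone serve http remote:path --dir-cache-time 30m --poll-interval 30s
```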

    @@ -2944,6 +3044,19 @@ htpasswd -B htpasswd anotherUser

    When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

    When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.

    IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

    +

    Fingerprinting

    +

    Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file. Fingerprints are made from the size, the modification time and the hash of the file, where available on an object.

    +

    On some backends some of these attributes are slow to read (they take an extra API call per object, or extra work per object).

    +

    For example hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.

    +

    If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.
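    As an illustration only (the field layout is assumed, not rclone's exact format), a fingerprint can be pictured as the object's attributes joined together, with the fast variant leaving out whichever attribute is slow on that backend:

```shell
# Assumed layout for illustration: size, modtime and hash joined by commas.
size=1048576
modtime="2024-01-02T03:04:05Z"
md5="d41d8cd98f00b204e9800998ecf8427e"   # slow on e.g. local/sftp backends
full="$size,$modtime,$md5"
fast="$size,$modtime"                    # fast fingerprint: drop the slow hash
echo "$full"
echo "$fast"
```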

    +

    If you are running a VFS cache over local, s3 or swift backends then using this flag is recommended.

    +

    Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again.

    VFS Chunked Reading

    When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.

    These flags control the chunking:

    @@ -2958,20 +3071,23 @@ htpasswd -B htpasswd anotherUser
    --no-checksum     Don't compare checksums on up/download.
     --no-modtime      Don't read/write the modification time (can speed things up).
     --no-seek         Don't allow seeking in files.
    ---read-only       Mount read-only.
    +--read-only       Only allow read-only access.

    Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
     --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)
    -

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers have no effect on mount).

    +

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    --transfers int  Number of file transfers to run in parallel (default 4)

    VFS Case Sensitivity

    Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

    File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

    Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

    -

    The --vfs-case-insensitive mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.

    -

    The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.

    -

    Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

    +

    The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.

    +

    The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote.

    +

    Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

    If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

    +

    VFS Disk Options

    +

    This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

    +
    --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

    Alternate report of used bytes

    Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

    WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.

    @@ -2994,7 +3110,7 @@ htpasswd -B htpasswd anotherUser
           --no-seek                                Don't allow seeking in files
           --pass string                            Password for authentication
           --poll-interval duration                 Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
    -      --read-only                              Mount read-only
    +      --read-only                              Only allow read-only access
           --realm string                           Realm for authentication
           --salt string                            Password hashing salt (default "dlPL2MqE")
           --server-read-timeout duration           Timeout for server reading data (default 1h0m0s)
    @@ -3008,6 +3124,8 @@ htpasswd -B htpasswd anotherUser
           --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
           --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects (default 1m0s)
           --vfs-case-insensitive                   If a file name not found, find a case insensitive match
    +      --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
    +      --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
           --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full
           --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
           --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
    @@ -3022,20 +3140,20 @@ htpasswd -B htpasswd anotherUser

    rclone serve restic

    Serve the remote for restic's REST API.

    -

    Synopsis

    -

    rclone serve restic implements restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.

    +

    Synopsis

    +

    Run a basic web server to serve a remote over restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.

    Restic is a command-line program for doing backups.

    The server will log errors. Use -v to see access logs.

    -

    --bwlimit will be respected for file transfers. Use --stats to control the stats printing.

    +

    --bwlimit will be respected for file transfers. Use --stats to control the stats printing.

    Setting up rclone for use by restic

    First set up a remote for your chosen cloud provider.

    Once you have set up the remote, check it is working with, for example "rclone lsd remote:". You may have called the remote something other than "remote:" - just substitute whatever you called it in the following instructions.

    Now start the rclone restic server

    rclone serve restic -v remote:backup

    Where you can replace "backup" in the above by whatever path in the remote you wish to use.

    -

    By default this will serve on "localhost:8080" you can change this with use of the "--addr" flag.

    +

    By default this will serve on "localhost:8080". You can change this with the --addr flag.

    You might wish to start this server on boot.

    -

    Adding --cache-objects=false will cause rclone to stop caching objects returned from the List call. Caching is normally desirable as it speeds up downloading objects, saves transactions and uses very little memory.

    +

    Adding --cache-objects=false will cause rclone to stop caching objects returned from the List call. Caching is normally desirable as it speeds up downloading objects, saves transactions and uses very little memory.

    Setting up restic to use rclone

    Now you can follow the restic instructions on setting up restic.

    Note that you will need restic 0.8.2 or later to interoperate with rclone.

    @@ -3062,14 +3180,14 @@ snapshot 45c8fdd8 saved
    $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/ # backup user2 stuff

    Private repositories

    -

    The "--private-repos" flag can be used to limit users to repositories starting with a path of /<username>/.

    +

    The --private-repos flag can be used to limit users to repositories starting with a path of /<username>/.

    Server options

    -

    Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    -

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    -

    --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

    -

    --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

    -

    --baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

    -

    --template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to server pages:

    +

    Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    +

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    +

    --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

    +

    --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

    +

    --baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

    +

    --template allows a user to specify a custom markup template for HTTP and WebDAV serve functions. The server exports the following markup to be used within the template to serve pages:

    @@ -3150,17 +3268,17 @@ $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/

    Authentication

    By default this will serve files without needing a login.

    -

    You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.

    -

    Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

    +

    You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.

    +

    Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

    To create an htpasswd file:

    touch htpasswd
     htpasswd -B htpasswd user
     htpasswd -B htpasswd anotherUser

    The password file can be updated while rclone is running.

    -

    Use --realm to set the authentication realm.

    +

    Use --realm to set the authentication realm.

    SSL/TLS

    -

    By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

    -

    --cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

    +

    By default this will serve over HTTP. If you want you can serve over HTTPS. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

    +

    --cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

    rclone serve restic remote:path [flags]

    Options

          --addr string                     IPaddress:Port or :Port to bind server to (default "localhost:8080")
    @@ -3188,26 +3306,26 @@ htpasswd -B htpasswd anotherUser

    rclone serve sftp

    Serve the remote over SFTP.

    -

    Synopsis

    -

    rclone serve sftp implements an SFTP server to serve the remote over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it.

    -

    You can use the filter flags (e.g. --include, --exclude) to control what is served.

    -

    The server will log errors. Use -v to see access logs.

    -

    --bwlimit will be respected for file transfers. Use --stats to control the stats printing.

    -

    You must provide some means of authentication, either with --user/--pass, an authorized keys file (specify location with --authorized-keys - the default is the same as ssh), an --auth-proxy, or set the --no-auth flag for no authentication when logging in.

    +

    Synopsis

    +

    Run an SFTP server to serve a remote over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it.

    +

    You can use the filter flags (e.g. --include, --exclude) to control what is served.

    +

    The server will log errors. Use -v to see access logs.

    +

    --bwlimit will be respected for file transfers. Use --stats to control the stats printing.

    +

    You must provide some means of authentication, either with --user/--pass, an authorized keys file (specify location with --authorized-keys - the default is the same as ssh), an --auth-proxy, or set the --no-auth flag for no authentication when logging in.
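    Two illustrative ways of providing authentication (user name, password and key file path are placeholders):

```shell
# Illustrative: password authentication on port 2022
rclone serve sftp remote:path --user sftpuser --pass secret --addr :2022
# Illustrative: key-based authentication from an authorized keys file
rclone serve sftp remote:path --authorized-keys ~/.ssh/authorized_keys
```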

    Note that this also implements a small number of shell commands so that it can provide md5sum/sha1sum/df information for the rclone sftp backend. This means that it can support SHA1SUMs, MD5SUMs and the about command when paired with the rclone sftp backend.

    -

    If you don't supply a host --key then rclone will generate rsa, ecdsa and ed25519 variants, and cache them for later use in rclone's cache directory (see "rclone help flags cache-dir") in the "serve-sftp" directory.

    -

    By default the server binds to localhost:2022 - if you want it to be reachable externally then supply "--addr :2022" for example.

    -

    Note that the default of "--vfs-cache-mode off" is fine for the rclone sftp backend, but it may not be with other SFTP clients.

    -

    If --stdio is specified, rclone will serve SFTP over stdio, which can be used with sshd via ~/.ssh/authorized_keys, for example:

    +

    If you don't supply a host --key then rclone will generate rsa, ecdsa and ed25519 variants, and cache them for later use in rclone's cache directory (see rclone help flags cache-dir) in the "serve-sftp" directory.

    +

    By default the server binds to localhost:2022 - if you want it to be reachable externally then supply --addr :2022 for example.

    +

    Note that the default of --vfs-cache-mode off is fine for the rclone sftp backend, but it may not be with other SFTP clients.

    +

    If --stdio is specified, rclone will serve SFTP over stdio, which can be used with sshd via ~/.ssh/authorized_keys, for example:

    restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ...
    -

    On the client you need to set "--transfers 1" when using --stdio. Otherwise multiple instances of the rclone server are started by OpenSSH which can lead to "corrupted on transfer" errors. This is the case because the client chooses indiscriminately which server to send commands to while the servers all have different views of the state of the filing system.

    -

    The "restrict" in authorized_keys prevents SHA1SUMs and MD5SUMs from beeing used. Omitting "restrict" and using --sftp-path-override to enable checksumming is possible but less secure and you could use the SFTP server provided by OpenSSH in this case.

    +

    On the client you need to set --transfers 1 when using --stdio. Otherwise multiple instances of the rclone server are started by OpenSSH which can lead to "corrupted on transfer" errors. This is the case because the client chooses indiscriminately which server to send commands to while the servers all have different views of the state of the filing system.
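    As a sketch, assuming a remote named sftpremote: has been configured on the client to point at the sshd host, the copy would then be run with a single transfer:

    ```shell
    # One transfer means OpenSSH only starts one rclone server instance
    rclone copy --transfers 1 sftpremote:photos /local/photos
    ```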

    +

    The "restrict" in authorized_keys prevents SHA1SUMs and MD5SUMs from being used. Omitting "restrict" and using --sftp-path-override to enable checksumming is possible but less secure; in that case you could use the SFTP server provided by OpenSSH instead.

    VFS - Virtual File System

    This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

    Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

    The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

    VFS Directory Cache

    -

    Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.

    +

    Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

    --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
     --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

    However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
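    For example, to refresh the directory cache more aggressively (the values here are illustrative, with the poll interval kept smaller than the cache time):

    ```shell
    rclone serve sftp remote:path --dir-cache-time 30s --poll-interval 10s
    ```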

    @@ -3270,6 +3388,19 @@ htpasswd -B htpasswd anotherUser

    When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

    When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.

    IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

    +

    Fingerprinting

    +

    Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file. Fingerprints are made from:

    +

    • size
    • modification time
    • hash

    where available on an object.

    +

    On some backends some of these attributes are slow to read (they take an extra API call per object, or extra work per object).

    +

    For example, hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.

    +

    If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.

    +

    If you are running a VFS cache over the local, s3 or swift backends then using this flag is recommended.
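    A sketch of serving with a full VFS cache and fast fingerprints enabled (remote:path is a placeholder):

    ```shell
    rclone serve sftp remote:path --vfs-cache-mode full --vfs-fast-fingerprint
    ```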

    +

    Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again.

    VFS Chunked Reading

    When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.

    These flags control the chunking:

    @@ -3284,20 +3415,23 @@ htpasswd -B htpasswd anotherUser
    --no-checksum     Don't compare checksums on up/download.
     --no-modtime      Don't read/write the modification time (can speed things up).
     --no-seek         Don't allow seeking in files.
    ---read-only       Mount read-only.
    +--read-only       Only allow read-only access.

    Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
     --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)
    -

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers have no effect on mount).

    +

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    --transfers int  Number of file transfers to run in parallel (default 4)

    VFS Case Sensitivity

    Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

    File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

    Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

    -

    The --vfs-case-insensitive mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.

    -

    The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.

    -

    Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

    +

    The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.

    +

    The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote.

    +

    Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

    If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

    +

    VFS Disk Options

    +

    This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

    +
    --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)
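    For example, to report a fixed 256 GiB total disk size to clients (the size chosen here is illustrative):

    ```shell
    rclone serve sftp remote:path --vfs-disk-space-total-size 256G
    ```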

    Alternate report of used bytes

    Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

    WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
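    A sketch of enabling this accurate but expensive reporting:

    ```shell
    # Used space is computed by scanning the whole remote, like rclone size
    rclone serve sftp remote:path --vfs-used-is-size
    ```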

    @@ -3348,7 +3482,7 @@ htpasswd -B htpasswd anotherUser
          --no-seek                                Don't allow seeking in files
          --pass string                            Password for authentication
          --poll-interval duration                 Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
    -     --read-only                              Mount read-only
    +     --read-only                              Only allow read-only access
          --stdio                                  Run an sftp server on stdin/stdout
          --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
          --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
    @@ -3358,6 +3492,8 @@ htpasswd -B htpasswd anotherUser
          --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
          --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects (default 1m0s)
          --vfs-case-insensitive                   If a file name not found, find a case insensitive match
    +     --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
    +     --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
          --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full
          --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
          --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
    @@ -3371,21 +3507,20 @@ htpasswd -B htpasswd anotherUser
  • rclone serve - Serve a remote over a protocol.
  • rclone serve webdav

    -

    Serve remote:path over webdav.

    -

    Synopsis

    -

    rclone serve webdav implements a basic webdav server to serve the remote over HTTP via the webdav protocol. This can be viewed with a webdav client, through a web browser, or you can make a remote of type webdav to read and write it.

    -

    Webdav options

    +

    Serve remote:path over WebDAV.

    +

    Synopsis

    +

    Run a basic WebDAV server to serve a remote over HTTP via the WebDAV protocol. This can be viewed with a WebDAV client, through a web browser, or you can make a remote of type WebDAV to read and write it.

    +

    WebDAV options

    --etag-hash

    This controls the ETag header. Without this flag the ETag will be based on the ModTime and Size of the object.

    -

    If this flag is set to "auto" then rclone will choose the first supported hash on the backend or you can use a named hash such as "MD5" or "SHA-1".

    -

    Use "rclone hashsum" to see the full list.

    +

    If this flag is set to "auto" then rclone will choose the first supported hash on the backend or you can use a named hash such as "MD5" or "SHA-1". Use the hashsum command to see the full list.
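    For example (the choice of MD5 assumes the backend supports that hash):

    ```shell
    # List the hashes rclone knows about
    rclone hashsum

    # Serve with ETags derived from MD5
    rclone serve webdav remote:path --etag-hash MD5
    ```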

    Server options

    -

    Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    -

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    -

    --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

    -

    --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

    -

    --baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

    -

    --template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to server pages:

    +

    Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    +

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    +

    --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

    +

    --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

    +

    --baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

    +

    --template allows a user to specify a custom markup template for HTTP and WebDAV serve functions. The server exports the following markup to be used within the template to serve pages:

    @@ -3466,23 +3601,23 @@ htpasswd -B htpasswd anotherUser

    Authentication

    By default this will serve files without needing a login.

    -

    You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.

    -

    Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

    +

    You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.

    +

    Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

    To create an htpasswd file:

    touch htpasswd
     htpasswd -B htpasswd user
     htpasswd -B htpasswd anotherUser

    The password file can be updated while rclone is running.

    -

    Use --realm to set the authentication realm.

    +

    Use --realm to set the authentication realm.

    SSL/TLS

    -

    By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

    -

    --cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

    +

    By default this will serve over HTTP. If you want you can serve over HTTPS. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

    +

    --cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
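    A sketch of an HTTPS invocation; cert.pem, key.pem and ca.pem are placeholder file names:

    ```shell
    # Plain HTTPS
    rclone serve webdav remote:path --cert cert.pem --key key.pem

    # HTTPS with client side certificate validation
    rclone serve webdav remote:path --cert cert.pem --key key.pem --client-ca ca.pem
    ```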

    VFS - Virtual File System

    This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

    Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

    The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

    VFS Directory Cache

    -

    Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.

    +

    Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

    --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
     --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

    However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

    @@ -3545,6 +3680,19 @@ htpasswd -B htpasswd anotherUser

    When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

    When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.

    IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

    +

    Fingerprinting

    +

    Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file. Fingerprints are made from:

    +

    • size
    • modification time
    • hash

    where available on an object.

    +

    On some backends some of these attributes are slow to read (they take an extra API call per object, or extra work per object).

    +

    For example, hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.

    +

    If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.

    +

    If you are running a VFS cache over the local, s3 or swift backends then using this flag is recommended.

    +

    Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again.

    VFS Chunked Reading

    When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.

    These flags control the chunking:

    @@ -3559,20 +3707,23 @@ htpasswd -B htpasswd anotherUser
    --no-checksum     Don't compare checksums on up/download.
     --no-modtime      Don't read/write the modification time (can speed things up).
     --no-seek         Don't allow seeking in files.
    ---read-only       Mount read-only.
    +--read-only       Only allow read-only access.

    Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
     --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)
    -

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers have no effect on mount).

    +

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    --transfers int  Number of file transfers to run in parallel (default 4)

    VFS Case Sensitivity

    Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

    File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

    Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

    -

    The --vfs-case-insensitive mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.

    -

    The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.

    -

    Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

    +

    The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.

    +

    The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote.

    +

    Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

    If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

    +

    VFS Disk Options

    +

    This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

    +
    --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

    Alternate report of used bytes

    Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

    WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.

    @@ -3628,7 +3779,7 @@ htpasswd -B htpasswd anotherUser
          --no-seek                                Don't allow seeking in files
          --pass string                            Password for authentication
          --poll-interval duration                 Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
    -     --read-only                              Mount read-only
    +     --read-only                              Only allow read-only access
          --realm string                           Realm for authentication (default "rclone")
          --server-read-timeout duration           Timeout for server reading data (default 1h0m0s)
          --server-write-timeout duration          Timeout for server writing data (default 1h0m0s)
    @@ -3641,6 +3792,8 @@ htpasswd -B htpasswd anotherUser
          --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
          --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects (default 1m0s)
          --vfs-case-insensitive                   If a file name not found, find a case insensitive match
    +     --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
    +     --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
          --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full
          --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
          --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
    @@ -3655,7 +3808,7 @@ htpasswd -B htpasswd anotherUser

    rclone settier

    Changes storage class/tier of objects in remote.

    -

    Synopsis

    +

    Synopsis

    rclone settier changes the storage tier or class of objects at the remote, if supported. A few cloud storage services provide different storage classes for objects, for example AWS S3 and Glacier; Azure Blob storage with Hot, Cool and Archive; and Google Cloud Storage with Regional Storage, Nearline, Coldline etc.

    Note that certain tier changes make objects unavailable for immediate access. For example, tiering to archive in Azure Blob storage puts objects in a frozen state; the user can restore them by setting the tier to Hot/Cool. Similarly, moving S3 objects to Glacier makes them inaccessible.

    You can use it to tier a single object
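    For instance (tier names vary by provider; Cool here is an Azure Blob example):

    ```shell
    rclone settier Cool remote:path/file.txt
    ```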

    @@ -3674,7 +3827,7 @@ htpasswd -B htpasswd anotherUser

    rclone test

    Run a test command

    -

    Synopsis

    +

    Synopsis

    Rclone test is used to run test commands.

    Select which test command you want with the subcommand, e.g.

    rclone test memory remote:
    @@ -3689,6 +3842,7 @@ htpasswd -B htpasswd anotherUser
  • rclone test changenotify - Log any change notify requests for the remote passed in.
  • rclone test histogram - Makes a histogram of file name characters.
  • rclone test info - Discovers file name or other limitations for paths.
  +
  • rclone test makefile - Make files with random contents of the size given
  • rclone test makefiles - Make a random file hierarchy in a directory
  • rclone test memory - Load all the objects at remote:path into memory and report memory stats.
    @@ -3705,7 +3859,7 @@ htpasswd -B htpasswd anotherUser

    rclone test histogram

    Makes a histogram of file name characters.

    -

    Synopsis

    +

    Synopsis

    This command outputs JSON which shows the histogram of characters used in filenames in the remote:path specified.

    The data doesn't contain any identifying information but is useful for the rclone developers when developing filename compression.

    rclone test histogram [remote:path] [flags]
    @@ -3718,7 +3872,7 @@ htpasswd -B htpasswd anotherUser

    rclone test info

    Discovers file name or other limitations for paths.

    -

    Synopsis

    +

    Synopsis

    rclone info discovers what filenames and upload methods are possible to write to the paths passed in and how long they can be. It can take some time. It will write test files into the remote:path passed in. It outputs a bit of Go code for each one.

    NB this can create undeletable files and other hazards - use with care

    rclone test info [remote:path]+ [flags]

    rclone test makefile


    Make files with random contents of the size given

    rclone test makefile <size> [<file>]+ [flags]

    Options

      --ascii      Fill files with random ASCII printable bytes only
      --chargen    Fill files with an ASCII chargen pattern
  -h, --help       help for makefile
      --pattern    Fill files with a periodic pattern
      --seed int   Seed for the random number generator (0 for random) (default 1)
      --sparse     Make the files sparse (appear to be filled with ASCII 0x00)
      --zero       Fill files with ASCII 0x00

    See the global flags page for global options not listed here.


    SEE ALSO


    rclone test makefiles

    Make a random file hierarchy in a directory

    rclone test makefiles <dir> [flags]
Options

      --ascii                      Fill files with random ASCII printable bytes only
      --chargen                    Fill files with an ASCII chargen pattern
      --files int                  Number of files to create (default 1000)
      --files-per-directory int    Average number of files per directory (default 10)
  -h, --help                       help for makefiles
      --max-file-size SizeSuffix   Maximum size of files to create (default 100)
      --max-name-length int        Maximum size of file names (default 12)
      --min-file-size SizeSuffix   Minimum size of file to create
      --min-name-length int        Minimum size of file names (default 4)
      --pattern                    Fill files with a periodic pattern
      --seed int                   Seed for the random number generator (0 for random) (default 1)
      --sparse                     Make the files sparse (appear to be filled with ASCII 0x00)
      --zero                       Fill files with ASCII 0x00

    See the global flags page for global options not listed here.

SEE ALSO

    rclone test memory

    Load all the objects at remote:path into memory and report memory stats.

    rclone test memory remote:path [flags]
Options

      -h, --help   help for memory

    See the global flags page for global options not listed here.

SEE ALSO

    rclone touch

    Create new file or change file modification time.

Synopsis

    Set the modification time on file(s) as specified by remote:path to have the current time.

    If remote:path does not exist then a zero sized file will be created, unless --no-create or --recursive is provided.

If --recursive is used then it recursively sets the modification time on all existing files found under the path. Filters are supported, and you can test with the --dry-run or the --interactive flag.


Note that the value of --timestamp is in UTC. If you want local time then add the --localtime flag.

    rclone touch remote:path [flags]
Options

      -h, --help               help for touch
           --localtime          Use localtime for timestamp, not UTC
       -C, --no-create          Do not create the file if it does not exist (implied with --recursive)
       -R, --recursive          Recursively touch all files
       -t, --timestamp string   Use specified time instead of the current time of day

    See the global flags page for global options not listed here.

SEE ALSO

    rclone tree

    List the contents of the remote in a tree like fashion.

Synopsis

    rclone tree lists the contents of a remote in a similar way to the unix tree command.

    For example

    $ rclone tree remote:path
└── file5

1 directories, 5 files

You can use any of the filtering options with the tree command (e.g. --include and --exclude). You can also use --fast-list.

The tree command has many options for controlling the listing which are compatible with the tree command, for example you can include file sizes with --size. Note that not all of them have short options as they conflict with rclone's short options.

    For a more interactive navigation of the remote see the ncdu command.

    rclone tree remote:path [flags]
Options

      -a, --all             All files are listed (list . files too)
       -C, --color           Turn colorization on always
       -d, --dirs-only       List directories only
  -U, --unsorted        Leave files unsorted
      --version         Sort files alphanumerically by version

    See the global flags page for global options not listed here.

SEE ALSO


    This can be used when scripting to make aged backups efficiently, e.g.

    rclone sync -i remote:current-backup remote:previous-backup
     rclone sync -i /path/to/files remote:current-backup
Metadata support

Metadata is data about a file which isn't the contents of the file. Normally rclone only preserves the modification time and the content (MIME) type where possible.

Rclone supports preserving all the available metadata on files (not directories) when using the --metadata or -M flag.

Exactly what metadata is supported and what that support means depends on the backend. Backends that support metadata have a metadata section in their docs and are listed in the features table (e.g. local, s3).

Rclone only supports a one-time sync of metadata. This means that metadata will be synced from the source object to the destination object only when the source object has changed and needs to be re-uploaded. If the metadata subsequently changes on the source object without changing the object itself then it won't be synced to the destination object. This is in line with the way rclone syncs Content-Type without the --metadata flag.

Using --metadata when syncing from local to local will preserve file attributes such as file mode, owner and extended attributes (not Windows).

Note that arbitrary metadata may be added to objects using the --metadata-set key=value flag when the object is first uploaded. This flag can be repeated as many times as necessary.

Types of metadata

Metadata is divided into two types: system metadata and user metadata.

Metadata which the backend uses itself is called system metadata. For example, on the local backend the system metadata uid will store the user ID of the file when used on a unix based platform.

Arbitrary metadata is called user metadata and this can be set however is desired.

When objects are copied from backend to backend, the destination will attempt to interpret system metadata if it is supplied. Metadata may change from being user metadata to system metadata as objects are copied between different backends. For example, copying an object from s3 sets the content-type metadata. In a backend which understands this (like azureblob) this will become the Content-Type of the object. In a backend which doesn't understand this (like the local backend) this will become user metadata. However, should the local object be copied back to s3, the Content-Type will be set correctly.

Metadata framework

Rclone implements a metadata framework which can read metadata from an object and write it to the object when (and only when) it is being uploaded.

This metadata is stored as a dictionary with string keys and string values.

There are some limits on the names of the keys (these may be clarified further in the future).

Each backend can provide system metadata that it understands. Some backends can also store arbitrary user metadata.

Where possible the key names are standardized, so, for example, it is possible to copy object metadata from s3 to azureblob and have the metadata translated appropriately.

Some backends have limits on the size of the metadata and rclone will give errors on upload if they are exceeded.

Metadata preservation

The goal of the implementation is to:

1. Preserve metadata if at all possible
2. Interpret metadata if at all possible

The consequence of 1 is that you can copy an S3 object to a local disk then back to S3 losslessly. Likewise you can copy a local file with file attributes and xattrs from local disk to s3 and back again losslessly.

The consequence of 2 is that you can copy an S3 object with metadata to Azureblob (say) and have the metadata appear on the Azureblob object also.

Standard system metadata

Here is a table of standard system metadata which, if appropriate, a backend may implement.

key                   description                                 example
mode                  File type and mode: octal, unix style       0100664
uid                   User ID of owner: decimal number            500
gid                   Group ID of owner: decimal number           500
rdev                  Device ID (if special file): hexadecimal    0
atime                 Time of last access: RFC 3339               2006-01-02T15:04:05.999999999Z07:00
mtime                 Time of last modification: RFC 3339         2006-01-02T15:04:05.999999999Z07:00
btime                 Time of file creation (birth): RFC 3339     2006-01-02T15:04:05.999999999Z07:00
cache-control         Cache-Control header                        no-cache
content-disposition   Content-Disposition header                  inline
content-encoding      Content-Encoding header                     gzip
content-language      Content-Language header                     en-US
content-type          Content-Type header                         text/plain

    The metadata keys mtime and content-type will take precedence if supplied in the metadata over reading the Content-Type or modification time of the source object.


    Hashes are not included in system metadata as there is a well defined way of reading those already.


    Options

    Rclone has a number of options to control its behaviour.

    Options that take parameters can have the values passed in two ways, --option=value or --option value. However boolean (true/false) options behave slightly differently to the other options in that --boolean sets the option to true and the absence of the flag sets it to false. It is also possible to specify --boolean=false or --boolean=true. Note that --boolean false is not valid - this is parsed as --boolean and the false is parsed as an extra command line argument for rclone.

    Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
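To make the duration format concrete, here is a toy helper (not part of rclone, which uses Go's time.ParseDuration internally; that parser also handles ns/us/ms units, fractions and signs) that sums up h/m/s components into seconds:

```shell
# Hypothetical helper for illustration only: "2h45m" -> 9900 seconds.
to_seconds() {
  echo "$1" | awk '
  {
    n = 0
    while (match($0, /[0-9]+[hms]/)) {
      part = substr($0, RSTART, RLENGTH)
      unit = substr(part, RLENGTH, 1)            # trailing unit letter
      val  = substr(part, 1, RLENGTH - 1) + 0    # numeric component
      if (unit == "h")      n += val * 3600
      else if (unit == "m") n += val * 60
      else                  n += val
      $0 = substr($0, RSTART + RLENGTH)          # consume the component
    }
    print n
  }'
}

to_seconds 2h45m   # 9900
to_seconds 90s     # 90
```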


    It can also be useful to ensure perfect ordering when using --order-by.

    Using this flag can use more memory as it effectively sets --max-backlog to infinite. This means that all the info on the objects to transfer is held in memory before the transfers start.

    --checkers=N

Originally this controlled just the number of file checkers to run in parallel, e.g. by rclone copy. It is now a fairly universal parallelism control used by rclone in several places.

Note: checkers do the equality checking of files during a sync. For some storage systems (e.g. S3, Swift, Dropbox) this can take a significant amount of time so they are run in parallel.

The default is to run 8 checkers in parallel. However, in case of slow-reacting backends you may need to lower (rather than increase) this default by setting --checkers to 4 or fewer threads. This is especially advised if you are experiencing backend server crashes during the file checking phase (e.g. on subsequent or top-up backups where little or no file copying is done and checking takes up most of the time). Increase this setting only with utmost care, while monitoring your server health and file checking throughput.

    -c, --checksum

    Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check the file hash and size to determine if files are equal.

    This is useful when the remote doesn't support setting modified time and a more accurate sync is desired than just checking the file size.


    The remote in use must support server-side copy and you must use the same remote as the destination of the sync. The compare directory must not overlap the destination directory.

    See --compare-dest and --backup-dir.

    --dedupe-mode MODE

Mode to run dedupe command in. One of interactive, skip, first, newest, oldest, rename. The default is interactive. See the dedupe command for more information as to what these options mean.

    --disable FEATURE,FEATURE,...

    This disables a comma separated list of optional features. For example to disable server-side move and server-side copy use:

    --disable move,copy

    --human-readable

    Rclone commands output values for sizes (e.g. number of bytes) and counts (e.g. number of files) either as raw numbers, or in human-readable format.

In human-readable format the values are scaled to larger units, indicated with a suffix shown after the value, and rounded to three decimals. Rclone consistently uses binary units (powers of 2) for sizes and decimal units (powers of 10) for counts. The unit prefix for size follows IEC standard notation, e.g. Ki for kibi; used with the byte unit, 1 KiB means 1024 bytes. In list type output, only the unit prefix is appended to the value (e.g. 9.762Ki), while in more textual output the full unit is shown (e.g. 9.762 KiB). For counts the SI standard notation is used, e.g. prefix k for kilo; used with file counts, 1k means 1000 files.
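The two scalings can be checked with a quick calculation (the byte and file counts below are made-up sample values, not rclone output):

```shell
# Sizes use binary units (IEC): 1 Ki = 1024 bytes, rounded to three decimals.
awk 'BEGIN { printf "%.3fKi\n", 9996 / 1024 }'   # 9.762Ki

# Counts use decimal units (SI): 1 k = 1000.
awk 'BEGIN { printf "%dk\n", 5000 / 1000 }'      # 5k
```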

The various list commands output raw numbers by default. Option --human-readable will make them output values in human-readable format instead (with the short unit prefix).

The about command outputs human-readable by default, with a command-specific option --full to output the raw numbers instead.

Command size outputs both human-readable and raw numbers in the same output.

The tree command also considers --human-readable, but it will not use the exact same notation as the other commands: It rounds to one decimal, and uses single letter suffix, e.g. K instead of Ki. The reason for this is that it relies on an external library.

The interactive command ncdu shows human-readable by default, and responds to key u for toggling human-readable format.

    --ignore-case-sync

    Using this option will cause rclone to ignore the case of the files when synchronizing so files will not be copied/synced when the existing filenames are the same, even if the casing is different.

    --ignore-checksum


    Rclone will stop transferring when it has reached the size specified. Defaults to off.

    When the limit is reached all transfers will stop immediately.

    Rclone will exit with exit code 8 if the transfer limit is reached.

--metadata / -M

Setting this flag enables rclone to copy the metadata from the source to the destination. For local backends this is ownership, permissions, xattr etc. See the Metadata section for more info.

--metadata-set key=value

Add metadata key = value when uploading. This can be repeated as many times as required. See the Metadata section for more info.

    --cutoff-mode=hard|soft|cautious

This modifies the behavior of --max-transfer. Defaults to --cutoff-mode=hard.

    Specifying --cutoff-mode=hard will stop transferring immediately when Rclone reaches the limit.


    --transfers=N

    The number of file transfers to run in parallel. It can sometimes be useful to set this to a smaller number if the remote is giving a lot of timeouts or bigger if you have lots of bandwidth and a fast remote.

    The default is to run 4 file transfers in parallel.


    Look at --multi-thread-streams if you would like to control single file transfers.

    -u, --update

    This forces rclone to skip any files which exist on the destination and have a modified time that is newer than the source file.

    This can be useful in avoiding needless transfers when transferring to a remote which doesn't support modification times directly (or when using --use-server-modtime to avoid extra API calls) as it is more accurate than a --size-only check and faster than using --checksum. On such remotes (or when using --use-server-modtime) the time checked will be the uploaded time.


    -v, -vv, --verbose

    With -v rclone will tell you about each file that is transferred and a small number of significant events.

    With -vv rclone will become very verbose telling you about every file it considers and transfers. Please send bug reports with a log with this setting.


    When setting verbosity as an environment variable, use RCLONE_VERBOSE=1 or RCLONE_VERBOSE=2 for -v and -vv respectively.

    -V, --version

    Prints the version number

    SSL/TLS options

  • --filter-from
  • --exclude
  • --exclude-from
  • --exclude-if-present
  • --include
  • --include-from
  • --files-from

    Environment Variables

    Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.

Options

    Every option in rclone can have its default set by environment variable.

    To find the name of the environment variable, first, take the long option name, strip the leading --, change - to _, make upper case and prepend RCLONE_.

    For example, to always set --stats 5s, set the environment variable RCLONE_STATS=5s. If you set stats on the command line this will override the environment variable setting.

    Or to always use the trash in drive --drive-use-trash, set RCLONE_DRIVE_USE_TRASH=true.
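The renaming rule can be sketched in shell (a toy helper for illustration, not something rclone ships):

```shell
# Strip the leading "--", map "-" to "_", upper-case, and prepend "RCLONE_".
opt_to_env() {
  printf 'RCLONE_%s\n' "$(printf '%s' "${1#--}" | tr 'a-z-' 'A-Z_')"
}

opt_to_env --stats            # RCLONE_STATS
opt_to_env --drive-use-trash  # RCLONE_DRIVE_USE_TRASH
```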


Verbosity is slightly different: the environment variable equivalent of --verbose or -v is RCLONE_VERBOSE=1, or for -vv, RCLONE_VERBOSE=2.

    The same parser is used for the options and the environment variables so they take exactly the same form.

    The options set by environment variables can be seen with the -vv flag, e.g. rclone version -vv.

    Config file


    Now transfer it to the remote box (scp, cut paste, ftp, sftp, etc.) and place it in the correct place (use rclone config file on the remote box to find out where).


    Configuring using SSH Tunnel


Linux and macOS users can use an SSH tunnel to redirect port 53682 on the headless box to the local machine with the following command:

    ssh -L localhost:53682:localhost:53682 username@remote_server

    Then on the headless box run rclone config and answer Y to the Use auto config? question.

...
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes (default)
n) No
y/n> y

    Then copy and paste the auth url http://127.0.0.1:53682/auth?state=xxxxxxxxxxxx to the browser on your local machine, complete the auth and it is done.

    Filtering, includes and excludes

    Filter flags determine which files rclone sync, move, ls, lsl, md5sum, sha1sum, size, delete, check and similar commands apply to.

    They are specified in terms of path/file name patterns; path/file lists; file age and size, or presence of a file in a directory. Bucket based remotes without the concept of directory apply filters to object key, age and size in an analogous way.


    Using regular expressions in filter patterns

    The syntax of filter patterns is glob style matching (like bash uses) to make things easy for users. However this does not provide absolute control over the matching, so for advanced users rclone also provides a regular expression syntax.

The regular expressions used are as defined in the Go regular expression reference. Regular expressions should be enclosed in {{ }}. They will match only the last path segment if the glob doesn't start with / or the whole path name if it does. Note that rclone does not attempt to parse the supplied regular expression, meaning that using any regular expression filter will prevent rclone from using directory filter rules, as it will instead check every path against the supplied regular expression(s).

Here is how the {{regexp}} is transformed into a full regular expression to match the entire path:

    {{regexp}}  becomes (^|/)(regexp)$
     /{{regexp}} becomes ^(regexp)$
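For example, the filter {{.*\.bak}} becomes the path regex (^|/)(.*\.bak)$, which can be tried out against sample paths with grep standing in for rclone's matcher:

```shell
# Two of the three sample paths end in .bak, so the expanded regex
# matches exactly two lines.
printf '%s\n' dir/old.bak old.bak dir/old.txt \
  | grep -cE '(^|/)(.*\.bak)$'   # 2
```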

    Any path/file included at that stage is processed by the rclone command.

--files-from and --files-from-raw flags override and cannot be combined with other filter options.

To see the internal combined rule list, in regular expression form, for a command add the --dump filters flag. Running an rclone command with --dump filters and -vv flags lists the internal filter elements and shows how they are applied to each source path/file. There is not currently a means provided to pass regular expression filter options into rclone directly, though character class filter rules contain character classes (see the Go regular expression reference).

How filter rules are applied to directories

    Rclone commands are applied to path/file names not directories. The entire contents of a directory can be matched to a filter by the pattern directory/* or recursively by directory/**.
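Roughly speaking, * matches within a single path segment while ** also crosses segments. The distinction can be demonstrated with grep standing in for the matcher (the regex translation here is an approximation for illustration, not rclone's exact implementation):

```shell
# directory/* -> one path segment directly below directory
printf '%s\n' directory/a directory/sub/b | grep -cE '^directory/[^/]*$'   # 1

# directory/** -> anything below directory, recursively
printf '%s\n' directory/a directory/sub/b | grep -cE '^directory/.*$'      # 2
```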

    Directory filter rules are defined with a closing / separator.

    E.g. /directory/subdirectory/ is an rclone directory filter rule.

    Rclone commands can use directory filter rules to determine whether they recurse into subdirectories. This potentially optimises access to a remote by avoiding listing unnecessary directories. Whether optimisation is desirable depends on the specific filter rules and source remote content.


    If any regular expression filters are in use, then no directory recursion optimisation is possible, as rclone must check every path against the supplied regular expression(s).

    Directory recursion optimisation occurs if either:


    Dumps the defined filters to standard output in regular expression format.

    Useful for debugging.

    Exclude directory based on a file

The --exclude-if-present flag controls whether a directory is within the scope of an rclone command based on the presence of a named file within it. The flag can be repeated to check for multiple file names; presence of any of them will exclude the directory.

    This flag has a priority over other filter flags.

    E.g. for the following directory structure:

    dir1/file1
    @@ -5165,7 +5477,6 @@ dir1/dir2/file2
     dir1/dir2/dir3/file3
     dir1/dir2/dir3/.ignore

    The command rclone ls --exclude-if-present .ignore dir1 does not list dir3, file3 or .ignore.
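The effect can be reproduced with plain shell tools (this sketch is not rclone; it just mimics the rule of skipping files whose directory contains .ignore):

```shell
# Build the example tree, then list files whose directory has no .ignore.
demo=$(mktemp -d)
cd "$demo"
mkdir -p dir1/dir2/dir3
touch dir1/file1 dir1/dir2/file2 dir1/dir2/dir3/file3 dir1/dir2/dir3/.ignore

visible=$(find dir1 -type f | sort | while IFS= read -r f; do
  [ -e "$(dirname "$f")/.ignore" ] || printf '%s\n' "$f"
done)
printf '%s\n' "$visible"
# dir1/dir2/file2
# dir1/file1
```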


    Common pitfalls

    The most frequent filter support issues on the rclone forum are:

See the config create command for more information on the above.

    Authentication is required for this call.

    config/delete: Delete a remote in the config file.

    Parameters:

See the config delete command for more information on the above.

    Authentication is required for this call.

    config/dump: Dumps the config file.

    Returns a JSON object: - key: value

    Where keys are remote names and values are the config parameters.

See the config dump command for more information on the above.

    Authentication is required for this call.

    config/get: Get a remote in the config file.

    Parameters:

See the config dump command for more information on the above.

    Authentication is required for this call.

    config/listremotes: Lists the remotes in the config file.

    Returns - remotes - array of remote names

See the listremotes command for more information on the above.

    Authentication is required for this call.

    config/password: password the config for a remote.

    This takes the following parameters:

  • name - name of remote
  • parameters - a map of { "key": "value" } pairs
See the config password command for more information on the above.

    Authentication is required for this call.

    config/providers: Shows how providers are configured in the config file.

    Returns a JSON object: - providers - array of objects

See the config providers command for more information on the above.

    Authentication is required for this call.

    config/update: update the config for a remote.

    This takes the following parameters:

  • result - result to restart with - used with continue
See the config update command for more information on the above.

    Authentication is required for this call.

    core/bwlimit: Set the bandwidth limit.

    This sets the bandwidth limit to the string passed in. This should be a single bandwidth limit entry or a pair of upload:download bandwidth.

  • fs - a remote name string e.g. "drive:"
  • The result is as returned from rclone about --json

See the about command for more information on the above.

    Authentication is required for this call.

    operations/cleanup: Remove trashed files in the remote or path

    This takes the following parameters:

See the cleanup command for more information on the above.

    Authentication is required for this call.

    operations/copyfile: Copy a file from source remote to destination remote

    This takes the following parameters:

  • fs - a remote name string e.g. "drive:"
  • remote - a path within that remote e.g. "dir"
  • url - string, URL to read from
• autoFilename - boolean, set to true to retrieve destination file name from url

    See the copyurl command for more information on the above.

    Authentication is required for this call.

    operations/delete: Remove files in the path

    This takes the following parameters:

See the delete command for more information on the above.

    Authentication is required for this call.

    operations/deletefile: Remove the single file pointed to

    This takes the following parameters:

  • fs - a remote name string e.g. "drive:"
  • remote - a path within that remote e.g. "dir"
See the deletefile command for more information on the above.

    Authentication is required for this call.

    operations/fsinfo: Return information about the remote

    This takes the following parameters:


    This returns info about the remote passed in;

    {
    -    // optional features and whether they are available or not
    -    "Features": {
    -        "About": true,
    -        "BucketBased": false,
    -        "CanHaveEmptyDirectories": true,
    -        "CaseInsensitive": false,
    -        "ChangeNotify": false,
    -        "CleanUp": false,
    -        "Copy": false,
    -        "DirCacheFlush": false,
    -        "DirMove": true,
    -        "DuplicateFiles": false,
    -        "GetTier": false,
    -        "ListR": false,
    -        "MergeDirs": false,
    -        "Move": true,
    -        "OpenWriterAt": true,
    -        "PublicLink": false,
    -        "Purge": true,
    -        "PutStream": true,
    -        "PutUnchecked": false,
    -        "ReadMimeType": false,
    -        "ServerSideAcrossConfigs": false,
    -        "SetTier": false,
    -        "SetWrapper": false,
    -        "UnWrap": false,
    -        "WrapFs": false,
    -        "WriteMimeType": false
    -    },
    -    // Names of hashes available
    -    "Hashes": [
    -        "MD5",
    -        "SHA-1",
    -        "DropboxHash",
    -        "QuickXorHash"
    -    ],
    -    "Name": "local",    // Name as created
    -    "Precision": 1,     // Precision of timestamps in ns
    -    "Root": "/",        // Path as created
    -    "String": "Local file system at /" // how the remote will appear in logs
    +        // optional features and whether they are available or not
    +        "Features": {
    +                "About": true,
    +                "BucketBased": false,
    +                "BucketBasedRootOK": false,
    +                "CanHaveEmptyDirectories": true,
    +                "CaseInsensitive": false,
    +                "ChangeNotify": false,
    +                "CleanUp": false,
    +                "Command": true,
    +                "Copy": false,
    +                "DirCacheFlush": false,
    +                "DirMove": true,
    +                "Disconnect": false,
    +                "DuplicateFiles": false,
    +                "GetTier": false,
    +                "IsLocal": true,
    +                "ListR": false,
    +                "MergeDirs": false,
    +                "MetadataInfo": true,
    +                "Move": true,
    +                "OpenWriterAt": true,
    +                "PublicLink": false,
    +                "Purge": true,
    +                "PutStream": true,
    +                "PutUnchecked": false,
    +                "ReadMetadata": true,
    +                "ReadMimeType": false,
    +                "ServerSideAcrossConfigs": false,
    +                "SetTier": false,
    +                "SetWrapper": false,
    +                "Shutdown": false,
    +                "SlowHash": true,
    +                "SlowModTime": false,
    +                "UnWrap": false,
    +                "UserInfo": false,
    +                "UserMetadata": true,
    +                "WrapFs": false,
    +                "WriteMetadata": true,
    +                "WriteMimeType": false
    +        },
    +        // Names of hashes available
    +        "Hashes": [
    +                "md5",
    +                "sha1",
    +                "whirlpool",
    +                "crc32",
    +                "sha256",
    +                "dropbox",
    +                "mailru",
    +                "quickxor"
    +        ],
    +        "Name": "local",        // Name as created
    +        "Precision": 1,         // Precision of timestamps in ns
    +        "Root": "/",            // Path as created
    +        "String": "Local file system at /", // how the remote will appear in logs
    +        // Information about the system metadata for this backend
    +        "MetadataInfo": {
    +                "System": {
    +                        "atime": {
    +                                "Help": "Time of last access",
    +                                "Type": "RFC 3339",
    +                                "Example": "2006-01-02T15:04:05.999999999Z07:00"
    +                        },
    +                        "btime": {
    +                                "Help": "Time of file birth (creation)",
    +                                "Type": "RFC 3339",
    +                                "Example": "2006-01-02T15:04:05.999999999Z07:00"
    +                        },
    +                        "gid": {
    +                                "Help": "Group ID of owner",
    +                                "Type": "decimal number",
    +                                "Example": "500"
    +                        },
    +                        "mode": {
    +                                "Help": "File type and mode",
    +                                "Type": "octal, unix style",
    +                                "Example": "0100664"
    +                        },
    +                        "mtime": {
    +                                "Help": "Time of last modification",
    +                                "Type": "RFC 3339",
    +                                "Example": "2006-01-02T15:04:05.999999999Z07:00"
    +                        },
    +                        "rdev": {
    +                                "Help": "Device ID (if special file)",
    +                                "Type": "hexadecimal",
    +                                "Example": "1abc"
    +                        },
    +                        "uid": {
    +                                "Help": "User ID of owner",
    +                                "Type": "decimal number",
    +                                "Example": "500"
    +                        }
    +                },
    +                "Help": "Textual help string\n"
    +        }
     }

    This command does not have a command line equivalent so use this instead:

    rclone rc --loopback operations/fsinfo fs=remote:
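A client can use the fsinfo response to decide which optional operations to attempt before calling them. A minimal sketch — the `info` dict below is an abbreviated, illustrative copy of the response shape shown above, not a real query result:

```python
# Abbreviated, illustrative operations/fsinfo response (shape as documented).
info = {
    "Features": {"About": True, "Copy": False, "SlowHash": True},
    "Hashes": ["md5", "sha1", "quickxor"],
    "Name": "local",
    "Precision": 1,
}

def supports(info, feature):
    """True if the remote advertises the named optional feature."""
    return bool(info.get("Features", {}).get(feature, False))

print(supports(info, "About"))  # True
print(supports(info, "Copy"))   # False
```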
    @@ -5989,6 +6363,7 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"Cache
  • noMimeType - If set don't show mime types
  • dirsOnly - If set only show directories
  • filesOnly - If set only show files
  • +metadata - If set return metadata of objects also
  • hashTypes - array of strings of hash types to show if showHash set
    @@ -5999,7 +6374,7 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"Cache

    This is an array of objects as described in the lsjson command

    -See the lsjson command for more information on the above and examples.
    +See the lsjson command for more information on the above and examples.

    Authentication is required for this call.
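The `opt` block for operations/list takes the lsjson options. As a sketch of the request body (the option names follow the lsjson documentation; `recurse` and `filesOnly` are standard lsjson options, and the values are illustrative):

```python
# Request body for operations/list with the new `metadata` option set.
params = {
    "fs": "remote:",
    "remote": "dir",
    "opt": {
        "recurse": False,     # lsjson option: don't descend into subdirectories
        "filesOnly": True,    # lsjson option: omit directories from the listing
        "metadata": True,     # new in this revision: include object metadata
        "hashTypes": ["md5"], # hashes to show if showHash is set
    },
}
print(sorted(params["opt"]))
```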

    operations/mkdir: Make a destination directory or container

    This takes the following parameters:

    @@ -6007,7 +6382,7 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"Cache
  • fs - a remote name string e.g. "drive:"
  • remote - a path within that remote e.g. "dir"
  • -

    See the mkdir command command for more information on the above.

    +

    See the mkdir command for more information on the above.

    Authentication is required for this call.

    operations/movefile: Move a file from source remote to destination remote

    This takes the following parameters:

    @@ -6030,7 +6405,7 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"Cache
    -See the link command command for more information on the above.
    +See the link command for more information on the above.

    Authentication is required for this call.

    operations/purge: Remove a directory or container and all of its contents

    This takes the following parameters:

    @@ -6038,7 +6413,7 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"Cache
  • fs - a remote name string e.g. "drive:"
  • remote - a path within that remote e.g. "dir"
  • -

    See the purge command command for more information on the above.

    +

    See the purge command for more information on the above.

    Authentication is required for this call.

    operations/rmdir: Remove an empty directory or container

    This takes the following parameters:

    @@ -6046,15 +6421,16 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"Cache
  • fs - a remote name string e.g. "drive:"
  • remote - a path within that remote e.g. "dir"
  • -

    See the rmdir command command for more information on the above.

    +

    See the rmdir command for more information on the above.

    Authentication is required for this call.

    operations/rmdirs: Remove all the empty directories in the path

    This takes the following parameters:

    +See the rmdirs command for more information on the above.

    Authentication is required for this call.

    operations/size: Count the number of bytes and files in remote

    This takes the following parameters:

    @@ -6066,7 +6442,7 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"Cache
  • count - number of files
  • bytes - number of bytes in those files
  • -

    See the size command command for more information on the above.

    +

    See the size command for more information on the above.

    Authentication is required for this call.

    operations/stat: Give information about the supplied file or directory

    This takes the following parameters:

    @@ -6083,15 +6459,16 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"Cache
  • item - an object as described in the lsjson command. Will be null if not found.
  • Note that if you are only interested in files then it is much more efficient to set the filesOnly flag in the options.

    -See the lsjson command for more information on the above and examples.
    +See the lsjson command for more information on the above and examples.

    Authentication is required for this call.

    operations/uploadfile: Upload file using multipart/form-data

    This takes the following parameters:

    +See the uploadfile command for more information on the above.

    Authentication is required for this call.

    options/blocks: List all the option blocks

    Returns: - options - a list of the options block names

    @@ -6218,7 +6595,7 @@ rclone rc options/set --json '{"main": {"LogLevel": 8}}&
  • dstFs - a remote name string e.g. "drive:dst" for the destination
  • createEmptySrcDirs - create empty src directories on destination if set
  • -

    See the copy command command for more information on the above.

    +

    See the copy command for more information on the above.

    Authentication is required for this call.

    sync/move: move a directory from source remote to destination remote

    This takes the following parameters:

    @@ -6228,7 +6605,7 @@ rclone rc options/set --json '{"main": {"LogLevel": 8}}&
  • createEmptySrcDirs - create empty src directories on destination if set
  • deleteEmptySrcDirs - delete empty src directories if set
  • -

    See the move command command for more information on the above.

    +

    See the move command for more information on the above.

    Authentication is required for this call.

    sync/sync: sync a directory from source remote to destination remote

    This takes the following parameters:

    @@ -6237,7 +6614,7 @@ rclone rc options/set --json '{"main": {"LogLevel": 8}}&
  • dstFs - a remote name string e.g. "drive:dst" for the destination
  • createEmptySrcDirs - create empty src directories on destination if set
  • -

    See the sync command command for more information on the above.

    +

    See the sync command for more information on the above.

    Authentication is required for this call.

    vfs/forget: Forget files or directories in the directory cache.

    This forgets the paths in the directory cache causing them to be re-read from the remote when needed.

    @@ -6428,320 +6805,378 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total

    This hunk updates the features table: the ModTime column switches from Yes/No to the R / R/W / - notation explained below, a Metadata column is added, and HiDrive and Internet Archive are new rows. The updated table:

    Name                          Hash               ModTime   Case Insensitive  Duplicate Files  MIME Type  Metadata
    1Fichier                      Whirlpool          -         No                Yes              R          -
    Akamai Netstorage             MD5, SHA256        R/W       No                No               R          -
    Amazon Drive                  MD5                -         Yes               No               R          -
    Amazon S3 (or S3 compatible)  MD5                R/W       No                No               R/W        RWU
    Backblaze B2                  SHA1               R/W       No                No               R/W        -
    Box                           SHA1               R/W       Yes               No               -          -
    Citrix ShareFile              MD5                R/W       Yes               No               -          -
    Dropbox                       DBHASH ¹           R         Yes               No               -          -
    Enterprise File Fabric        -                  R/W       Yes               No               R/W        -
    FTP                           -                  R/W ¹⁰    No                No               -          -
    Google Cloud Storage          MD5                R/W       No                No               R/W        -
    Google Drive                  MD5                R/W       No                Yes              R/W        -
    Google Photos                 -                  -         No                Yes              R          -
    HDFS                          -                  R/W       No                No               -          -
    HiDrive                       HiDrive ¹²         R/W       No                No               -          -
    HTTP                          -                  R         No                No               R          -
    Hubic                         MD5                R/W       No                No               R/W        -
    Internet Archive              MD5, SHA1, CRC32   R/W ¹¹    No                No               -          RWU
    Jottacloud                    MD5                R/W       Yes               No               R          -
    Koofr                         MD5                -         Yes               No               -          -
    Mail.ru Cloud                 Mailru ⁶           R/W       Yes               No               -          -
    Mega                          -                  -         No                Yes              -          -
    Memory                        MD5                R/W       No                No               -          -
    Microsoft Azure Blob Storage  MD5                R/W       No                No               R/W        -
    Microsoft OneDrive            SHA1 ⁵             R/W       Yes               No               R          -
    OpenDrive                     MD5                R/W       Yes               Partial ⁸        -          -
    OpenStack Swift               MD5                R/W       No                No               R/W        -
    pCloud                        MD5, SHA1 ⁷        R         No                No               W          -
    premiumize.me                 -                  -         Yes               No               R          -
    put.io                        CRC-32             R/W       No                Yes              R          -
    QingStor                      MD5                - ⁹       No                No               R/W        -
    Seafile                       -                  -         No                No               -          -
    SFTP                          MD5, SHA1 ²        R/W       Depends           No               -          -
    Sia                           -                  -         No                No               -          -
    SugarSync                     -                  -         No                No               -          -
    Storj                         -                  R         No                No               -          -
    Uptobox                       -                  -         No                Yes              -          -
    WebDAV                        MD5, SHA1 ³        R ⁴       Depends           No               -          -
    Yandex Disk                   MD5                R/W       No                No               R          -
    Zoho WorkDrive                -                  -         No                No               -          -
    The local filesystem          All                R/W       Depends           No               -          RWU

    @@ -6754,12 +7189,18 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total

    ⁶ Mail.ru uses its own modified SHA1 hash

    ⁷ pCloud only supports SHA1 (not MD5) in its EU region

    ⁸ Opendrive does not support creation of duplicate files using their web client interface or other stock clients, but the underlying storage platform has been determined to allow duplicate files, and it is possible to create them with rclone. It may be that this is a mistake or an unsupported feature.

    +⁹ QingStor does not support SetModTime for objects bigger than 5 GiB.
    +
    +¹⁰ FTP supports modtimes for the major FTP servers, and also others if they advertise the required protocol extensions. See this for more details.
    +
    +¹¹ Internet Archive requires option wait_archive to be set to a non-zero value for full modtime support.
    +
    +¹² HiDrive supports its own custom hash. It combines SHA1 sums for each 4 KiB block hierarchically into a single top-level sum.

    Hash

    The cloud storage system supports various hash types of the objects. The hashes are used when transferring data as an integrity check and can be specifically used with the --checksum flag in syncs and in the check command.

    To use checksums to verify transfers between cloud storage systems, both systems must support a common hash type.
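Choosing a hash type both sides support can be sketched by intersecting the two Hashes lists reported by operations/fsinfo (hash names here are illustrative examples of what fsinfo returns):

```python
def common_hashes(src_hashes, dst_hashes):
    """Hash types both remotes support; order follows the source list."""
    dst = set(dst_hashes)
    return [h for h in src_hashes if h in dst]

# e.g. a local filesystem vs. a Dropbox-like remote
print(common_hashes(["md5", "sha1", "dropbox"], ["dropbox"]))  # ['dropbox']
```

If the result is empty, no common hash exists and a checksum-based comparison between the two systems is not possible.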

    ModTime

    -The cloud storage system supports setting modification times on objects. If it does then this enables a using the modification times as part of the sync. If not then only the size will be checked by default, though the MD5SUM can be checked with the --checksum flag.
    -
    -All cloud storage systems support some kind of date on the object and these will be set when transferring from the cloud storage system.
    +Almost all cloud storage systems store some sort of timestamp on objects, but for several of them it is not something that is appropriate to use for syncing. E.g. some backends will only write a timestamp that represents the time of the upload. To be relevant for syncing, a backend should be able to store the modification time of the source object. If this is not the case, rclone will only check the file size by default, though it can be configured to check the file hash (with the --checksum flag). Ideally it should also be possible to change the timestamp of an existing file without having to re-upload it.
    +
    +For storage systems with a - in the ModTime column, the modification time read on objects is not the modification time of the file when uploaded. It is most likely the time the file was uploaded, or possibly something else (like the time the picture was taken in Google Photos).
    +
    +For storage systems with an R (for read-only) in the ModTime column, modification times are kept on objects and updated when uploading, but changing only the modification time (the SetModTime operation) is not supported without re-uploading, possibly not even without deleting the existing object first. Some operations in rclone, such as the copy and sync commands, will automatically check for SetModTime support and re-upload if necessary to keep the modification times in sync. Other commands will not work without SetModTime support, e.g. the touch command on an existing file will fail, and changes to the modification time only on files in a mount will be silently ignored.
    +
    +Storage systems with R/W (for read/write) in the ModTime column also support modtime-only operations.

    Case Insensitive

    If a cloud storage system is case sensitive then it is possible to have two files which differ only in case, e.g. file.txt and FILE.txt. If a cloud storage system is case insensitive then that isn't possible.

    This can cause problems when syncing between a case insensitive system and a case sensitive system. The symptom of this is that no matter how many times you run the sync it never completes fully.
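The collision behind such a never-completing sync can be spotted by grouping names case-insensitively; a small illustrative check (the filenames are made up):

```python
from collections import defaultdict

def case_collisions(names):
    """Group names that differ only in case -- these clash on a
    case-insensitive remote and can make a sync loop forever."""
    groups = defaultdict(list)
    for n in names:
        groups[n.lower()].append(n)
    return [g for g in groups.values() if len(g) > 1]

print(case_collisions(["file.txt", "FILE.txt", "other.txt"]))
# [['file.txt', 'FILE.txt']]
```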

    @@ -7000,120 +7441,158 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total

    The --backend-encoding flags allow you to change that. You can disable the encoding completely with --backend-encoding None or set encoding = None in the config file.

    Encoding takes a comma separated list of encodings. You can see the list of all possible values by passing an invalid value to this flag, e.g. --local-encoding "help". The command rclone help flags encoding will show you the defaults for the backends.

    This hunk updates the encoding table (adding the Semicolon row, among other tweaks). The updated table of encodings and the characters they apply to:

    Encoding       Characters
    Asterisk       *
    BackQuote      `
    BackSlash      \
    Colon          :
    CrLf           CR 0x0D, LF 0x0A
    Ctl            All control characters 0x00-0x1F (encoded as ␀␁␂␃␄␅␆␇␈␉␊␋␌␍␎␏␐␑␒␓␔␕␖␗␘␙␚␛␜␝␞␟)
    Del            DEL 0x7F
    Dollar         $
    Dot            . or .. as entire string
    DoubleQuote    "
    Hash           #
    InvalidUtf8    An invalid UTF-8 character (e.g. latin1)
    LeftCrLfHtVt   CR 0x0D, LF 0x0A, HT 0x09, VT 0x0B on the left of a string
    LeftPeriod     . on the left of a string
    LeftSpace      SPACE on the left of a string
    LeftTilde      ~ on the left of a string
    LtGt           <, >
    None           No characters are encoded
    Percent        %
    Pipe           |
    Question       ?
    RightCrLfHtVt  CR 0x0D, LF 0x0A, HT 0x09, VT 0x0B on the right of a string
    RightPeriod    . on the right of a string
    RightSpace     SPACE on the right of a string
    Semicolon      ;
    SingleQuote    '
    Slash          /
    SquareBracket  [, ]
    @@ -7139,6 +7618,32 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total

    Some cloud storage systems support reading (R) the MIME type of objects and some support writing (W) the MIME type of objects.

    The MIME type can be important if you are serving files directly to HTTP from the storage system.

    If you are copying from a remote which supports reading (R) to a remote which supports writing (W) then rclone will preserve the MIME types. Otherwise they will be guessed from the extension, or the remote itself may assign the MIME type.

    +Metadata
    +
    +Backends may or may not support reading or writing metadata. They may support reading and writing system metadata (metadata intrinsic to that backend) and/or user metadata (general purpose metadata).
    +
    +The levels of metadata support are:

    +Key   Explanation
    +R     Read only System Metadata
    +RW    Read and write System Metadata
    +RWU   Read and write System Metadata and read and write User Metadata
    +
    +See the metadata docs for more info.
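For illustration, the repeated --metadata-set key=value flags (listed among the global flags later in this diff) can be generated from a dict; the metadata key used here is illustrative:

```python
def metadata_set_args(meta):
    """Turn a dict into repeated --metadata-set key=value arguments
    for an rclone command line (a sketch, not an rclone API)."""
    args = []
    for key, value in meta.items():
        args += ["--metadata-set", f"{key}={value}"]
    return args

print(metadata_set_args({"mtime": "2006-01-02T15:04:05Z"}))
# ['--metadata-set', 'mtime=2006-01-02T15:04:05Z']
```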

    Optional Features

    All rclone remotes support a base command set. Other features depend upon backend-specific capabilities.

    @@ -7172,6 +7677,19 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total + + + + + + + + + + + + + @@ -7184,8 +7702,8 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - + + @@ -7197,7 +7715,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - + @@ -7210,7 +7728,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - + @@ -7223,7 +7741,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - + @@ -7236,7 +7754,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - + @@ -7249,7 +7767,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - + @@ -7262,7 +7780,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - + @@ -7275,7 +7793,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - + @@ -7288,7 +7806,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - + @@ -7301,7 +7819,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - + @@ -7314,7 +7832,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - + @@ -7327,6 +7845,19 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total + + + + + + + + + + + + + @@ -7354,6 +7885,19 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total + + + + + + + + + + + + + @@ -7366,6 +7910,19 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total + + + + + + + + + + + + + @@ -7536,6 +8093,19 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total + + + + + + + + + + + + + @@ -7548,7 +8118,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - + @@ -7561,7 +8131,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - + @@ -7574,7 +8144,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - + @@ -7587,7 +8157,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - + @@ -7600,7 +8170,7 @@ Showing nodes accounting for 1537.03kB, 100% of 
1537.03kB total - + @@ -7613,7 +8183,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - + @@ -7686,6 +8256,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total --delete-during When synchronizing, delete files during transfer --delete-excluded Delete files on dest excluded from sync --disable string Disable a comma separated list of features (use --disable help to see a list) + --disable-http-keep-alives Disable HTTP keep-alives and use each connection once. --disable-http2 Disable HTTP/2 in the global transport -n, --dry-run Do a trial run with no permanent changes --dscp string Set DSCP value to connections, value or name, e.g. CS1, LE, DF, AF21 @@ -7695,7 +8266,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total --error-on-no-transfer Sets exit code 9 if no files are transferred, useful in scripts --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file (use - to read from stdin) - --exclude-if-present string Exclude directories if filename is present + --exclude-if-present stringArray Exclude directories if filename is present --expect-continue-timeout duration Timeout when using expect / 100-continue in HTTP (default 1s) --fast-list Use recursive list if available; uses more memory but fewer transactions --files-from stringArray Read list of source-file names from file (use - to read from stdin) @@ -7734,6 +8305,8 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total --max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000) --max-transfer SizeSuffix Maximum size of data to transfer (default off) --memprofile string Write memory profile to file + -M, --metadata If set, preserve metadata when copying objects + --metadata-set stringArray Add metadata key=value when uploading --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --min-size 
SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) --modify-window duration Max time diff to be considered the same (default 1ns) @@ -7805,7 +8378,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total --use-json-log Use json log format --use-mmap Use mmap allocator (see docs) --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string (default "rclone/v1.58.0") + --user-agent string Set the user-agent to a specified string (default "rclone/v1.59.0") -v, --verbose count Print lots more stuff (repeat for more)

    Backend Flags

    These flags are available for every command. They control the backends and may be set in the config file.

    @@ -7854,6 +8427,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total --b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) + --b2-version-at Time Show file versions as they were at the specified time (default off) --b2-versions Include old versions in directory listings --box-access-token string Box App Primary Access Token --box-auth-url string Auth server URL @@ -7893,6 +8467,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total --chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks --chunker-hash-type string Choose how chunker handles hash sums (default "md5") --chunker-remote string Remote to chunk/unchunk + --combine-upstreams SpaceSepList Upstreams for combining --compress-level int GZIP compression level (-2 to 9) (default -1) --compress-mode string Compression mode (default "gzip") --compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi) @@ -7925,6 +8500,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total --drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000) --drive-pacer-burst int Number of API calls to allow without sleeping (default 100) --drive-pacer-min-sleep Duration Minimum time to sleep between API calls (default 100ms) + --drive-resource-key string Resource key for accessing a link-shared file --drive-root-folder-id string ID of the root folder --drive-scope string Scope that rclone should use when requesting access from drive --drive-server-side-across-configs Allow server-side operations (e.g. 
copy) to work across different drive configs @@ -7980,6 +8556,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total --ftp-disable-epsv Disable using EPSV even if server advertises support --ftp-disable-mlsd Disable using MLSD even if server advertises support --ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS) + --ftp-disable-utf8 Disable using UTF-8 even if server advertises support --ftp-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot) --ftp-explicit-tls Use Explicit FTPS (FTP over TLS) --ftp-host string FTP host to connect to @@ -7998,8 +8575,10 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total --gcs-bucket-policy-only Access checks should use bucket-level IAM policies --gcs-client-id string OAuth Client Id --gcs-client-secret string OAuth Client Secret + --gcs-decompress If set this will decompress gzip encoded objects --gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot) --gcs-location string Location for the newly created buckets + --gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it --gcs-object-acl string Access Control List for new objects --gcs-project-number string Project number --gcs-service-account-file string Service Account Credentials JSON file path @@ -8025,10 +8604,24 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total --hdfs-namenode string Hadoop name node and port --hdfs-service-principal-name string Kerberos service principal name for the namenode --hdfs-username string Hadoop user name + --hidrive-auth-url string Auth server URL + --hidrive-chunk-size SizeSuffix Chunksize for chunked uploads (default 48Mi) + --hidrive-client-id string OAuth Client Id + --hidrive-client-secret string OAuth Client Secret + --hidrive-disable-fetching-member-count Do not fetch number of objects in directories unless it is absolutely necessary + --hidrive-encoding MultiEncoder 
The encoding for the backend (default Slash,Dot) + --hidrive-endpoint string Endpoint for the service (default "https://api.hidrive.strato.com/2.1") + --hidrive-root-prefix string The root/parent folder for all paths (default "/") + --hidrive-scope-access string Access permissions that rclone should use when requesting access from HiDrive (default "rw") + --hidrive-scope-role string User-level that rclone should use when requesting access from HiDrive (default "user") + --hidrive-token string OAuth Access Token as a JSON blob + --hidrive-token-url string Token server url + --hidrive-upload-concurrency int Concurrency for chunked uploads (default 4) + --hidrive-upload-cutoff SizeSuffix Cutoff/Threshold for chunked uploads (default 96Mi) --http-headers CommaSepList Set HTTP headers for all transactions --http-no-head Don't use HEAD requests --http-no-slash Set this if the site doesn't end directories with / - --http-url string URL of http host to connect to + --http-url string URL of HTTP host to connect to --hubic-auth-url string Auth server URL --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi) --hubic-client-id string OAuth Client Id @@ -8037,6 +8630,13 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total --hubic-no-chunk Don't chunk files during streaming upload --hubic-token string OAuth Access Token as a JSON blob --hubic-token-url string Token server url + --internetarchive-access-key-id string IAS3 Access Key + --internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true) + --internetarchive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot) + --internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org") + --internetarchive-front-endpoint string Host of InternetArchive Frontend (default "https://archive.org") + --internetarchive-secret-access-key string IAS3 
Secret Key (password) + --internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s) --jottacloud-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi) @@ -8058,7 +8658,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total --local-no-preallocate Disable preallocation of disk space for transferred files --local-no-set-modtime Disable setting modtime --local-no-sparse Disable sparse files for multi-thread downloads - --local-nounc string Disable UNC (long path names) conversion on Windows + --local-nounc Disable UNC (long path names) conversion on Windows --local-unicode-normalization Apply unicode NFC normalization to paths and filenames --local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated) --mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true) @@ -8079,11 +8679,11 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total --netstorage-protocol string Select between HTTP or HTTPS protocol (default "https") --netstorage-secret string Set the NetStorage account secret/G2O key for authentication (obscured) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only) + --onedrive-access-scopes SpaceSepList Set scopes to be requested by rclone (default Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access) --onedrive-auth-url string Auth server URL --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi) --onedrive-client-id string OAuth Client Id 
--onedrive-client-secret string OAuth Client Secret - --onedrive-disable-site-permission Disable the request for Sites.Read.All permission --onedrive-drive-id string The ID of the drive to use --onedrive-drive-type string The type of the drive (personal | business | documentLibrary) --onedrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot) @@ -8107,9 +8707,11 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total --pcloud-client-secret string OAuth Client Secret --pcloud-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --pcloud-hostname string Hostname to connect to (default "api.pcloud.com") + --pcloud-password string Your pcloud password (obscured) --pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point (default "d0") --pcloud-token string OAuth Access Token as a JSON blob --pcloud-token-url string Token server url + --pcloud-username string Your pcloud username --premiumizeme-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot) --putio-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --qingstor-access-key-id string QingStor Access Key ID @@ -8162,6 +8764,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint --s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset) + --s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads --s3-v2-auth If true use v2 authentication --seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled) 
--seafile-create-library Should rclone create a library if it doesn't exist @@ -8172,6 +8775,8 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total --seafile-url string URL of seafile host to connect to --seafile-user string User name (usually email address) --sftp-ask-password Allow asking for SFTP password when needed + --sftp-chunk-size SizeSuffix Upload and download chunk size (default 32Ki) + --sftp-concurrency int The maximum number of outstanding requests for one file (default 64) --sftp-disable-concurrent-reads If set don't use concurrent reads --sftp-disable-concurrent-writes If set don't use concurrent writes --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available @@ -8184,12 +8789,14 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total --sftp-known-hosts-file string Optional path to known_hosts file --sftp-md5sum-command string The command used to read md5 hashes --sftp-pass string SSH password, leave blank to use ssh-agent (obscured) - --sftp-path-override string Override path used by SSH connection + --sftp-path-override string Override path used by SSH shell commands --sftp-port int SSH port number (default 22) --sftp-pubkey-file string Optional path to public key file --sftp-server-command string Specifies the path or command to run a sftp server on the remote host + --sftp-set-env SpaceSepList Environment variables to pass to sftp and commands --sftp-set-modtime Set the modified time on the remote if set (default true) --sftp-sha1sum-command string The command used to read sha1 hashes + --sftp-shell-type string The type of SSH shell on remote server, if any --sftp-skip-links Set to skip any symlinks and any other non regular files --sftp-subsystem string Specifies the SSH2 subsystem on the remote host (default "sftp") --sftp-use-fstat If set use fstat instead of stat @@ -8246,6 +8853,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total 
--union-action-policy string Policy to choose upstream on ACTION category (default "epall") --union-cache-time int Cache time of usage and free space (in seconds) (default 120) --union-create-policy string Policy to choose upstream on CREATE category (default "epmfs") + --union-min-free-space SizeSuffix Minimum viable free space for lfs/eplfs policies (default 1Gi) --union-search-policy string Policy to choose upstream on SEARCH category (default "ff") --union-upstreams string List of space separated upstreams --uptobox-access-token string Your access token @@ -8257,7 +8865,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total --webdav-pass string Password (obscured) --webdav-url string URL of http host to connect to --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using + --webdav-vendor string Name of the WebDAV site/service/software you are using --yandex-auth-url string Auth server URL --yandex-client-id string OAuth Client Id --yandex-client-secret string OAuth Client Secret @@ -8539,7 +9147,7 @@ Optional Flags: -v, --verbose Increases logging verbosity. May be specified more than once for more details. -h, --help help for bisync -

    Arbitrary rclone flags may be specified on the bisync command line, for example rclone bisync ./testdir/path1/ gdrive:testdir/path2/ --drive-skip-gdocs -v -v --timeout 10s. Note that interactions of various rclone flags with the bisync process flow have not been fully tested yet.

    Paths

    Path1 and Path2 arguments may be references to any mix of local directory paths (absolute or relative), UNC paths (//server/share/path), Windows drive paths (with a drive letter and :) or configured remotes with optional subdirectory paths. Cloud references are distinguished by having a : in the argument (see Windows support below).

    Path1 and Path2 are treated equally, in that neither has priority for file changes, and access efficiency does not change whether a remote is on Path1 or Path2.
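    The classification rule above (a : in the argument marks a cloud reference, with exceptions for Windows drive and UNC paths) can be sketched roughly as follows; is_cloud_reference is a hypothetical helper for illustration, not rclone's actual parsing logic:

```python
import re

def is_cloud_reference(path: str) -> bool:
    """Rough sketch: classify a bisync path argument as remote or local.

    A ':' normally marks a configured remote, but UNC paths like
    //server/share and Windows drive paths like C:\\data are local.
    Illustrative only; rclone's real rules are more involved.
    """
    if path.startswith("//"):                    # UNC path -> local
        return False
    if re.match(r"^[A-Za-z]:([/\\]|$)", path):   # Windows drive path -> local
        return False
    return ":" in path
```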

    @@ -8715,7 +9323,7 @@ Optional Flags:

    rclone bisync returns the following codes to the calling program:
  • 0 on a successful run,
  • 1 for a non-critical failing run (a rerun may be successful),
  • 2 for a critically aborted run (requires a --resync to recover).
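    A calling program can branch on these codes; the mapping below is a minimal sketch of a wrapper policy, not part of bisync itself:

```python
def next_action(exit_code: int) -> str:
    """Map rclone bisync exit codes to a follow-up action.

    0 -> success, nothing to do
    1 -> non-critical failure, a plain rerun may succeed
    2 -> critical abort, recovery requires --resync
    """
    return {0: "done", 1: "rerun", 2: "rerun with --resync"}.get(
        exit_code, "inspect logs")
```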

    Limitations

    Supported backends

    Bisync is considered BETA and has been tested with the following backends:
  • Local filesystem
  • Google Drive
  • Dropbox
  • OneDrive
  • S3
  • SFTP
  • Yandex Disk

    It has not been fully tested with other services yet. If it works, or sorta works, please let us know and we'll update the list. Run the test suite to check for proper operation as described below.

    The first release of rclone bisync requires that the underlying backend supports the modification time feature, and will refuse to run otherwise. This limitation will be lifted in a future rclone bisync release.

    Concurrent modifications

    @@ -8883,7 +9491,7 @@ rclone copy PATH2 PATH2 --filter "+ */" --filter "- **" --cr

    Denied downloads of "infected" or "abusive" files

    Google Drive has a filter for certain file types (.exe, .apk, et cetera) that by default cannot be copied from Google Drive to the local filesystem. If you are having problems, run with --verbose to see specifically which files are generating complaints. If the error is This file has been identified as malware or spam and cannot be downloaded, consider using the flag --drive-acknowledge-abuse.

    Google Doc files

    Google docs exist as virtual files on Google Drive and cannot be transferred to other filesystems natively. While it is possible to export a Google doc to a normal file (with .xlsx extension, for example), it is not possible to import a normal file back into a Google document.

    Bisync's handling of Google Doc files is to flag them in the run log output for the user's attention and to ignore them for any file transfers, deletes, or syncs. They will show up with a length of -1 in the listings. This bisync run is otherwise successful:

    2021/05/11 08:23:15 INFO  : Synching Path1 "/path/to/local/tree/base/" with Path2 "GDrive:"
     2021/05/11 08:23:15 INFO  : ...path2.lst-new: Ignoring incorrect line: "- -1 - - 2018-07-29T08:49:30.136000000+0000 GoogleDoc.docx"
    @@ -9223,7 +9831,7 @@ y/e/d> y
    Yes
    Akamai Netstorage            Yes No No No No Yes Yes No No Yes
    Amazon Drive                 Yes No No Yes
    Amazon S3
    Amazon S3 (or S3 compatible) No Yes No No No
    Backblaze B2                 No Yes No No
    Box                          Yes Yes Yes Yes
    Citrix ShareFile             Yes Yes No Yes
    Dropbox                      Yes Yes Yes Yes
    Enterprise File Fabric       Yes Yes No Yes
    FTP                          No No No Yes
    Google Cloud Storage         Yes Yes No No
    Google Drive                 Yes Yes Yes Yes
    Google Photos                No No No No
    HDFS                         Yes No Yes Yes
    HiDrive                      Yes Yes Yes Yes No No Yes No No Yes
    HTTP                         No No
    Internet Archive             No Yes No No Yes Yes No Yes Yes No
    Jottacloud                   Yes Yes Yes Yes
    Koofr                        Yes Yes Yes Yes No No Yes Yes Yes Yes
    Mail.ru Cloud                Yes Yes
    Sia                          No No No No No No Yes No No Yes
    SugarSync                    Yes Yes No Yes
    Storj                        Yes † No No No
    Uptobox                      No Yes No No
    WebDAV                       Yes Yes Yes Yes
    Yandex Disk                  Yes Yes Yes Yes
    Zoho WorkDrive               Yes Yes Yes Yes
    The local filesystem         Yes No

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    Standard options

    Here are the Standard options specific to fichier (1Fichier).

    --fichier-api-key

    Your API Key, get it from https://1fichier.com/console/params.pl.

    Properties:

    @@ -9234,7 +9842,7 @@ y/e/d> y
  • Required: false
    Advanced options

    Here are the Advanced options specific to fichier (1Fichier).

    --fichier-shared-folder

    If you want to download a shared folder, add this parameter.

    Properties:

    @@ -9276,7 +9884,7 @@ y/e/d> y

    Limitations

    rclone about is not supported by the 1Fichier backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    See List of backends that do not support rclone about and rclone about

    Alias

    The alias remote provides a new name for another remote.

    Paths may be as deep as required or a local path, e.g. remote:directory/subdirectory or /directory/subdirectory.

    @@ -9334,7 +9942,7 @@ e/n/d/r/c/s/q> q

    Copy another local directory to the alias directory called source

    rclone copy /home/source remote:source

    Standard options

    Here are the Standard options specific to alias (Alias for an existing remote).

    --alias-remote

    Remote or path to alias.

    Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".

    @@ -9446,7 +10054,7 @@ y/e/d> y

    Using with non .com Amazon accounts

    Let's say you usually use amazon.co.uk. When you authenticate with rclone it will take you to an amazon.com page to log in. Your amazon.co.uk email and password should work here just fine.

    Standard options

    Here are the Standard options specific to amazon cloud drive (Amazon Drive).

    --acd-client-id

    OAuth Client Id.

    Leave blank normally.

    @@ -9468,7 +10076,7 @@ y/e/d> y
  • Required: false
    Advanced options

    Here are the Advanced options specific to amazon cloud drive (Amazon Drive).

    --acd-token

    OAuth Access Token as a JSON blob.

    Properties:

    @@ -9549,16 +10157,21 @@ y/e/d> y

    At the time of writing (Jan 2016) the limit is in the area of 50 GiB per file. This means that larger files are likely to fail.

    Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as it would for any other failure. To avoid this problem, use the --max-size 50000M option to limit the maximum size of uploaded files. Note that --max-size does not split files into segments, it only ignores files over this size.
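    Since --max-size only filters and never splits, its effect can be sketched like this (filter_by_max_size is a hypothetical illustration of the semantics, not rclone code):

```python
def filter_by_max_size(files: dict, max_size: int) -> dict:
    """Keep only files at or under max_size bytes.

    Mirrors --max-size semantics: oversized files are skipped entirely,
    never split into smaller segments.
    """
    return {name: size for name, size in files.items() if size <= max_size}
```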

    rclone about is not supported by the Amazon Drive backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    See List of backends that do not support rclone about and rclone about

    Amazon S3 Storage Providers

    The S3 backend can be used with a number of different providers:

    --s3-endpoint

    Endpoint for OSS API.

    Properties:

    --s3-endpoint

    Endpoint for OBS API.

    Properties:

    --s3-endpoint

    Endpoint for Scaleway Object Storage.

    Properties:

    --s3-endpoint

    Endpoint for StackPath Object Storage.

    Properties:

    --s3-endpoint

    Endpoint of the Shared Gateway.

    Properties:

    --s3-endpoint

    Endpoint for Tencent COS API.

    Properties:

    --s3-endpoint

    Endpoint for RackCorp Object Storage.

    Properties:

    --s3-endpoint

    Endpoint for S3 API.

    Required when using an S3 clone.

    Properties:

    --s3-location-constraint

    @@ -11190,6 +12154,162 @@ y/e/d>

    --s3-location-constraint

    Location constraint - must match endpoint.

    Used when creating buckets only.

    Properties:

    --s3-location-constraint

    Location constraint - must match endpoint.

    Used when creating buckets only.

    Properties:

    --s3-location-constraint

    Location constraint - must match endpoint when using IBM Cloud Public.

    For on-prem COS, do not make a selection from this list; just hit enter.

    Properties:

    @@ -11331,7 +12451,7 @@ y/e/d>

    --s3-location-constraint

    Location constraint - the location where your bucket will be located and your data stored.

    Properties:

    --s3-location-constraint

    Location constraint - must be set to match the Region.

    Leave blank if not sure. Used when creating buckets only.

    Properties:

    @@ -11440,7 +12560,7 @@ y/e/d>

    --s3-storage-class

    The storage class to use when storing new objects in ChinaMobile.

    Properties:

    --s3-storage-class

    The storage class to use when storing new objects in ArvanCloud.

    Properties:

    --s3-storage-class

    The storage class to use when storing new objects in Tencent COS.

    Properties:

    --s3-storage-class

    The storage class to use when storing new objects in S3.

    Properties:

    Advanced options

    Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi).

    --s3-bucket-acl

    Canned ACL used when creating buckets.

    For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

    @@ -11742,7 +12908,7 @@ y/e/d>

    Limitations

    rclone about is not supported by the B2 backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    See List of backends that do not support rclone about and rclone about

    Box

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    @@ -13519,7 +15430,7 @@ y/e/d> y

    In order to do this you will have to find the Folder ID of the directory you wish rclone to display. This will be the last segment of the URL when you open the relevant folder in the Box web interface.

    So if the folder you want rclone to use has a URL which looks like https://app.box.com/folder/11xxxxxxxxx8 in the browser, then you use 11xxxxxxxxx8 as the root_folder_id in the config.

    Standard options

    Here are the Standard options specific to box (Box).

    --box-client-id

    OAuth Client Id.

    Leave blank normally.

    @@ -13581,7 +15492,7 @@ y/e/d> y

    Advanced options

    Here are the Advanced options specific to box (Box).

    --box-token

    OAuth Access Token as a JSON blob.

    Properties:

    @@ -13671,7 +15582,7 @@ y/e/d> y

    Box file names can't contain the \ character. rclone maps this to and from an identical-looking unicode equivalent (U+FF3C Fullwidth Reverse Solidus).

    Box only supports filenames up to 255 characters in length.

    rclone about is not supported by the Box backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    See List of backends that do not support rclone about and rclone about

    Cache (DEPRECATED)

    The cache remote wraps another existing remote and stores file structure and its data for long running tasks like rclone mount.

    Status

    @@ -13833,7 +15744,7 @@ chunk_total_size = 10G

    Purge a remote from the cache backend. Supports either a directory or a file. It supports both encrypted and unencrypted file names if cache is wrapped by crypt.

    Params: - remote = path to remote (required) - withData = true/false to delete cached data (chunks) as well (optional, false by default)

    Standard options

    Here are the Standard options specific to cache (Cache a remote).

    --cache-remote

    Remote to cache.

    Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).

    @@ -13947,7 +15858,7 @@ chunk_total_size = 10G

    Advanced options

    Here are the Advanced options specific to cache (Cache a remote).

    --cache-plex-token

    The plex token for authentication - auto set normally.

    Properties:

    @@ -14101,7 +16012,7 @@ chunk_total_size = 10G

    Run them with

    rclone backend COMMAND remote:

    The help below will explain what arguments each command takes.

    See the backend command for more info on how to pass options and arguments.

    These can be run on a running backend using the rc command backend/command.

    stats

    Print stats on the cache backend in JSON format.

    @@ -14180,7 +16091,7 @@ y/e/d> y

    For example, if name format is big_*-##.part and original file name is data.txt and numbering starts from 0, then the first chunk will be named big_data.txt-00.part, the 99th chunk will be big_data.txt-98.part and the 302nd chunk will become big_data.txt-301.part.

    Note that list assembles composite directory entries only when chunk names match the configured format and treats non-conforming file names as normal non-chunked files.

    When using norename transactions, chunk names will additionally have a unique file version suffix. For example, BIG_FILE_NAME.rclone_chunk.001_bp562k.

    Metadata

    Besides data chunks, chunker will by default create a metadata object for a composite file. The object is named after the original file. Chunker allows the user to disable metadata completely (the none format). Note that metadata is normally not created for files smaller than the configured chunk size. This may change in future rclone releases.

    Simple JSON metadata format

    This is the default format. It supports hash sums and chunk validation for composite files. Meta objects carry the following fields:

    @@ -14221,7 +16132,7 @@ y/e/d> y

    Chunker included in rclone releases up to v1.54 can sometimes fail to detect metadata produced by recent versions of rclone. We recommend that users keep rclone up to date to avoid data corruption.

    Changing transactions is dangerous and requires explicit migration.

    Standard options

    Here are the Standard options specific to chunker (Transparently chunk/split large files).

    --chunker-remote

    Remote to chunk/unchunk.

    Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).

    @@ -14285,7 +16196,7 @@ y/e/d> y

    Advanced options

    Here are the Advanced options specific to chunker (Transparently chunk/split large files).

    --chunker-name-format

    String format of chunk file names.

    The two placeholders are: base file name (*) and chunk number (#...). There must be one and only one asterisk and one or more consecutive hash characters. If the chunk number has fewer digits than the number of hashes, it is left-padded with zeros. If there are more digits in the number, they are left as is. Possible chunk files are ignored if their name does not match the given format.
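    The placeholder expansion above can be sketched as a small helper; chunk_name is a hypothetical illustration of the naming rule, not rclone's implementation:

```python
import re

def chunk_name(name_format: str, base: str, number: int) -> str:
    """Expand a chunker name format such as "big_*-##.part".

    '*' is replaced by the base file name; the run of '#' characters is
    replaced by the chunk number, left-padded with zeros to the number of
    hashes (longer numbers are left as-is). Illustrative sketch only.
    """
    prefix, rest = name_format.split("*", 1)   # exactly one asterisk
    m = re.search(r"#+", rest)                 # one run of hashes
    digits = str(number).zfill(len(m.group(0)))
    return prefix + base + rest[:m.start()] + digits + rest[m.end():]
```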

    @@ -14536,7 +16447,7 @@ y/e/d> y

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    Standard options

    Here are the Standard options specific to sharefile (Citrix Sharefile).

    --sharefile-root-folder-id

    ID of the root folder.

    Leave blank to access "Personal Folders". You can use one of the standard values here or any folder ID (long hex number ID).

    @@ -14571,7 +16482,7 @@ y/e/d> y

    Advanced options

    Here are the Advanced options specific to sharefile (Citrix Sharefile).

    --sharefile-upload-cutoff

    Cutoff for switching to multipart upload.

    Properties:

    @@ -14617,7 +16528,7 @@ y/e/d> y

    Note that ShareFile is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

    ShareFile only supports filenames up to 256 characters in length.

    rclone about is not supported by the Citrix ShareFile backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    See List of backends that do not support rclone about and rclone about

    Crypt

    Rclone crypt remotes encrypt and decrypt other remotes.

    A remote of type crypt does not access a storage system directly, but instead wraps another remote, which in turn accesses the storage system. This is similar to how alias, union, chunker and a few others work. It makes the usage very flexible, as you can add a layer, in this case an encryption layer, on top of any other backend, even in multiple layers. Rclone's functionality can be used as with any other remote, for example you can mount a crypt remote.

    @@ -14816,7 +16727,7 @@ $ rclone -q ls secret:

    Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator.

    Use the rclone cryptcheck command to check the integrity of a crypted remote instead of rclone check which can't check the checksums properly.

    Standard options

    Here are the Standard options specific to crypt (Encrypt/Decrypt a remote).

    --crypt-remote

    Remote to encrypt/decrypt.

    Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).

    @@ -14896,7 +16807,7 @@ $ rclone -q ls secret:
  • Required: false
    Advanced options

    Here are the Advanced options specific to crypt (Encrypt/Decrypt a remote).

    --crypt-server-side-across-configs

    Allow server-side operations (e.g. copy) to work across different crypt configs.

    Normally this option is not what you want, but if you have two crypts pointing to the same backend you can use it.

    @@ -14965,12 +16876,15 @@ $ rclone -q ls secret:

    Metadata

    Any metadata supported by the underlying remote is read and written.

    See the metadata docs for more info.

    Backend commands

    Here are the commands specific to the crypt backend.

    Run them with

    rclone backend COMMAND remote:

    The help below will explain what arguments each command takes.

    See the backend command for more info on how to pass options and arguments.

    These can be run on a running backend using the rc command backend/command.

    encode

    Encode the given filename(s)

    @@ -15050,7 +16964,7 @@ rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile

    Key derivation

    Rclone uses scrypt with parameters N=16384, r=8, p=1 with an optional user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn't supply a salt then rclone uses an internal one.

    scrypt makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt.
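    The derivation described above can be reproduced with Python's hashlib.scrypt using the same parameters. The salt argument stands in for the optional password2; the 32/32/16 split only follows the byte counts stated above, and how rclone assigns the three parts is an assumption here:

```python
import hashlib

def derive_key_material(password: bytes, salt: bytes):
    """scrypt with N=16384, r=8, p=1 yielding 32+32+16 = 80 bytes.

    Sketch of the parameters described above; rclone substitutes an
    internal salt when the user does not supply one.
    """
    km = hashlib.scrypt(password, salt=salt, n=16384, r=8, p=1,
                        dklen=80, maxmem=64 * 1024 * 1024)
    # Split into the three parts totalling 80 bytes (assignment assumed)
    return km[:32], km[32:64], km[64:]
```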

    SEE ALSO

    @@ -15114,7 +17028,7 @@ y/e/d> y

    File names

    The compressed files will be named *.###########.gz where * is the base file name and the # part is the base64-encoded size of the uncompressed file. The file names should not be changed by anything other than the rclone compression backend.

    Standard options

    Here are the Standard options specific to compress (Compress a remote).

    --compress-remote

    Remote to compress.

    Properties:

    @@ -15141,7 +17055,7 @@ y/e/d> y

    Advanced options

    Here are the Advanced options specific to compress (Compress a remote).

    --compress-level

    GZIP compression level (-2 to 9).

    Generally -1 (default, equivalent to 5) is recommended. Levels 1 to 9 increase compression at the cost of speed. Going past 6 generally offers very little return.

    @@ -15163,10 +17077,111 @@ y/e/d> y
  • Type: SizeSuffix
  • Default: 20Mi
    Metadata

    Any metadata supported by the underlying remote is read and written.

    See the metadata docs for more info.


    Combine

    The combine backend joins remotes together into a single directory tree.

    For example you might have a remote for images on one provider:

    $ rclone tree s3:imagesbucket
    /
    ├── image1.jpg
    └── image2.jpg

    And a remote for files on another:

    $ rclone tree drive:important/files
    /
    ├── file1.txt
    └── file2.txt

    The combine backend can join these together into a synthetic directory structure like this:

    $ rclone tree combined:
    /
    ├── files
    │   ├── file1.txt
    │   └── file2.txt
    └── images
        ├── image1.jpg
        └── image2.jpg

    You'd do this by specifying an upstreams parameter in the config like this:

    upstreams = images=s3:imagesbucket files=drive:important/files

    During the initial setup with rclone config you will specify the upstream remotes as a space separated list. The upstream remotes can either be local paths or other remotes.

    Configuration

    Here is an example of how to make a combine called remote for the example above. First run:

     rclone config

    This will guide you through an interactive setup process:

    No remotes found, make a new one?
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
    name> remote
    Option Storage.
    Type of storage to configure.
    Choose a number from below, or type in your own value.
    ...
    XX / Combine several remotes into one
       \ (combine)
    ...
    Storage> combine
    Option upstreams.
    Upstreams for combining
    These should be in the form
        dir=remote:path dir2=remote2:path
    Where before the = is specified the root directory and after is the remote to
    put there.
    Embedded spaces can be added using quotes
        "dir=remote:path with space" "dir2=remote2:path with space"
    Enter a fs.SpaceSepList value.
    upstreams> images=s3:imagesbucket files=drive:important/files
    --------------------
    [remote]
    type = combine
    upstreams = images=s3:imagesbucket files=drive:important/files
    --------------------
    y) Yes this is OK (default)
    e) Edit this remote
    d) Delete this remote
    y/e/d> y
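    The upstreams value shown above is a space separated list of dir=remote pairs with optional quoting. A rough sketch of how such a value decomposes (parse_upstreams is a hypothetical helper; shlex only approximates rclone's fs.SpaceSepList quoting):

```python
import shlex

def parse_upstreams(value: str) -> dict:
    """Split an upstreams value like
    'images=s3:imagesbucket files=drive:important/files'
    into {directory: remote}. Quotes allow embedded spaces.
    """
    result = {}
    for entry in shlex.split(value):
        directory, remote = entry.split("=", 1)  # split on the first '='
        result[directory] = remote
    return result
```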

    Configuring for Google Drive Shared Drives

    Rclone has a convenience feature for making a combine backend for all the shared drives you have access to.

    Assuming your main (non shared drive) Google drive remote is called drive: you would run

    rclone backend -o config drives drive:

    This would produce something like this:

    [My Drive]
    type = alias
    remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:

    [Test Drive]
    type = alias
    remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:

    [AllDrives]
    type = combine
    upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"

    If you then add that config to your config file (find it with rclone config file) then you can access all the shared drives in one place with the AllDrives: remote.

    See the Google Drive docs for full info.

    Standard options

    Here are the Standard options specific to combine (Combine several remotes into one).

    +

    --combine-upstreams

    +

    Upstreams for combining

    +

    These should be in the form

    +
    dir=remote:path dir2=remote2:path
    +

    The part before the = specifies the root directory and the part after specifies the remote to put there.

    +

    Embedded spaces can be added using quotes

    +
    "dir=remote:path with space" "dir2=remote2:path with space"
    +

    Properties:

    + +
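As a sketch, the option can also be supplied on the fly using rclone's connection-string syntax, without saving a remote. The remote names below are the placeholders from the example above, and the quoting of the upstreams value inside the connection string is an assumption — see the connection string docs for the exact rules.

```shell
# Use a combine backend without a saved remote, quoting the
# upstreams value inside the connection string. Remote names are
# placeholders taken from the example above.
rclone lsd ':combine,upstreams="images=s3:imagesbucket files=drive:important/files":'
```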

    Metadata

    +

    Any metadata supported by the underlying remote is read and written.

    +

    See the metadata docs for more info.

    Dropbox

    Paths are specified as remote:path

    Dropbox paths may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Configuration

    +

    Configuration

    The initial setup for dropbox involves getting a token from Dropbox which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -15287,8 +17302,8 @@ y/e/d> y

    This provides the maximum possible upload speed especially with lots of small files, however rclone can't check the file got uploaded properly using this mode.

    If you are using this mode then using "rclone check" after the transfer completes is recommended. Or you could do an initial transfer with --dropbox-batch-mode async then do a final transfer with --dropbox-batch-mode sync (the default).

    Note that there may be a pause when quitting rclone while rclone finishes up the last batch using this mode.
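The two-phase approach described above can be sketched like this; remote:backup and the source path are placeholders.

```shell
# Fast initial transfer without waiting for batch commits, then a
# final pass with the default (sync) batch mode, and a check.
rclone copy --dropbox-batch-mode async /path/to/files remote:backup
rclone copy --dropbox-batch-mode sync /path/to/files remote:backup
rclone check /path/to/files remote:backup
```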

    -

    Standard options

    -

    Here are the standard options specific to dropbox (Dropbox).

    +

    Standard options

    +

    Here are the Standard options specific to dropbox (Dropbox).

    --dropbox-client-id

    OAuth Client Id.

    Leave blank normally.

    @@ -15310,7 +17325,7 @@ y/e/d> y
  • Required: false
  • Advanced options

    -

    Here are the advanced options specific to dropbox (Dropbox).

    +

    Here are the Advanced options specific to dropbox (Dropbox).

    --dropbox-token

    OAuth Access Token as a JSON blob.

    Properties:

    @@ -15476,7 +17491,7 @@ y/e/d> y

    Enterprise File Fabric

    This backend supports Storage Made Easy's Enterprise File Fabric™ which provides a software solution to integrate and unify File and Object Storage accessible through a global file system.

    -

    Configuration

    +

    Configuration

    The initial setup for the Enterprise File Fabric backend involves getting a token from the Enterprise File Fabric which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -15570,8 +17585,8 @@ y/e/d> y 120673757,My contacts/ 120673761,S3 Storage/

    The ID for "S3 Storage" would be 120673761.

    -

    Standard options

    -

    Here are the standard options specific to filefabric (Enterprise File Fabric).

    +

    Standard options

    +

    Here are the Standard options specific to filefabric (Enterprise File Fabric).

    --filefabric-url

    URL of the Enterprise File Fabric to connect to.

    Properties:

    @@ -15620,7 +17635,7 @@ y/e/d> y
  • Required: false
  • Advanced options

    -

    Here are the advanced options specific to filefabric (Enterprise File Fabric).

    +

    Here are the Advanced options specific to filefabric (Enterprise File Fabric).

    --filefabric-token

    Session Token.

    This is a session token which rclone caches in the config file. It is usually valid for 1 hour.

    @@ -15666,7 +17681,7 @@ y/e/d> y

    FTP is the File Transfer Protocol. Rclone FTP support is provided using the github.com/jlaffaye/ftp package.

    Limitations of Rclone's FTP backend

    Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory.

    -

    Configuration

    +

    Configuration

    To create an FTP configuration named remote, run

    rclone config

    Rclone config guides you through an interactive setup process. A minimal rclone FTP remote definition only requires host, username and password. For an anonymous FTP server, use anonymous as username and your email address as password.
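As a non-interactive sketch of the same minimal definition (host, username, password), assuming a recent rclone with rclone config create and rclone obscure available; the host name and email address are placeholders.

```shell
# Create an anonymous FTP remote without the interactive wizard.
# Passwords in the config must be obscured, hence `rclone obscure`.
rclone config create remote ftp host=ftp.example.com user=anonymous \
    pass="$(rclone obscure you@example.com)"
rclone lsf remote:
```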

    @@ -15682,7 +17697,7 @@ Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] -XX / FTP Connection +XX / FTP \ "ftp" [snip] Storage> ftp @@ -15776,8 +17791,8 @@ y/e/d> y

    This backend's interactive configuration wizard provides a selection of sensible encoding settings for major FTP servers: ProFTPd, PureFTPd, VsFTPd. Just hit a selection number when prompted.

    -

    Standard options

    -

    Here are the standard options specific to ftp (FTP Connection).

    +

    Standard options

    +

    Here are the Standard options specific to ftp (FTP).

    --ftp-host

    FTP host to connect to.

    E.g. "ftp.example.com".

    @@ -15837,7 +17852,7 @@ y/e/d> y
  • Default: false
  • Advanced options

    -

    Here are the advanced options specific to ftp (FTP Connection).

    +

    Here are the Advanced options specific to ftp (FTP).

    --ftp-concurrency

    Maximum number of FTP simultaneous connections, 0 for unlimited.

    Properties:

    @@ -15874,6 +17889,15 @@ y/e/d> y
  • Type: bool
  • Default: false
  • +

    --ftp-disable-utf8

    +

    Disable using UTF-8 even if server advertises support.

    +

    Properties:

    +

    --ftp-writing-mdtm

    Use MDTM to set modification time (VsFtpd quirk)

    Properties:

    @@ -15970,7 +17994,7 @@ y/e/d> y

    FTP servers acting as rclone remotes must support passive mode. The transfer mode cannot be configured, as passive is the only one supported. Rclone's FTP implementation is not compatible with active mode because the library it uses doesn't support it, and this will likely never change due to security concerns.

    Rclone's FTP backend does not support any checksums but can compare file sizes.

    rclone about is not supported by the FTP backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    -

    See List of backends that do not support rclone about See rclone about

    +

    See List of backends that do not support rclone about and rclone about

    The implementation of --dump headers, --dump bodies and --dump auth for debugging isn't the same as for rclone's HTTP-based backends - it has less fine-grained control.

    --timeout isn't supported (but --contimeout is).

    --bind isn't supported.

    @@ -15982,7 +18006,7 @@ y/e/d> y

    You can use the following command to check whether rclone can use precise time with your FTP server: rclone backend features your_ftp_remote: (the trailing colon is important). Look for the number in the line tagged by Precision designating the remote time precision expressed as nanoseconds. A value of 1000000000 means that file time precision of 1 second is available. A value of 3153600000000000000 (or another large number) means "unsupported".
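For example, to pick out just that line (your_ftp_remote is a placeholder):

```shell
# The features output is JSON; filter for the Precision field.
rclone backend features your_ftp_remote: | grep -i precision
```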

    Google Cloud Storage

    Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.

    -

    Configuration

    +

    Configuration

    The initial setup for google cloud storage involves getting a token from Google Cloud Storage which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -16177,8 +18201,8 @@ y/e/d> y

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    -

    Standard options

    -

    Here are the standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).

    +

    Standard options

    +

    Here are the Standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).

    --gcs-client-id

    OAuth Client Id.

    Leave blank normally.

    @@ -16532,7 +18556,7 @@ y/e/d> y

    Advanced options

    -

    Here are the advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).

    +

    Here are the Advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).

    --gcs-token

    OAuth Access Token as a JSON blob.

    Properties:

    @@ -16562,6 +18586,27 @@ y/e/d> y
  • Type: string
  • Required: false
  • +

    --gcs-no-check-bucket

    +

    If set, don't attempt to check the bucket exists or create it.

    +

    This can be useful when trying to minimise the number of transactions rclone does if you know the bucket exists already.

    +

    Properties:

    + +
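A usage sketch, assuming the bucket already exists; remote:bucket and the source path are placeholders.

```shell
# Skip the bucket existence check to save transactions.
rclone copy /local/path remote:bucket --gcs-no-check-bucket
```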

    --gcs-decompress

    +

    If set this will decompress gzip encoded objects.

    +

    It is possible to upload objects to GCS with "Content-Encoding: gzip" set. Normally rclone will download these files as compressed objects.

    +

    If this flag is set then rclone will decompress these files with "Content-Encoding: gzip" as they are received. This means that rclone can't check the size and hash but the file contents will be decompressed.

    +

    Properties:

    +
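A usage sketch of the option above; remote:bucket/path and the destination are placeholders.

```shell
# Download gzip-encoded objects decompressed as they are received.
# Note that rclone can't verify size and hash for these objects.
rclone copy --gcs-decompress remote:bucket/path /local/path
```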

    --gcs-encoding

    The encoding for the backend.

    See the encoding section in the overview for more info.

    @@ -16574,11 +18619,11 @@ y/e/d> y

    Limitations

    rclone about is not supported by the Google Cloud Storage backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    -

    See List of backends that do not support rclone about See rclone about

    +

    See List of backends that do not support rclone about and rclone about

    Google Drive

    Paths are specified as drive:path

    Drive paths may be as deep as required, e.g. drive:directory/subdirectory.

    -

    Configuration

    +

    Configuration

    The initial setup for drive involves getting a token from Google drive which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -16619,8 +18664,6 @@ Choose a number from below, or type in your own value 5 | does not allow any access to read or download file content. \ "drive.metadata.readonly" scope> 1 -ID of the root folder - leave blank normally. Fill in to access "Computers" folders. (see docs). -root_folder_id> Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login. service_account_file> Remote config @@ -16677,7 +18720,7 @@ y/e/d> y

    drive.metadata.readonly

    This allows read only access to file names only. It does not allow rclone to download or upload data, or rename or delete files or directories.

    Root folder ID

    -

    You can set the root_folder_id for rclone. This is the directory (identified by its Folder ID) that rclone considers to be the root of your drive.

    +

    This option has been moved to the advanced section. You can set the root_folder_id for rclone. This is the directory (identified by its Folder ID) that rclone considers to be the root of your drive.

    Normally you will leave this blank and rclone will determine the correct root to use itself.

    However you can set this to restrict rclone to a specific folder hierarchy or to access data within the "Computers" tab on the drive web interface (where files from Google's Backup and Sync desktop program go).

    In order to do this you will have to find the Folder ID of the directory you wish rclone to display. This will be the last segment of the URL when you open the relevant folder in the drive web interface.

    @@ -16919,10 +18962,20 @@ trashed=false and 'c' in parents +bmp +image/bmp +Windows Bitmap format + + csv text/csv Standard CSV format for Spreadsheets + +doc +application/msword +Classic Word file + docx application/vnd.openxmlformats-officedocument.wordprocessingml.document @@ -16946,7 +18999,7 @@ trashed=false and 'c' in parents json application/vnd.google-apps.script+json -JSON Text Format +JSON Text Format for Google Apps scripts odp @@ -16974,41 +19027,56 @@ trashed=false and 'c' in parents Adobe PDF Format +pjpeg +image/pjpeg +Progressive JPEG Image + + png image/png PNG Image Format - + pptx application/vnd.openxmlformats-officedocument.presentationml.presentation Microsoft Office Powerpoint - + rtf application/rtf Rich Text Format - + svg image/svg+xml Scalable Vector Graphics Format - + tsv text/tab-separated-values Standard TSV format for spreadsheets - + txt text/plain Plain Text + +wmf +application/x-msmetafile +Windows Meta File + +xls +application/vnd.ms-excel +Classic Excel file + + xlsx application/vnd.openxmlformats-officedocument.spreadsheetml.sheet Microsoft Office Spreadsheet - + zip application/zip A ZIP file of HTML, Images CSS @@ -17047,8 +19115,8 @@ trashed=false and 'c' in parents -

    Standard options

    -

    Here are the standard options specific to drive (Google Drive).

    +

    Standard options

    +

    Here are the Standard options specific to drive (Google Drive).

    --drive-client-id

    Google Application Client Id Setting your own is recommended. See https://rclone.org/drive/#making-your-own-client-id for how to create your own. If you leave this blank, it will use an internal key which is low performance.

    Properties:

    @@ -17104,16 +19172,6 @@ trashed=false and 'c' in parents -

    --drive-root-folder-id

    -

    ID of the root folder. Leave blank normally.

    -

    Fill in to access "Computers" folders (see docs), or for rclone to use a non root folder as its starting point.

    -

    Properties:

    -

    --drive-service-account-file

    Service Account Credentials JSON file path.

    Leave blank normally. Needed only if you want to use SA instead of interactive login.

    @@ -17135,7 +19193,7 @@ trashed=false and 'c' in parents
  • Default: false
  • Advanced options

    -

    Here are the advanced options specific to drive (Google Drive).

    +

    Here are the Advanced options specific to drive (Google Drive).

    --drive-token

    OAuth Access Token as a JSON blob.

    Properties:

    @@ -17165,6 +19223,16 @@ trashed=false and 'c' in parents
  • Type: string
  • Required: false
  • +

    --drive-root-folder-id

    +

    ID of the root folder. Leave blank normally.

    +

    Fill in to access "Computers" folders (see docs), or for rclone to use a non root folder as its starting point.

    +

    Properties:

    +

    --drive-service-account-credentials

    Service Account Credentials JSON blob.

    Leave blank normally. Needed only if you want to use SA instead of interactive login.

    @@ -17490,6 +19558,21 @@ trashed=false and 'c' in parents
  • Type: bool
  • Default: false
  • +

    --drive-resource-key

    +

    Resource key for accessing a link-shared file.

    +

    If you need to access files shared with a link like this

    +
    https://drive.google.com/drive/folders/XXX?resourcekey=YYY&usp=sharing
    +

    Then you will need to use the first part "XXX" as the "root_folder_id" and the second part "YYY" as the "resource_key", otherwise you will get 404 not found errors when trying to access the directory.

    +

    See: https://developers.google.com/drive/api/guides/resource-keys

    +

    This resource key requirement only applies to a subset of old files.

    +

    Note also that opening the folder once in the web interface (with the user you've authenticated rclone with) seems to be enough so that the resource key is not needed.

    +

    Properties:

    +
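Putting the two options together, a sketch for listing such a folder; XXX and YYY stand for the two parts of the link above.

```shell
# List a link-shared folder using its folder ID and resource key.
rclone lsf drive: --drive-root-folder-id XXX --drive-resource-key YYY
```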

    --drive-encoding

    The encoding for the backend.

    See the encoding section in the overview for more info.

    @@ -17505,7 +19588,7 @@ trashed=false and 'c' in parents

    Run them with

    rclone backend COMMAND remote:

    The help below will explain what arguments each command takes.

    -

    See the "rclone backend" command for more info on how to pass options and arguments.

    +

    See the backend command for more info on how to pass options and arguments.

    These can be run on a running backend using the rc command backend/command.

    get

    Get command for fetching the drive config parameters

    @@ -17563,15 +19646,19 @@ rclone backend shortcut drive: source_item -o target=drive2: destination_shortcu "name": "Test Drive" } ] -

    With the -o config parameter it will output the list in a format suitable for adding to a config file to make aliases for all the drives found.

    +

    With the -o config parameter it will output the list in a format suitable for adding to a config file to make aliases for all the drives found and a combined drive.

    [My Drive]
     type = alias
     remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
     
     [Test Drive]
     type = alias
    -remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
    -

    Adding this to the rclone config file will cause those team drives to be accessible with the aliases shown. This may require manual editing of the names.

    +remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=: + +[AllDrives] +type = combine +remote = "My Drive=My Drive:" "Test Drive=Test Drive:" +

    Adding this to the rclone config file will cause those team drives to be accessible with the aliases shown. Any illegal characters will be substituted with "_" and duplicate names will have numbers suffixed. It will also add a remote called AllDrives which shows all the shared drives combined into one directory tree.

    untrash

    Untrash files and directories

    rclone backend untrash remote: [options] [<arguments>+]
    @@ -17597,11 +19684,17 @@ rclone backend copyid drive: ID1 path1 ID2 path2

    The path should end with a / to indicate copy the file as named to this directory. If it doesn't end with a / then the last path component will be used as the file name.

    If the destination is a drive backend then server-side copying will be attempted if possible.

    Use the -i flag to see what would be copied before copying.

    +

    exportformats

    +

    Dump the export formats for debug purposes

    +
    rclone backend exportformats remote: [options] [<arguments>+]
    +

    importformats

    +

    Dump the import formats for debug purposes

    +
    rclone backend importformats remote: [options] [<arguments>+]

    Limitations

    Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring only about 2 files per second. Individual files may be transferred much faster at 100s of MiB/s but lots of small files can take a long time.

    Server side copies are also subject to a separate rate limit. If you see User rate limit exceeded errors, wait at least 24 hours and retry. You can disable server-side copies with --disable copy to download and upload the files if you prefer.

    Limitations of Google Docs

    -

    Google docs will appear as size -1 in rclone ls and as size 0 in anything which uses the VFS layer, e.g. rclone mount, rclone serve.

    +

    Google docs will appear as size -1 in rclone ls, rclone ncdu etc, and as size 0 in anything which uses the VFS layer, e.g. rclone mount and rclone serve. When calculating directory totals, e.g. in rclone size and rclone ncdu, they will be counted in as empty files.

    This is because rclone can't find out the size of the Google docs without downloading them.

    Google docs will transfer correctly with rclone sync, rclone copy etc as rclone knows to ignore the size when doing the transfer.
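For example, to fix the exported format rather than rely on the default, the --drive-export-formats option can be used; remote:docs and the destination are placeholders.

```shell
# Export Google Docs as PDF while copying them down.
rclone copy remote:docs /local/docs --drive-export-formats pdf
```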

    However an unfortunate consequence of this is that you may not be able to download Google docs using rclone mount. If it doesn't work you will get a 0 sized file. If you try again the doc may gain its correct size and be downloadable. Whether it will work or not depends on the application accessing the mount and the OS you are running - experiment to find out if it does work for you!

    @@ -17623,16 +19716,15 @@ rclone backend copyid drive: ID1 path1 ID2 path2
  • Select a project or create a new project.

  • Under "ENABLE APIS AND SERVICES" search for "Drive", and enable the "Google Drive API".

  • Click "Credentials" in the left-side panel (not "Create credentials", which opens the wizard), then "Create credentials"

  • -
  • If you already configured an "Oauth Consent Screen", then skip to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button (near the top right corner of the right panel), then select "External" and click on "CREATE"; on the next screen, enter an "Application name" ("rclone" is OK); enter "User Support Email" (your own email is OK); enter "Developer Contact Email" (your own email is OK); then click on "Save" (all other data is optional). Click again on "Credentials" on the left panel to go back to the "Credentials" screen.

  • - -

    (PS: if you are a GSuite user, you could also select "Internal" instead of "External" above, but this has not been tested/documented so far).

    -
      +
    1. If you already configured an "Oauth Consent Screen", then skip to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button (near the top right corner of the right panel), then select "External" and click on "CREATE"; on the next screen, enter an "Application name" ("rclone" is OK); enter "User Support Email" (your own email is OK); enter "Developer Contact Email" (your own email is OK); then click on "Save" (all other data is optional). Click again on "Credentials" on the left panel to go back to the "Credentials" screen.

      +

      (PS: if you are a GSuite user, you could also select "Internal" instead of "External" above, but this will restrict API use to Google Workspace users in your organisation).

    2. Click on the "+ CREATE CREDENTIALS" button at the top of the screen, then select "OAuth client ID".

    3. Choose an application type of "Desktop app" and click "Create". (the default name is fine)

    4. -
    5. It will show you a client ID and client secret. Make a note of these.

    6. +
    7. It will show you a client ID and client secret. Make a note of these.

      +

      (If you selected "External" at Step 5 continue to "Publish App" in the Steps 9 and 10. If you chose "Internal" you don't need to publish and can skip straight to Step 11.)

    8. Go to "Oauth consent screen" and press "Publish App"

    9. -
    10. Provide the noted client ID and client secret to rclone.

    11. Click "OAuth consent screen", then click "PUBLISH APP" button and confirm, or add your account under "Test users".

    12. +
    13. Provide the noted client ID and client secret to rclone.

    Be aware that, due to the "enhanced security" recently introduced by Google, you are theoretically expected to "submit your app for verification" and then wait a few weeks(!) for their response; in practice, you can go right ahead and use the client ID and client secret with rclone, the only issue will be a very scary confirmation screen shown when you connect via your browser for rclone to be able to get its token-id (but as this only happens during the remote configuration, it's not such a big deal).

    (Thanks to @balazer on github for these instructions.)

    @@ -17640,7 +19732,7 @@ rclone backend copyid drive: ID1 path1 ID2 path2

    Google Photos

    The rclone backend for Google Photos is a specialized backend for transferring photos and videos to and from Google Photos.

    NB The Google Photos API which rclone uses has quite a few limitations, so please read the limitations section carefully to make sure it is suitable for your use.

    -

    Configuration

    +

    Configuration

    The initial setup for google cloud storage involves getting a token from Google Photos which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -17793,8 +19885,8 @@ y/e/d> y

    This means that you can use the album path pretty much like a normal filesystem and it is a good target for repeated syncing.

    The shared-album directory shows albums shared with you or by you. This is similar to the Sharing tab in the Google Photos web interface.

    -

    Standard options

    -

    Here are the standard options specific to google photos (Google Photos).

    +

    Standard options

    +

    Here are the Standard options specific to google photos (Google Photos).

    --gphotos-client-id

    OAuth Client Id.

    Leave blank normally.

    @@ -17826,7 +19918,7 @@ y/e/d> y
  • Default: false
  • Advanced options

    -

    Here are the advanced options specific to google photos (Google Photos).

    +

    Here are the Advanced options specific to google photos (Google Photos).

    --gphotos-token

    OAuth Access Token as a JSON blob.

    Properties:

    @@ -18007,8 +20099,8 @@ rclone backend drop Hasher:
    rclone backend stickyimport hasher:path/to/data sha1 remote:/path/to/sum.sha1

    stickyimport is similar to import but works much faster because it does not need to stat existing files and skips initial tree walk. Instead of binding cache entries to file fingerprints it creates sticky entries bound to the file name alone ignoring size, modification time etc. Such hash entries can be replaced only by purge, delete, backend drop or by full re-read/re-write of the files.

    Configuration reference

    -

    Standard options

    -

    Here are the standard options specific to hasher (Better checksums for other remotes).

    +

    Standard options

    +

    Here are the Standard options specific to hasher (Better checksums for other remotes).

    --hasher-remote

    Remote to cache checksums for (e.g. myRemote:path).

    Properties:

    @@ -18037,7 +20129,7 @@ rclone backend drop Hasher:
  • Default: off
  • Advanced options

    -

    Here are the advanced options specific to hasher (Better checksums for other remotes).

    +

    Here are the Advanced options specific to hasher (Better checksums for other remotes).

    --hasher-auto-size

    Auto-update checksum for files smaller than this size (disabled by default).

    Properties:

    @@ -18047,12 +20139,15 @@ rclone backend drop Hasher:
  • Type: SizeSuffix
  • Default: 0
  • +

    Metadata

    +

    Any metadata supported by the underlying remote is read and written.

    +

    See the metadata docs for more info.

    Backend commands

    Here are the commands specific to the hasher backend.

    Run them with

    rclone backend COMMAND remote:

    The help below will explain what arguments each command takes.

    -

    See the "rclone backend" command for more info on how to pass options and arguments.

    +

    See the backend command for more info on how to pass options and arguments.

    These can be run on a running backend using the rc command backend/command.

    drop

    Drop cache

    @@ -18101,7 +20196,7 @@ rclone backend drop Hasher:

    HDFS

    HDFS is a distributed file-system, part of the Apache Hadoop framework.

    Paths are specified as remote: or remote:path/to/dir.

    -

    Configuration

    +

    Configuration

    Here is an example of how to make a remote called remote. First run:

     rclone config

    This will guide you through an interactive setup process:

    @@ -18209,8 +20304,8 @@ username = root

    Invalid UTF-8 bytes will also be replaced.

    -

    Standard options

    -

    Here are the standard options specific to hdfs (Hadoop distributed file system).

    +

    Standard options

    +

    Here are the Standard options specific to hdfs (Hadoop distributed file system).

    --hdfs-namenode

    Hadoop name node and port.

    E.g. "namenode:8020" to connect to host namenode at port 8020.

    @@ -18238,7 +20333,7 @@ username = root

    Advanced options

    -

    Here are the advanced options specific to hdfs (Hadoop distributed file system).

    +

    Here are the Advanced options specific to hdfs (Hadoop distributed file system).

    --hdfs-service-principal-name

    Kerberos service principal name for the namenode.

    Enables KERBEROS authentication. Specifies the Service Principal Name (SERVICE/FQDN) for the namenode. E.g. "hdfs/namenode.hadoop.docker" for namenode running as service 'hdfs' with FQDN 'namenode.hadoop.docker'.

    @@ -18281,13 +20376,309 @@ username = root
  • No server-side Move or DirMove.
  • Checksums not implemented.
  • +

    HiDrive

    +

    Paths are specified as remote:path

    +

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    +

    The initial setup for hidrive involves getting a token from HiDrive which you need to do in your browser. rclone config walks you through it.

    +

    Configuration

    +

    Here is an example of how to make a remote called remote. First run:

    +
     rclone config
    +

    This will guide you through an interactive setup process:

    +
    No remotes found - make a new one
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +name> remote
    +Type of storage to configure.
    +Choose a number from below, or type in your own value
    +[snip]
    +XX / HiDrive
    +   \ "hidrive"
    +[snip]
    +Storage> hidrive
    +OAuth Client Id - Leave blank normally.
    +client_id>
    +OAuth Client Secret - Leave blank normally.
    +client_secret>
    +Access permissions that rclone should use when requesting access from HiDrive.
    +Leave blank normally.
    +scope_access>
    +Edit advanced config?
    +y/n> n
    +Use auto config?
    +y/n> y
    +If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=xxxxxxxxxxxxxxxxxxxxxx
    +Log in and authorize rclone for access
    +Waiting for code...
    +Got code
    +--------------------
    +[remote]
    +type = hidrive
    +token = {"access_token":"xxxxxxxxxxxxxxxxxxxx","token_type":"Bearer","refresh_token":"xxxxxxxxxxxxxxxxxxxxxxx","expiry":"xxxxxxxxxxxxxxxxxxxxxxx"}
    +--------------------
    +y) Yes this is OK (default)
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    You should be aware that OAuth tokens can be used to access your account and hence should not be shared with other people. See the section below for more information.

    +

    See the remote setup docs for how to set it up on a machine with no Internet browser available.

    +

    Note that rclone runs a webserver on your local machine to collect the token as returned from HiDrive. This only runs from the moment it opens your browser to the moment you get back the verification code. The webserver runs on http://127.0.0.1:53682/. If local port 53682 is protected by a firewall you may need to temporarily unblock the firewall to complete authorization.

    +

    Once configured you can then use rclone like this,

    +

    List directories in top level of your HiDrive root folder

    +
    rclone lsd remote:
    +

    List all the files in your HiDrive filesystem

    +
    rclone ls remote:
    +

    To copy a local directory to a HiDrive directory called backup

    +
    rclone copy /home/source remote:backup
    +

    Keeping your tokens safe

    +

    Any OAuth-tokens will be stored by rclone in the remote's configuration file as unencrypted text. Anyone can use a valid refresh-token to access your HiDrive filesystem without knowing your password. Therefore you should make sure no one else can access your configuration.

    +

    It is possible to encrypt rclone's configuration file. You can find information on securing your configuration file by viewing the configuration encryption docs.

    +

    Invalid refresh token

    +

    As can be verified here, each refresh_token (for Native Applications) is valid for 60 days. If used to access HiDrive, its validity will be automatically extended.

    +

    This means that if you

    + +

    then rclone will return an error message indicating that the refresh token is invalid or expired.

    +

    To fix this you will need to authorize rclone to access your HiDrive account again.

    +

    Using

    +
    rclone config reconnect remote:
    +

    the process is very similar to the initial setup described above.

    +

    Modified time and hashes

    +

    HiDrive allows modification times to be set on objects accurate to 1 second.

    +

    HiDrive supports its own hash type, which is used to verify the integrity of file contents after successful transfers.

    +

    Restricted filename characters

    +

    HiDrive cannot store files or folders that include / (0x2F) or null-bytes (0x00) in their name. Any other characters can be used in the names of files or folders. Additionally, files or folders cannot be named either of the following: . or ..

    +

    rclone will therefore automatically replace these characters when files or folders with such names are stored or accessed.

    +

    You can read about how this filename encoding works in general here.

    +

    Keep in mind that HiDrive only supports file or folder names with a length of 255 characters or less.

    +

    Transfers

    +

    HiDrive limits file sizes per single request to a maximum of 2 GiB. To allow storage of larger files and allow for better upload performance, the hidrive backend will use a chunked transfer for files larger than 96 MiB. Rclone will upload multiple parts/chunks of the file at the same time. Chunks in the process of being uploaded are buffered in memory, so you may want to restrict this behaviour on systems with limited resources.

    +

    You can customize this behaviour using the following options:

    + +

    See the below section about configuration options for more details.
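
    A hedged sketch of tuning these options on the command line (the values shown are illustrative, not recommendations):

    ```
    rclone copy /home/source remote:backup \
        --hidrive-upload-cutoff 96M \
        --hidrive-chunk-size 48M \
        --hidrive-upload-concurrency 4
    ```

    Lower values reduce memory use; higher values can improve throughput on fast links.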

    +

    Root folder

    +

    You can set the root folder for rclone. This is the directory that rclone considers to be the root of your HiDrive.

    +

    Usually, you will leave this blank, and rclone will use the root of the account.

    +

    However, you can set this to restrict rclone to a specific folder hierarchy.

    +

    This works by prepending the contents of the root_prefix option to any paths accessed by rclone. For example, the following two ways to access the home directory are equivalent:

    +
    rclone lsd --hidrive-root-prefix="/users/test/" remote:path
    +
    +rclone lsd remote:/users/test/path
    +

    See the below section about configuration options for more details.
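
    Equivalently, root_prefix can be set in the remote's configuration file so it applies to every command (the /users/test path is just an example):

    ```
    [remote]
    type = hidrive
    root_prefix = /users/test
    ```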

    +

    Directory member count

    +

    By default, rclone will know the number of directory members contained in a directory. For example, rclone lsd uses this information.

    +

    The acquisition of this information will result in additional time costs for HiDrive's API. When dealing with large directory structures, it may be desirable to circumvent this time cost, especially when this information is not explicitly needed. For this, the disable_fetching_member_count option can be used.

    +

    See the below section about configuration options for more details.
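
    For example, to list a large directory tree without fetching member counts (assuming a configured remote named remote):

    ```
    rclone lsd --hidrive-disable-fetching-member-count remote:
    ```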

    +

    Standard options

    +

    Here are the Standard options specific to hidrive (HiDrive).

    +

    --hidrive-client-id

    +

    OAuth Client Id.

    +

    Leave blank normally.

    +

    Properties:

    + +

    --hidrive-client-secret

    +

    OAuth Client Secret.

    +

    Leave blank normally.

    +

    Properties:

    + +

    --hidrive-scope-access

    +

    Access permissions that rclone should use when requesting access from HiDrive.

    +

    Properties:

    + +

    Advanced options

    +

    Here are the Advanced options specific to hidrive (HiDrive).

    +

    --hidrive-token

    +

    OAuth Access Token as a JSON blob.

    +

    Properties:

    + +

    --hidrive-auth-url

    +

    Auth server URL.

    +

    Leave blank to use the provider defaults.

    +

    Properties:

    + +

    --hidrive-token-url

    +

    Token server URL.

    +

    Leave blank to use the provider defaults.

    +

    Properties:

    + +

    --hidrive-scope-role

    +

    User-level that rclone should use when requesting access from HiDrive.

    +

    Properties:

    + +

    --hidrive-root-prefix

    +

    The root/parent folder for all paths.

    +

    Fill in to use the specified folder as the parent for all paths given to the remote. This way rclone can use any folder as its starting point.

    +

    Properties:

    + +

    --hidrive-endpoint

    +

    Endpoint for the service.

    +

    This is the URL that API-calls will be made to.

    +

    Properties:

    + +

    --hidrive-disable-fetching-member-count

    +

    Do not fetch number of objects in directories unless it is absolutely necessary.

    +

    Requests may be faster if the number of objects in subdirectories is not fetched.

    +

    Properties:

    + +

    --hidrive-chunk-size

    +

    Chunksize for chunked uploads.

    +

    Any files larger than the configured cutoff (or files of unknown size) will be uploaded in chunks of this size.

    +

    The upper limit for this is 2147483647 bytes (just under 2 GiB). That is the maximum number of bytes a single upload operation will support. Setting this above the upper limit or to a negative value will cause uploads to fail.

    +

    Setting this to larger values may increase the upload speed at the cost of using more memory. It can be set to smaller values to save on memory.

    +

    Properties:

    + +

    --hidrive-upload-cutoff

    +

    Cutoff/Threshold for chunked uploads.

    +

    Any files larger than this will be uploaded in chunks of the configured chunksize.

    +

    The upper limit for this is 2147483647 bytes (just under 2 GiB). That is the maximum number of bytes a single upload operation will support. Setting this above the upper limit will cause uploads to fail.

    +

    Properties:

    + +

    --hidrive-upload-concurrency

    +

    Concurrency for chunked uploads.

    +

    This is the upper limit for how many transfers for the same file are running concurrently. Setting this to a value smaller than 1 will cause uploads to deadlock.

    +

    If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.
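
    Since in-flight chunks are buffered in memory, a rough, unofficial worst-case estimate of buffer usage is chunk size times upload concurrency times the number of simultaneous transfers. A sketch with hypothetical values:

    ```shell
    # Unofficial worst-case buffer estimate for chunked uploads:
    # each in-flight chunk is buffered in memory.
    CHUNK_MIB=48     # hypothetical --hidrive-chunk-size, in MiB
    CONCURRENCY=4    # hypothetical --hidrive-upload-concurrency
    TRANSFERS=4      # hypothetical --transfers
    echo "$(( CHUNK_MIB * CONCURRENCY * TRANSFERS )) MiB"   # prints: 768 MiB
    ```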

    +

    Properties:

    + +

    --hidrive-encoding

    +

    The encoding for the backend.

    +

    See the encoding section in the overview for more info.

    +

    Properties:

    + +

    Limitations

    + +

    HiDrive is able to store symbolic links (symlinks) by design, for example, when unpacked from a zip archive.

    +

    There is no direct mechanism to manage native symlinks in remotes, so this implementation ignores any native symlinks present in the remote. rclone will not be able to access or show any symlinks stored in the hidrive remote. This means symlinks cannot be individually removed, copied, or moved, except when removing, copying, or moving the parent folder.

    +

    This does not affect the .rclonelink-files that rclone uses to encode and store symbolic links.

    +

    Sparse files

    +

    It is possible to store sparse files in HiDrive.

    +

    Note that copying a sparse file will expand the holes into null-byte (0x00) regions that will then consume disk space. Likewise, when downloading a sparse file, the resulting file will have null-byte regions in the place of file holes.

    HTTP

    The HTTP remote is a read-only remote for reading files from a webserver. The webserver should provide file listings which rclone will read and turn into a remote. This has been tested with common webservers such as Apache/Nginx/Caddy and will likely work with file listings from most web servers. (If it doesn't then please file an issue, or send a pull request!)

    Paths are specified as remote: or remote:path.

    The remote: represents the configured url, and any path following it will be resolved relative to this url, according to the URL standard. This means with remote url https://beta.rclone.org/branch and path fix, the resolved URL will be https://beta.rclone.org/branch/fix, while with path /fix the resolved URL will be https://beta.rclone.org/fix as the absolute path is resolved from the root of the domain.

    If the path following the remote: ends with / it will be assumed to point to a directory. If the path does not end with /, then a HEAD request is sent and the response used to decide if it is treated as a file or a directory (run with -vv to see details). When --http-no-head is specified, a path without ending / is always assumed to be a file. If rclone incorrectly assumes the path is a file, the solution is to specify the path with ending /. When you know the path is a directory, ending it with / is always better as it avoids the initial HEAD request.
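
    For instance, listing a known directory with a trailing / avoids the extra HEAD request, and --http-no-head skips HEAD requests entirely (the path shown is illustrative):

    ```
    rclone lsd remote:files/
    rclone ls --http-no-head remote:files/
    ```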

    To just download a single file it is easier to use copyurl.
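
    A hedged example of copyurl (the URL and destination are placeholders):

    ```
    rclone copyurl https://example.com/file.txt ./file.txt
    ```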

    +

    Configuration

    Here is an example of how to make a remote called remote. First run:

     rclone config

    This will guide you through an interactive setup process:

    @@ -18300,7 +20691,7 @@ name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] -XX / http Connection +XX / HTTP \ "http" [snip] Storage> http @@ -18350,10 +20741,10 @@ e/n/d/r/c/s/q> q
    rclone lsd --http-url https://beta.rclone.org :http:

    or:

    rclone lsd :http,url='https://beta.rclone.org':
    +

    Standard options

    +

    Here are the Standard options specific to http (HTTP).

    --http-url

    +

    URL of HTTP host to connect to.

    E.g. "https://example.com", or "https://user:pass@example.com" to use a username and password.

    Properties:

    +

    Advanced options

    +

    Here are the Advanced options specific to http (HTTP).

    --http-headers

    Set HTTP headers for all transactions.

    Use this to set additional HTTP headers for all transactions.

    @@ -18405,13 +20796,13 @@ e/n/d/r/c/s/q> q
  • Type: bool
  • Default: false
    +

    Limitations

    rclone about is not supported by the HTTP backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    +

    See List of backends that do not support rclone about and rclone about

    Hubic

    Paths are specified as remote:path

    Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:container/path/to/dir.

    +

    Configuration

    The initial setup for Hubic involves getting a token from Hubic which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -18469,8 +20860,8 @@ y/e/d> y

    The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.

    This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.

    Note that Hubic wraps the Swift backend, so most of the properties are the same.

    +

    Standard options

    +

    Here are the Standard options specific to hubic (Hubic).

    --hubic-client-id

    OAuth Client Id.

    Leave blank normally.

    @@ -18491,8 +20882,8 @@ y/e/d> y
  • Type: string
  • Required: false
    +

    Advanced options

    +

    Here are the Advanced options specific to hubic (Hubic).

    --hubic-token

    OAuth Access Token as a JSON blob.

    Properties:

    @@ -18554,9 +20945,288 @@ y/e/d> y
  • Type: MultiEncoder
  • Default: Slash,InvalidUtf8
    +

    Limitations

    This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API.

    The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.

    +

    Internet Archive

    +

    The Internet Archive backend utilizes Items on archive.org.

    +

    Refer to IAS3 API documentation for the API this backend uses.

    +

    Paths are specified as remote:item (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:item/path/to/dir.

    +

    Once you have made a remote (see the provider specific section above) you can use it like this:

    +

    Unlike S3, listing all the items you have uploaded is not supported.

    +

    Make a new item

    +
    rclone mkdir remote:item
    +

    List the contents of an item

    +
    rclone ls remote:item
    +

    Sync /home/local/directory to the remote item, deleting any excess files in the item.

    +
    rclone sync -i /home/local/directory remote:item
    +

    Notes

    +

    Because of Internet Archive's architecture, it enqueues write operations (and extra post-processing) in a per-item queue. You can check an item's queue at https://catalogd.archive.org/history/item-name-here . Because of that, uploads and deletes will not show up immediately and take some time to become available. The per-item queue feeds into another queue, the Item Deriver Queue. You can check the status of the Item Deriver Queue here. This queue has a limit, and it may block you from uploading, or even deleting. For better behavior, you should avoid uploading a lot of small files.

    +

    You can optionally wait for the server's processing to finish by setting a non-zero value for the wait_archive key. By making rclone wait, it can do a normal file comparison. Make sure to set a large enough value (e.g. 30m0s for smaller files) as it can take a long time depending on the server's queue.
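
    For example (the 30m0s value mirrors the suggestion above; the item name is a placeholder):

    ```
    rclone copy --internetarchive-wait-archive 30m0s /home/local/directory remote:item
    ```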

    +

    About metadata

    +

    This backend supports setting, updating and reading metadata of each file. The metadata will appear as file metadata on Internet Archive. However, some fields are reserved by both Internet Archive and rclone.

    +

    The following are reserved by Internet Archive: name, source, size, md5, crc32, sha1, format, old_version, viruscheck.

    +

    Trying to set values for these keys is ignored with a warning. Setting mtime is the only exception: doing so behaves identically to setting ModTime.

    +

    rclone reserves all keys starting with rclone-. Setting values for these keys will produce warnings, but the values are set as requested.

    +

    If there are multiple values for a key, only the first one is returned. This is a limitation of rclone, which supports only one value per key. It can be triggered by a server-side copy.

    +

    Reading metadata will also return custom keys (those that are neither standard nor reserved).
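
    As a sketch, metadata can be read with lsjson's --metadata flag and set at upload time with --metadata-set (the title key and file names are arbitrary examples):

    ```
    rclone lsjson --metadata remote:item/file.txt
    rclone copy --metadata --metadata-set "title=An example" file.txt remote:item
    ```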

    +

    Configuration

    +

    Here is an example of making an internetarchive configuration. Most of it applies to the other providers as well; any differences are described below.

    +

    First run

    +
    rclone config
    +

    This will guide you through an interactive setup process.

    +
    No remotes found, make a new one?
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +name> remote
    +Option Storage.
    +Type of storage to configure.
    +Choose a number from below, or type in your own value.
    +XX / InternetArchive Items
    +   \ (internetarchive)
    +Storage> internetarchive
    +Option access_key_id.
    +IAS3 Access Key.
    +Leave blank for anonymous access.
    +You can find one here: https://archive.org/account/s3.php
    +Enter a value. Press Enter to leave empty.
    +access_key_id> XXXX
    +Option secret_access_key.
    +IAS3 Secret Key (password).
    +Leave blank for anonymous access.
    +Enter a value. Press Enter to leave empty.
    +secret_access_key> XXXX
    +Edit advanced config?
    +y) Yes
    +n) No (default)
    +y/n> y
    +Option endpoint.
    +IAS3 Endpoint.
    +Leave blank for default value.
    +Enter a string value. Press Enter for the default (https://s3.us.archive.org).
    +endpoint> 
    +Option front_endpoint.
    +Host of InternetArchive Frontend.
    +Leave blank for default value.
    +Enter a string value. Press Enter for the default (https://archive.org).
    +front_endpoint> 
    +Option disable_checksum.
    +Don't store MD5 checksum with object metadata.
    +Normally rclone will calculate the MD5 checksum of the input before
    +uploading it so it can ask the server to check the object against checksum.
    +This is great for data integrity checking but can cause long delays for
    +large files to start uploading.
    +Enter a boolean value (true or false). Press Enter for the default (true).
    +disable_checksum> true
    +Option encoding.
    +The encoding for the backend.
    +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
    +Enter a encoder.MultiEncoder value. Press Enter for the default (Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot).
    +encoding> 
    +Edit advanced config?
    +y) Yes
    +n) No (default)
    +y/n> n
    +--------------------
    +[remote]
    +type = internetarchive
    +access_key_id = XXXX
    +secret_access_key = XXXX
    +--------------------
    +y) Yes this is OK (default)
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    Standard options

    +

    Here are the Standard options specific to internetarchive (Internet Archive).

    +

    --internetarchive-access-key-id

    +

    IAS3 Access Key.

    +

    Leave blank for anonymous access. You can find one here: https://archive.org/account/s3.php

    +

    Properties:

    + +

    --internetarchive-secret-access-key

    +

    IAS3 Secret Key (password).

    +

    Leave blank for anonymous access.

    +

    Properties:

    + +

    Advanced options

    +

    Here are the Advanced options specific to internetarchive (Internet Archive).

    +

    --internetarchive-endpoint

    +

    IAS3 Endpoint.

    +

    Leave blank for default value.

    +

    Properties:

    + +

    --internetarchive-front-endpoint

    +

    Host of InternetArchive Frontend.

    +

    Leave blank for default value.

    +

    Properties:

    + +

    --internetarchive-disable-checksum

    +

    Don't ask the server to test against MD5 checksum calculated by rclone. Normally rclone will calculate the MD5 checksum of the input before uploading it so it can ask the server to check the object against checksum. This is great for data integrity checking but can cause long delays for large files to start uploading.

    +

    Properties:

    + +

    --internetarchive-wait-archive

    +

    Timeout for waiting for the server's processing tasks (specifically archive and book_op) to finish. Only enable this if you need to be guaranteed that write operations are reflected. Set to 0 to disable waiting. No error is thrown in case of timeout.

    +

    Properties:

    + +

    --internetarchive-encoding

    +

    The encoding for the backend.

    +

    See the encoding section in the overview for more info.

    +

    Properties:

    + +

    Metadata

    +

    Metadata fields provided by Internet Archive. If there are multiple values for a key, only the first one is returned. This is a limitation of rclone, which supports only one value per key.

    +

    The owner is able to add custom keys. The metadata feature retrieves all keys, including custom ones.

    +

    Here are the possible system metadata items for the internetarchive backend.

    | Name | Help | Type | Example | Read Only |
    |------|------|------|---------|-----------|
    | crc32 | CRC32 calculated by Internet Archive | string | 01234567 | N |
    | format | Name of format identified by Internet Archive | string | Comma-Separated Values | N |
    | md5 | MD5 hash calculated by Internet Archive | string | 01234567012345670123456701234567 | N |
    | mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
    | name | Full file path, without the bucket part | filename | backend/internetarchive/internetarchive.go | N |
    | old_version | Whether the file was replaced and moved by keep-old-version flag | boolean | true | N |
    | rclone-ia-mtime | Time of last modification, managed by Internet Archive | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
    | rclone-mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
    | rclone-update-track | Random value used by Rclone for tracking changes inside Internet Archive | string | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa | N |
    | sha1 | SHA1 hash calculated by Internet Archive | string | 0123456701234567012345670123456701234567 | N |
    | size | File size in bytes | decimal number | 123456 | N |
    | source | The source of the file | string | original | N |
    | viruscheck | The last time viruscheck process was run for the file (?) | unixtime | 1654191352 | N |

    See the metadata docs for more info.

    Jottacloud

    Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters in Norway. In addition to the official service at jottacloud.com, it also provides white-label solutions to different companies, such as:
    • Telia
      • Telia Cloud (cloud.telia.se)
      • Telia Sky (sky.telia.no)
    • Tele2
      • Tele2 Cloud (mittcloud.tele2.se)
    • Elkjøp (with subsidiaries):
      • Elkjøp Cloud (cloud.elkjop.no)
      • Elgiganten Sweden (cloud.elgiganten.se)
      • Elgiganten Denmark (cloud.elgiganten.dk)
      • Giganti Cloud (cloud.gigantti.fi)
      • ELKO Cloud (cloud.elko.is)

    Most of the white-label versions are supported by this backend, although may require different authentication setup - described below.

    @@ -18572,7 +21242,7 @@ y/e/d> y

    Similar to other whitelabel versions, Telia Cloud doesn't offer the option of creating a CLI token, and it additionally uses a separate authentication flow where the username is generated internally. To set up rclone to use Telia Cloud, choose Telia Cloud authentication in the setup. The rest of the setup is identical to the default setup.

    Tele2 Cloud authentication

    As the Tele2-Com Hem merger was completed, this authentication can be used by former Com Hem Cloud and Tele2 Cloud customers, as no support for creating a CLI token exists, and it additionally uses a separate authentication flow where the username is generated internally. To set up rclone to use Tele2 Cloud, choose Tele2 Cloud authentication in the setup. The rest of the setup is identical to the default setup.

    +

    Configuration

    Here is an example of how to make a remote called remote with the default setup. First run:

    rclone config

    This will guide you through an interactive setup process:

    @@ -18582,56 +21252,78 @@ s) Set configuration password q) Quit config n/s/q> n name> remote +Option Storage. Type of storage to configure. -Enter a string value. Press Enter for the default (""). -Choose a number from below, or type in your own value +Choose a number from below, or type in your own value. [snip] XX / Jottacloud - \ "jottacloud" + \ (jottacloud) [snip] Storage> jottacloud -** See help for jottacloud backend at: https://rclone.org/jottacloud/ ** - -Edit advanced config? (y/n) -y) Yes -n) No -y/n> n -Remote config -Use legacy authentication?. -This is only required for certain whitelabel versions of Jottacloud and not recommended for normal users. +Edit advanced config? y) Yes n) No (default) y/n> n - -Generate a personal login token here: https://www.jottacloud.com/web/secure +Option config_type. +Select authentication type. +Choose a number from below, or type in an existing string value. +Press Enter for the default (standard). + / Standard authentication. + 1 | Use this if you're a normal Jottacloud user. + \ (standard) + / Legacy authentication. + 2 | This is only required for certain whitelabel versions of Jottacloud and not recommended for normal users. + \ (legacy) + / Telia Cloud authentication. + 3 | Use this if you are using Telia Cloud. + \ (telia) + / Tele2 Cloud authentication. + 4 | Use this if you are using Tele2 Cloud. + \ (tele2) +config_type> 1 +Personal login token. +Generate here: https://www.jottacloud.com/web/secure Login Token> <your token here> - -Do you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client? - +Use a non-standard device/mountpoint? +Choosing no, the default, will let you access the storage used for the archive +section of the official Jottacloud client. If you instead want to access the +sync or the backup section, for example, you must choose yes. y) Yes -n) No +n) No (default) y/n> y -Please select the device to use. 
Normally this will be Jotta -Choose a number from below, or type in an existing value +Option config_device. +The device to use. In standard setup the built-in Jotta device is used, +which contains predefined mountpoints for archive, sync etc. All other devices +are treated as backup devices by the official Jottacloud client. You may create +a new by entering a unique name. +Choose a number from below, or type in your own string value. +Press Enter for the default (DESKTOP-3H31129). 1 > DESKTOP-3H31129 2 > Jotta -Devices> 2 -Please select the mountpoint to user. Normally this will be Archive -Choose a number from below, or type in an existing value +config_device> 2 +Option config_mountpoint. +The mountpoint to use for the built-in device Jotta. +The standard setup is to use the Archive mountpoint. Most other mountpoints +have very limited support in rclone and should generally be avoided. +Choose a number from below, or type in an existing string value. +Press Enter for the default (Archive). 1 > Archive - 2 > Links + 2 > Shared 3 > Sync - -Mountpoints> 1 +config_mountpoint> 1 -------------------- -[jotta] +[remote] type = jottacloud +configVersion = 1 +client_id = jottacli +client_secret = +tokenURL = https://id.jottacloud.com/auth/realms/jottacloud/protocol/openid-connect/token token = {........} +username = 2940e57271a93d987d6f8a21 device = Jotta mountpoint = Archive -configVersion = 1 -------------------- -y) Yes this is OK +y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y @@ -18643,18 +21335,19 @@ y/e/d> y

    To copy a local directory to a Jottacloud directory called backup

    rclone copy /home/source remote:backup

    Devices and Mountpoints

    +

    The official Jottacloud client registers a device for each computer you install it on, and shows them in the backup section of the user interface. For each folder you select for backup it will create a mountpoint within this device. A built-in device called Jotta is special, and contains mountpoints Archive, Sync and some others, used for corresponding features in official clients.

    +

    With rclone you'll want to use the standard Jotta/Archive device/mountpoint in most cases. However, you may for example want to access files from the sync or backup functionality provided by the official clients, and rclone therefore provides the option to select other devices and mountpoints during config.

    +

    You are allowed to create new devices and mountpoints. All devices except the built-in Jotta device are treated as backup devices by official Jottacloud clients, and the mountpoints on them are individual backup sets.

    +

    With the built-in Jotta device, only existing, built-in, mountpoints can be selected. In addition to the mentioned Archive and Sync, it may contain several other mountpoints such as: Latest, Links, Shared and Trash. All of these are special mountpoints with a different internal representation than the "regular" mountpoints. Rclone will only to a very limited degree support them. Generally you should avoid these, unless you know what you are doing.

    --fast-list

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

    Note that the implementation in Jottacloud always uses only a single API request to get the entire list, so for large folders this could lead to long wait time before the first results are shown.

    Note also that with rclone version 1.58 and newer information about MIME types are not available when using --fast-list.

    +

    Modified time and hashes

    Jottacloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

    Jottacloud supports MD5 type hashes, so you can use the --checksum flag.

    Note that Jottacloud requires the MD5 hash before upload so if the source does not have an MD5 checksum then the file will be cached temporarily on disk (in location given by --temp-dir) before it is uploaded. Small files will be cached in memory - see the --jottacloud-md5-memory-limit flag. When uploading from local disk the source checksum is always available, so this does not apply. Starting with rclone version 1.52 the same is true for crypted remotes (in older versions the crypt backend would not calculate hashes for uploads from local disk, so the Jottacloud backend had to do it as described above).
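
    For example, syncing with checksum-based change detection (paths as in the examples above):

    ```
    rclone sync --checksum /home/source remote:backup
    ```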

    +

    Restricted filename characters

    In addition to the default restricted characters set the following characters are also replaced:

    @@ -18710,8 +21403,8 @@ y/e/d> y

    Versioning can be disabled by the --jottacloud-no-versions option. This is achieved by deleting the remote file prior to uploading a new version. If the upload fails, no version of the file will be available in the remote.

    Quota information

    To view your current quota you can use the rclone about remote: command which will display your usage limit (unless it is unlimited) and the current usage.

    Advanced options

    Here are the Advanced options specific to jottacloud (Jottacloud).

    --jottacloud-md5-memory-limit

    Files bigger than this will be cached on disk to calculate the MD5 if required.

    Properties:

  • Type: MultiEncoder
  • Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot

    Limitations

    Note that Jottacloud is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

    There are quite a few characters that can't be in Jottacloud file names. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file has a ? in it, it will be mapped to ？ instead.

    Jottacloud only supports filenames up to 255 characters in length.


    Koofr

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    Configuration

    The initial setup for Koofr involves creating an application password for rclone. You can do that by opening the Koofr web application, giving the password a nice name like rclone and clicking on generate.

    Here is an example of how to make a remote called koofr. First run:

     rclone config
    rclone ls koofr:

    To copy a local directory to a Koofr directory called backup

    rclone copy /home/source koofr:backup
    Restricted filename characters

    In addition to the default restricted characters set the following characters are also replaced:


    Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.

    Standard options

    Here are the Standard options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).

    --koofr-provider

    Choose your storage provider.

    Properties:

  • Type: string
  • Required: true
    Advanced options

    Here are the Advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).

    --koofr-mountid

    Mount ID of the mount to use.

    If omitted, the primary mount is used.

  • Type: MultiEncoder
  • Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
    Limitations

    Note that Koofr is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

    Providers

    Koofr

  • Storage keeps hash for all files and performs transparent deduplication, the hash algorithm is a modified SHA1
  • If a particular file is already present in storage, one can quickly submit file hash instead of long file upload (this optimization is supported by rclone)
    Configuration

    Here is an example of making a mailru configuration. First create a Mail.ru Cloud account and choose a tariff, then run

    rclone config

    This will guide you through an interactive setup process:


    Removing a file or directory actually moves it to the trash, which is not visible to rclone but can be seen in a web browser. The trashed file still occupies part of total quota. If you wish to empty your trash and free some quota, you can use the rclone cleanup remote: command, which will permanently delete all your trashed files. This command does not take any path arguments.

    Quota information

    To view your current quota you can use the rclone about remote: command which will display your usage limit (quota) and the current usage.

    Restricted filename characters

    In addition to the default restricted characters set the following characters are also replaced:


    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    Standard options

    Here are the Standard options specific to mailru (Mail.ru Cloud).

    --mailru-user

    User name (usually email).

    Properties:

    Advanced options

    Here are the Advanced options specific to mailru (Mail.ru Cloud).

    --mailru-speedup-file-patterns

    Comma separated list of file name patterns eligible for speedup (put by hash).

    Patterns are case insensitive and can contain '*' or '?' meta characters.
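    As an illustration, limiting speedup to common video formats could look like this in the config file (the remote name and pattern list are hypothetical):

    ```ini
    [mailru]
    type = mailru
    speedup_file_patterns = *.mkv,*.avi,*.mp4
    ```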

  • Type: MultiEncoder
  • Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot
    Limitations

    File size limits depend on your account. A single file size is limited to 2G for a free account and unlimited for paid tariffs. Please refer to the Mail.ru site for the total uploaded size limits.

    Note that Mailru is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

    Mega


    This is an rclone backend for Mega which supports the file transfer features of Mega using the same client side encryption.

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    Configuration

    Here is an example of how to make a remote called remote. First run:

     rclone config

    This will guide you through an interactive setup process:

    rclone ls remote:

    To copy a local directory to a Mega directory called backup

    rclone copy /home/source remote:backup
    Modified time and hashes

    Mega does not support modification times or hashes yet.

    Restricted filename characters


    Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.

    Use rclone dedupe to fix duplicated files.
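    For example, to fix duplicates interactively, or non-interactively by keeping the newest copy (the remote name and path are illustrative):

    ```shell
    # Interactive review of duplicated files
    rclone dedupe remote:

    # Non-interactive: keep the newest copy of each duplicate
    rclone dedupe --dedupe-mode newest remote:some/dir
    ```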

    Failure to log-in

    Object not found

    If you are connecting to your Mega remote for the first time, to test access and synchronisation, you may receive an error such as

    Failed to create file system for "my-mega-remote:": 
    couldn't login: Object (typically, node or user) not found

    The diagnostic steps often recommended in the rclone forum start with the MEGAcmd utility. Note that this refers to the official C++ command from https://github.com/meganz/MEGAcmd and not the Go command from t3rm1n4l/megacmd, which is no longer maintained.


    Follow the instructions for installing MEGAcmd and try accessing your remote as they recommend. This establishes whether or not you can log in using MEGAcmd, gives you diagnostic information to help you, and gives you something concrete to search for or discuss with others in the forum.

    MEGA CMD> login me@example.com
    Password:
    Fetching nodes ...
    Loading transfers from local cache
    Login complete as me@example.com
    me@example.com:/$ 

    Note that some users have found issues with passwords containing special characters. If you cannot log in with rclone but MEGAcmd logs in just fine, consider temporarily changing your password to pure alphanumeric characters, in case that helps.


    Repeated commands blocks access

    Mega remotes seem to get blocked (reject logins) under "heavy use". We haven't worked out the exact blocking rules but it seems to be related to fast paced, successive rclone commands.

    For example, executing the command rclone link remote:file 90 times in a row will cause the remote to become "blocked". This is not an abnormal situation, for example if you wish to get the public links of a directory with hundreds of files... After more or less a week, the remote will accept rclone logins normally again.

    You can mitigate this issue by mounting the remote with rclone mount. This will log in only when mounting and log out only when unmounting. You can also run rclone rcd and then use rclone rc to run the commands over the API to avoid logging in each time.
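    A sketch of the rclone rcd approach, which logs in once and reuses the session for subsequent commands (the remote name and paths are illustrative):

    ```shell
    # Start the rclone API daemon; it logs in to Mega once
    rclone rcd --rc-no-auth &

    # Issue commands over the API instead of logging in each time
    rclone rc operations/list fs=remote: remote=some/dir
    rclone rc operations/publiclink fs=remote: remote=some/file
    ```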


    Note that once blocked, the use of other tools (such as megacmd) is not a sure workaround: the following megacmd login times have been observed in succession for a blocked remote: 7 min, 20 min, 30 min, 30 min, 30 min. Web access looks unaffected though.

    Investigation is continuing in relation to workarounds based on timeouts, pacers, retries and tpslimits - if you discover something relevant, please post on the forum.

    So, if rclone was working nicely and suddenly you are unable to log in, and you are sure the user and password are correct, you have likely got the remote blocked for a while.

    Standard options

    Here are the Standard options specific to mega (Mega).

    --mega-user

    User name.

    Properties:

  • Type: string
  • Required: true
    Advanced options

    Here are the Advanced options specific to mega (Mega).

    --mega-debug

    Output more debug from Mega.

    If this flag is set (along with -vv) it will print further debugging information from the mega backend.

  • Type: MultiEncoder
  • Default: Slash,InvalidUtf8,Dot
    Limitations

    This backend uses the go-mega library, an open source Go library implementing the Mega API. There doesn't appear to be any documentation for the Mega protocol beyond the Mega C++ SDK source code, so there are likely quite a few errors still remaining in this library.

    Mega allows duplicate files which may confuse rclone.

    Memory

    The memory backend is an in RAM backend. It does not persist its data - use the local backend for that.

    The memory backend behaves like a bucket-based remote (e.g. like s3). Because it has no parameters you can just use it with the :memory: remote name.

    Configuration

    You can configure it as a remote like this with rclone config too if you want to:

    No remotes found, make a new one?
     n) New remote
    rclone mount :memory: /mnt/tmp
     rclone serve webdav :memory:
     rclone serve sftp :memory:
    Modified time and hashes

    The memory backend supports MD5 hashes and modification times accurate to 1 ns.

    Restricted filename characters

    The memory backend replaces the default restricted characters set.

    Akamai NetStorage

    Paths are specified as remote: You may put subdirectories in too, e.g. remote:/path/to/dir. If you have a CP code you can use that as the folder after the domain such as <domain>/<cpcode>/<internal directories within cpcode>.

    For example, this is commonly configured with or without a CP code:
  • With a CP code: [your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/
  • Without a CP code: [your-domain-prefix]-nsu.akamaihd.net

    See all buckets with rclone lsd remote:

    The initial setup for Netstorage involves getting an account and secret. Use rclone config to walk you through the setup process.


    Configuration

    Here's an example of how to make a remote called ns1.

    1. To begin the interactive configuration process, enter this command:
       rclone config

      This remote is called ns1 and can now be used.

      Example operations

      Get started with rclone and NetStorage with these examples. For additional rclone commands, visit https://rclone.org/commands/.

      See contents of a directory in your project

      rclone lsd ns1:/974012/testing/
      Sync the contents local with remote

      rclone sync . ns1:/974012/testing/
      Upload local content to remote

      rclone copy notes.txt ns1:/974012/testing/
      Delete content on remote

      rclone delete ns1:/974012/testing/notes.txt
      Move or copy content between CP codes.

      Your credentials must have access to two CP codes on the same remote. You can't perform operations between different remotes.

      rclone move ns1:/974012/testing/notes.txt ns1:/974450/testing2/

      Features

      The Netstorage backend changes the rclone --links, -l behavior. When uploading, instead of creating the .rclonelink file, use the "symlink" API in order to create the corresponding symlink on the remote. The .rclonelink file will not be created, the upload will be intercepted and only the symlink file that matches the source file name with no suffix will be created on the remote.

      This will effectively allow commands like copy/copyto, move/moveto and sync to upload from local to remote and download from remote to local directories with symlinks. Due to internal rclone limitations, it is not possible to upload an individual symlink file to any remote backend. You can always use the "backend symlink" command to create a symlink on the NetStorage server, refer to "symlink" section below.

    3. Implicit Directory. This refers to a directory within a path that has not been physically created. For example, during upload of a file, non-existent subdirectories can be specified in the target path. NetStorage creates these as "implicit." While the directories aren't physically created, they exist implicitly and the noted path is connected with the uploaded file.

    Rclone will intercept all file uploads and mkdir commands for the NetStorage remote and will explicitly issue the mkdir command for each directory in the uploading path. This will help with the interoperability with the other Akamai services such as SFTP and the Content Management Shell (CMShell). Rclone will not guarantee correctness of operations with implicit directories which might have been created as a result of using an upload API directly.

    --fast-list / ListR support

    NetStorage remote supports the ListR feature by using the "list" NetStorage API action to return a lexicographical list of all objects within the specified CP code, recursing into subdirectories as they're encountered.

    There are pros and cons of using the ListR method, refer to rclone documentation. In general, the sync command over an existing deep tree on the remote will run faster with the "--fast-list" flag but with extra memory usage as a side effect. It might also result in higher CPU utilization but the whole task can be completed faster.

    Note: There is a known limitation that "lsf -R" will display number of files in the directory and directory size as -1 when ListR method is used. The workaround is to pass "--disable listR" flag if these numbers are important in the output.
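    For example, to get correct file counts and directory sizes from lsf -R at the cost of the faster recursive listing (CP code and path are illustrative):

    ```shell
    rclone lsf -R --disable listR ns1:/974012/testing/
    ```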

    Purge

    NetStorage remote supports the purge feature by using the "quick-delete" NetStorage API action. The quick-delete action is disabled by default for security reasons and can be enabled for the account through the Akamai portal. Rclone will first try to use quick-delete action for the purge command and if this functionality is disabled then will fall back to a standard delete method.

    Note: Read the NetStorage Usage API for considerations when using "quick-delete". In general, using quick-delete method will not delete the tree immediately and objects targeted for quick-delete may still be accessible.

    Standard options

    Here are the Standard options specific to netstorage (Akamai NetStorage).

    --netstorage-host

    Domain+path of NetStorage host to connect to.

    Format should be <domain>/<internal folders>

  • Type: string
  • Required: true
    Advanced options

    Here are the Advanced options specific to netstorage (Akamai NetStorage).

    --netstorage-protocol

    Select between HTTP or HTTPS protocol.

    Most users should choose HTTPS, which is the default. HTTP is provided primarily for debugging purposes.


    Run them with

    rclone backend COMMAND remote:

    The help below will explain what arguments each command takes.

    See the backend command for more info on how to pass options and arguments.

    These can be run on a running backend using the rc command backend/command.

    du

    Return disk usage information for a specified directory


    The desired path location (including applicable sub-directories) ending in the object that will be the target of the symlink (for example, /links/mylink). Include the file extension for the object, if applicable.

    rclone backend symlink <src> <path>

    Microsoft Azure Blob Storage

    Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:container/path/to/dir.

    Configuration

    Here is an example of making a Microsoft Azure Blob Storage configuration. For a remote called remote. First run:

     rclone config

    This will guide you through an interactive setup process:


    The modified time is stored as metadata on the object with the mtime key. It is stored using RFC3339 Format time with nanosecond precision. The metadata is supplied during directory listings so there is no overhead to using it.

    Performance

    When uploading large files, increasing the value of --azureblob-upload-concurrency will increase performance at the cost of using more memory. The default of 16 is set quite conservatively to use less memory. It may be necessary to raise it to 64 or higher to fully utilize a 1 GBit/s link with a single file transfer.

    Restricted filename characters

    In addition to the default restricted characters set the following characters are also replaced:


    Note that you can't see or access any other containers - this will fail

    rclone ls azureblob:othercontainer

    Container level SAS URLs are useful for temporarily allowing third parties access to a single container or putting credentials into an untrusted environment such as a CI build server.

    Standard options

    Here are the Standard options specific to azureblob (Microsoft Azure Blob Storage).

    --azureblob-account

    Storage Account Name.

    Leave blank to use SAS URL or Emulator.

  • Type: bool
  • Default: false
    Advanced options

    Here are the Advanced options specific to azureblob (Microsoft Azure Blob Storage).

    --azureblob-msi-object-id

    Object ID of the user-assigned MSI to use, if any.

    Leave blank if msi_client_id or msi_mi_res_id specified.

  • Type: bool
  • Default: false
    Limitations

    MD5 sums are only uploaded with chunked files if the source has an MD5 sum. This will always be the case for a local to azure copy.

    rclone about is not supported by the Microsoft Azure Blob storage backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    See List of backends that do not support rclone about and rclone about

    Azure Storage Emulator Support

    You can run rclone with the storage emulator (usually azurite).

    To do this, just set up a new remote with rclone config following the instructions described in the introduction and set use_emulator config as true. You do not need to provide a default account name or an account key.

    Also, if you want to access a storage emulator instance running on a different machine, you can override the Endpoint parameter in the advanced settings, setting it to http(s)://<host>:<port>/devstoreaccount1 (e.g. http://10.254.2.5:10000/devstoreaccount1).
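    A minimal remote definition for a local azurite instance might look like this (the remote name is hypothetical; the endpoint override is only needed when the emulator is not on its default local address):

    ```ini
    [azurite]
    type = azureblob
    use_emulator = true
    # Optional: point at an emulator running on another machine
    # endpoint = http://10.254.2.5:10000/devstoreaccount1
    ```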

    Microsoft OneDrive

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    Configuration

    The initial setup for OneDrive involves getting a token from Microsoft which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config

    To copy a local directory to an OneDrive directory called backup

    rclone copy /home/source remote:backup

    Getting your own Client ID and Key

    rclone uses a default Client ID when talking to OneDrive, unless a custom client_id is specified in the config. The default Client ID and Key are shared by all rclone users when performing requests.

    You may choose to create and use your own Client ID, in case the default one does not work well for you. For example, you might see throttling.

    Creating Client ID for OneDrive Personal

    To create your own Client ID, please follow these steps:

    1. Open https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade and then click New registration.
    2. Enter a name for your app, choose account type Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox), select Web in Redirect URI, then type (do not copy and paste) http://localhost:53682/ and click Register. Copy and keep the Application (client) ID under the app name for later use.
    3. Under manage select Certificates & secrets, click New client secret. Enter a description (can be anything) and set Expires to 24 months. Copy and keep that secret Value for later use (you won't be able to see this value afterwards).
    4. Under manage select API permissions, click Add a permission and select Microsoft Graph then select delegated permissions.
    5. Search and select the following permissions: Files.Read, Files.ReadWrite, Files.Read.All, Files.ReadWrite.All, offline_access, User.Read and Sites.Read.All (if custom access scopes are configured, select the permissions accordingly). Once selected click Add permissions at the bottom.

    Now the application is complete. Run rclone config to create or edit a OneDrive remote. Supply the app ID and password as Client ID and Secret, respectively. rclone will walk you through the remaining steps.

    The access_scopes option allows you to configure the permissions requested by rclone. See Microsoft Docs for more information about the different scopes.

    The Sites.Read.All permission is required if you need to search SharePoint sites when configuring the remote. However, if that permission is not assigned, you need to exclude Sites.Read.All from your access scopes or set disable_site_permission option to true in the advanced options.

    Creating Client ID for OneDrive Business

    The steps for OneDrive Personal may or may not work for OneDrive Business, depending on the security settings of the organization. A common error is that the publisher of the App is not verified.

    You may try to verify your account, or try to limit the App to your organization only, as shown below.

    1. Make sure to create the App with your business account.
    2. Follow the steps above to create an App. However, we need a different account type here: Accounts in this organizational directory only (*** - Single tenant). Note that you can also change the account type after creating the App.
    3. Find the tenant ID of your organization.
    4. In the rclone config, set auth_url to https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/authorize.
    5. In the rclone config, set token_url to https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token.

    Note: If you have a special region, you may need a different host in steps 4 and 5. Here are some hints.
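    Putting the auth_url and token_url settings together, the resulting remote section might look like this (the remote name, tenant ID and credentials are placeholders):

    ```ini
    [business]
    type = onedrive
    client_id = YOUR_CLIENT_ID
    client_secret = YOUR_CLIENT_SECRET
    auth_url = https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/authorize
    token_url = https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token
    ```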

    Modification time and hashes

    OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

    OneDrive personal supports SHA1 type hashes. OneDrive for business and Sharepoint Server support QuickXorHash.

    For all types of OneDrive you can use the --checksum flag.

    Restricted filename characters

    In addition to the default restricted characters set the following characters are also replaced:


    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    Deleting files

    Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the OneDrive website.

    Standard options

    Here are the Standard options specific to onedrive (Microsoft OneDrive).

    --onedrive-client-id

    OAuth Client Id.

    Leave blank normally.

    Advanced options

    Here are the Advanced options specific to onedrive (Microsoft OneDrive).

    --onedrive-token

    OAuth Access Token as a JSON blob.

    Properties:

  • Type: string
  • Required: false
    --onedrive-access-scopes

    Set scopes to be requested by rclone.

    Choose or manually enter a custom space separated list with all scopes that rclone should request.

    Properties:

    --onedrive-disable-site-permission

    Disable the request for Sites.Read.All permission.

    If set to true, you will no longer be able to search for a SharePoint site when configuring drive ID, because rclone will not request Sites.Read.All permission. Set it to true if your organization didn't assign Sites.Read.All permission to the application, and your organization disallows users to consent app permission request on their own.

  • Type: MultiEncoder
  • Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot
    Limitations

    If you don't use rclone for 90 days the refresh token will expire. This will result in authorization problems. This is easy to fix by running the rclone config reconnect remote: command to get a new token and refresh token.

    Naming

    Note that OneDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

    --ignore-checksum --ignore-size

    Alternatively, if you have write access to the OneDrive files, it may be possible to fix this problem for certain files, by attempting the steps below. Open the web interface for OneDrive and find the affected files (which will be in the error messages/log for rclone). Simply click on each of these files, causing OneDrive to open them on the web. This will cause each file to be converted in place to a format that is functionally equivalent but which will no longer trigger the size discrepancy. Once all problematic files are converted you will no longer need the ignore options above.

    Replacing/deleting existing files on Sharepoint gets "item not found"

    It is a known issue that Sharepoint (not OneDrive or OneDrive for Business) may return "item not found" errors when users try to replace or delete uploaded files; this seems to mainly affect Office files (.docx, .xlsx, etc.) and web files (.html, .aspx, etc.). As a workaround, you may use the --backup-dir <BACKUP_DIR> command line argument so rclone moves the files to be replaced/deleted into a given backup directory (instead of directly replacing/deleting them). For example, to instruct rclone to move the files into the directory rclone-backup-dir on backend mysharepoint, you may use:

    --backup-dir mysharepoint:rclone-backup-dir

    access_denied (AADSTS65005)

    Error: access_denied
     Code: AADSTS65005
     Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned.

    This means that rclone can't use the OneDrive for Business API with your account. You can't do much about it, maybe write an email to your admins.

However, there are other ways to interact with your OneDrive account. Have a look at the WebDAV backend: https://rclone.org/webdav/#sharepoint

    invalid_grant (AADSTS50076)

    Error: invalid_grant
     Code: AADSTS50076
Description: Due to a configuration change made by your administrator, or becaus
     

    OpenDrive

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

Configuration

    Here is an example of how to make a remote called remote. First run:

     rclone config

    This will guide you through an interactive setup process:

    rclone copy /home/source remote:backup

    Modified time and MD5SUMs

    OpenDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

Restricted filename characters


    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Standard options

Here are the Standard options specific to opendrive (OpenDrive).

    --opendrive-username

    Username.

    Properties:

  • Type: string
  • Required: true

Advanced options

Here are the Advanced options specific to opendrive (OpenDrive).

    --opendrive-encoding

    The encoding for the backend.

    See the encoding section in the overview for more info.

  • Type: SizeSuffix
  • Default: 10Mi

Limitations

    Note that OpenDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

There are quite a few characters that can't be in OpenDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it, it will be mapped to ？ instead.

    rclone about is not supported by the OpenDrive backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

See List of backends that do not support rclone about and rclone about

    QingStor

    Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.

Configuration

Here is an example of making a QingStor configuration. First run

    rclone config

    This will guide you through an interactive setup process.

Restricted filename characters

    The control characters 0x00-0x1F and / are replaced as in the default restricted characters set. Note that 0x7F is not replaced.

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Standard options

Here are the Standard options specific to qingstor (QingCloud Object Storage).

    --qingstor-env-auth

    Get QingStor credentials from runtime.

Only applies if access_key_id and secret_access_key are blank.

Advanced options

Here are the Advanced options specific to qingstor (QingCloud Object Storage).

    --qingstor-connection-retries

    Number of connection retries.

    Properties:

  • Type: MultiEncoder
  • Default: Slash,Ctl,InvalidUtf8

Limitations

    rclone about is not supported by the qingstor backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

See List of backends that do not support rclone about and rclone about

    Sia

Sia (sia.tech) is a decentralized cloud storage platform based on blockchain technology. With rclone you can use it like any other remote filesystem or mount Sia folders locally. The technology behind it involves a number of new concepts such as Siacoins and Wallet, Blockchain and Consensus, Renting and Hosting, and so on. If you are new to it, you should first familiarize yourself with their excellent support documentation.

    Introduction


rclone interacts with the Sia network by talking to the Sia daemon via its HTTP API, which is usually available on port 9980. By default you will run the daemon locally on the same computer, so it's safe to leave the API password blank (the API URL will be http://127.0.0.1:9980, making external access impossible).

However, if you want to access a Sia daemon running on another node, for example due to memory constraints or because you want to share a single daemon between several rclone and Sia-UI instances, you'll need to make a few more provisions:
- Ensure you have the Sia daemon installed directly or in a docker container, because Sia-UI does not support this mode natively.
- Run it on an externally accessible port, for example by providing --api-addr :9980 and --disable-api-security arguments on the daemon command line.
- Enforce an API password for the siad daemon via the environment variable SIA_API_PASSWORD or a text file named apipassword in the daemon directory.
- Set the rclone backend option api_password taking it from the above locations.

Notes:
1. If your wallet is locked, rclone cannot unlock it automatically. You should either unlock it in advance by using Sia-UI or via the command line siac wallet unlock. Alternatively you can make siad unlock your wallet automatically upon startup by running it with the environment variable SIA_WALLET_PASSWORD.
2. If siad cannot find the SIA_API_PASSWORD variable or the apipassword file in the SIA_DIR directory, it will generate a random password and store it in the text file named apipassword under the YOUR_HOME/.sia/ directory on Unix or C:\Users\YOUR_HOME\AppData\Local\Sia\apipassword on Windows. Remember this when you configure the password in rclone.
3. The only way to use siad without an API password is to run it on localhost with the command line argument --authorize-api=false, but this is insecure and strongly discouraged.
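For reference, once configured, a Sia remote pointing at a remote daemon could look like the following sketch of a config file entry. The host name is an illustrative assumption, and rclone stores api_password obscured when it is entered through rclone config:

```
[mySia]
type = sia
api_url = http://sia.daemon.host:9980
api_password = *** ENCRYPTED ***
```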

Configuration

    Here is an example of how to make a sia remote called mySia. First, run:

     rclone config

    This will guide you through an interactive setup process:

  • Upload a local directory to the Sia directory called backup
  • rclone copy /home/source mySia:backup

Standard options

Here are the Standard options specific to sia (Sia Decentralized Cloud).

    --sia-api-url

    Sia daemon API URL, like http://sia.daemon.host:9980.

    Note that siad must run with --disable-api-security to open API port for other hosts (not recommended). Keep default if Sia daemon runs on localhost.

  • Type: string
  • Required: false

Advanced options

Here are the Advanced options specific to sia (Sia Decentralized Cloud).

    --sia-user-agent

    Siad User Agent

Sia daemon requires the 'Sia-Agent' user agent by default for security.

  • Type: MultiEncoder
  • Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot

Limitations

Swift

Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:container/path/to/dir.

Configuration

    Here is an example of making a swift configuration. First run

    rclone config

    This will guide you through an interactive setup process.


    Modified time

The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point seconds since the epoch, accurate to 1 ns.

    This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.
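As a sketch of what that metadata value looks like, the following builds a float-seconds timestamp in the same shape rclone writes (GNU date is assumed for nanosecond support):

```shell
# Floating-point seconds since the epoch, as stored in X-Object-Meta-Mtime.
mtime=$(date +%s.%N)
echo "X-Object-Meta-Mtime: $mtime"
```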

Restricted filename characters


    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Standard options

Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).

    --swift-env-auth

    Get swift credentials from environment variables in standard OpenStack form.

    Properties:

Advanced options

Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).

    --swift-leave-parts-on-error

    If true avoid calling abort upload on a failure.

    It should be set to true for resuming uploads across different sessions.

  • Type: MultiEncoder
  • Default: Slash,InvalidUtf8

Limitations

    The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.

    Troubleshooting

    Rclone gives Failed to create file system for "remote:": Bad Request


    pCloud

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

Configuration

    The initial setup for pCloud involves getting a token from pCloud which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    rclone ls remote:

    To copy a local directory to a pCloud directory called backup

    rclone copy /home/source remote:backup

Modified time and hashes

    pCloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. In order to set a Modification time pCloud requires the object be re-uploaded.

    pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and SHA256 hashes in the EU region, so you can use the --checksum flag.

Restricted filename characters

    In addition to the default restricted characters set the following characters are also replaced:


    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    Deleting files

    Deleted files will be moved to the trash. Your subscription level will determine how long items stay in the trash. rclone cleanup can be used to empty the trash.

Emptying the trash

    Due to an API limitation, the rclone cleanup command will only work if you set your username and password in the advanced options for this backend. Since we generally want to avoid storing user passwords in the rclone config file, we advise you to only set this up if you need the rclone cleanup command to work.
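For illustration, the relevant part of a pCloud remote configured for cleanup might look like the config fragment below. The username is a hypothetical example, and the password must be the obscured form produced by rclone obscure, which rclone displays as encrypted:

```
[remote]
type = pcloud
username = you@example.com
password = *** ENCRYPTED ***
```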

    Root folder ID

    You can set the root_folder_id for rclone. This is the directory (identified by its Folder ID) that rclone considers to be the root of your pCloud drive.

    Normally you will leave this blank and rclone will determine the correct root to use itself.

    However you can set this to restrict rclone to a specific folder hierarchy.

    In order to do this you will have to find the Folder ID of the directory you wish rclone to display. This will be the folder field of the URL when you open the relevant folder in the pCloud web interface.

    So if the folder you want rclone to use has a URL which looks like https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid in the browser, then you use 5xxxxxxxx8 as the root_folder_id in the config.
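As a sketch, the folder ID can be picked out of such a URL with standard shell tools (using the placeholder ID from the example above):

```shell
# Extract the value of the "folder" parameter from the pCloud web URL.
url='https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid'
folder_id=$(printf '%s' "$url" | sed -n 's/.*folder=\([^&]*\).*/\1/p')
echo "$folder_id"
```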

Standard options

Here are the Standard options specific to pcloud (Pcloud).

    --pcloud-client-id

    OAuth Client Id.

    Leave blank normally.

  • Type: string
  • Required: false

Advanced options

Here are the Advanced options specific to pcloud (Pcloud).

    --pcloud-token

    OAuth Access Token as a JSON blob.

    Properties:


--pcloud-username

Your pcloud username.

This is only required when you want to use the cleanup command. Due to a bug in the pcloud API the required API does not support OAuth authentication so we have to rely on user password authentication for it.

Properties:

--pcloud-password

Your pcloud password.

NB Input to this must be obscured - see rclone obscure.

Properties:

    premiumize.me

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

Configuration

    The initial setup for premiumize.me involves getting a token from premiumize.me which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    rclone ls remote:

To copy a local directory to a premiumize.me directory called backup

    rclone copy /home/source remote:backup

Modified time and hashes

    premiumize.me does not support modification times or hashes, therefore syncing will default to --size-only checking. Note that using --update will work.

Restricted filename characters

    In addition to the default restricted characters set the following characters are also replaced:


    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Standard options

Here are the Standard options specific to premiumizeme (premiumize.me).

    --premiumizeme-api-key

    API Key.

    This is not normally used - use oauth instead.

  • Type: string
  • Required: false

Advanced options

Here are the Advanced options specific to premiumizeme (premiumize.me).

    --premiumizeme-encoding

    The encoding for the backend.

    See the encoding section in the overview for more info.

  • Type: MultiEncoder
  • Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot

Limitations

    Note that premiumize.me is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

premiumize.me file names can't have the \ or " characters in them. rclone maps these to and from the identical looking unicode equivalents ＼ and ＂.

    premiumize.me only supports filenames up to 255 characters in length.

    put.io

    Paths are specified as remote:path

    put.io paths may be as deep as required, e.g. remote:directory/subdirectory.

Configuration

    The initial setup for put.io involves getting a token from put.io which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    rclone ls remote:

    To copy a local directory to a put.io directory called backup

    rclone copy /home/source remote:backup

Restricted filename characters

    In addition to the default restricted characters set the following characters are also replaced:


    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Advanced options

Here are the Advanced options specific to putio (Put.io).

    --putio-encoding

    The encoding for the backend.

    See the encoding section in the overview for more info.

  • Type: MultiEncoder
  • Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot

Limitations

put.io has rate limiting. When you hit a limit, rclone automatically retries after waiting the amount of time requested by the server.

    If you want to avoid ever hitting these limits, you may use the --tpslimit flag with a low number. Note that the imposed limits may be different for different operations, and may change over time.

    Seafile

This is a backend for the Seafile storage service:
- It works with both the free community edition and the professional edition.
- Seafile versions 6.x and 7.x are all supported.
- Encrypted libraries are also supported.
- It supports 2FA enabled users.

Configuration

There are two distinct modes you can set up your remote:
- You point your remote to the root of the server, meaning you don't specify a library during the configuration. Paths are specified as remote:library. You may put subdirectories in too, e.g. remote:library/path/to/dir.
- You point your remote to a specific library during the configuration. Paths are specified as remote:path/to/dir. This is the recommended mode when using encrypted libraries. (This mode is possibly slightly faster than the root mode.)
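To illustrate the two modes, the resulting config entries differ only in whether library is set. The server URL, user, and library name below are illustrative assumptions:

```
# root mode: no library in the config, library named in the path
[seafile-root]
type = seafile
url = https://cloud.example.com
user = me@example.com

# library mode: library fixed in the config
[seafile-lib]
type = seafile
url = https://cloud.example.com
user = me@example.com
library = My Library
```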

    Configuration in root mode

    Here is an example of making a seafile configuration for a user with no two-factor authentication. First run

    rclone sync -i /home/local/directory seafile:

    --fast-list

    Seafile version 7+ supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details. Please note this is not supported on seafile server version 6.x

Restricted filename characters

    In addition to the default restricted characters set the following characters are also replaced:


    Compatibility

    It has been actively tested using the seafile docker image of these versions: - 6.3.4 community edition - 7.0.5 community edition - 7.1.3 community edition

    Versions below 6.0 are not supported. Versions between 6.0 and 6.3 haven't been tested and might not work properly.

Standard options

Here are the Standard options specific to seafile (seafile).

    --seafile-url

    URL of seafile host to connect to.

    Properties:

  • Type: string
  • Required: false

Advanced options

Here are the Advanced options specific to seafile (seafile).

    --seafile-create-library

    Should rclone create a library if it doesn't exist.

    Properties:


    SFTP is the Secure (or SSH) File Transfer Protocol.

    The SFTP backend can be used with a number of different providers:

    SFTP runs over SSH v2 and is installed as standard with most modern SSH installations.

Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory. For example, rclone lsd remote: would list the home directory of the user configured in the rclone remote config (i.e. /home/sftpuser). However, rclone lsd remote:/ would list the root directory of the remote machine (i.e. /).

Note that some SFTP servers will need the leading / - Synology is a good example of this. rsync.net and Hetzner, on the other hand, require users to OMIT the leading /.

Note that by default rclone will try to execute shell commands on the server, see shell access considerations.

Configuration

    Here is an example of making an SFTP configuration. First run

    rclone config

    This will guide you through an interactive setup process.

name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / SSH/SFTP
   \ "sftp"
[snip]
Storage> sftp

    And then at the end of the session

    eval `ssh-agent -k`

    These commands can be used in scripts of course.

Shell access

Some functionality of the SFTP backend relies on remote shell access, and the possibility to execute commands. This includes checksum, and in some cases also about. The shell commands that must be executed may be different on different types of shells, and also quoting/escaping of file path arguments containing special characters may be different. Rclone therefore needs to know what type of shell it is, and if shell access is available at all.

Most servers run on some version of Unix, and then a basic Unix shell can be assumed, without further distinction. Windows 10, Server 2019, and later can also run an SSH server, which is a port of OpenSSH (see the official installation guide). On a Windows server the shell handling is different: although it can also be set up to use a Unix type shell, e.g. Cygwin bash, the default is to use Windows Command Prompt (cmd.exe), and PowerShell is a recommended alternative. All of these behave differently, which rclone must handle.

Rclone tries to auto-detect what type of shell is used on the server, the first time you access the SFTP remote. If a remote shell session is successfully created, it will look for indications that it is CMD or PowerShell, with fall-back to Unix if nothing else is detected. If unable to even create a remote shell session, then shell command execution will be disabled entirely. The result is stored in the SFTP remote configuration, in option shell_type, so that the auto-detection only has to be performed once. If you manually set a value for this option before first run, the auto-detection will be skipped, and if you set a different value later this will override any existing value. Value none can be set to avoid any attempts at executing shell commands, e.g. if this is not allowed on the server.

When the server is rclone serve sftp, the rclone SFTP remote will detect this as a Unix type shell - even if it is running on Windows. This server does not actually have a shell, but it accepts input commands matching the specific ones that the SFTP backend relies on for Unix shells, e.g. md5sum and df. It also handles the string escape rules used for Unix shells. Treating it as a Unix type shell from an SFTP remote will therefore always be correct, and support all features.

Shell access considerations

The shell type auto-detection logic, described above, means that by default rclone will try to run a shell command the first time a new sftp remote is accessed. If you configure an sftp remote without a config file, e.g. an on the fly remote, rclone will have nowhere to store the result, and it will re-run the command on every access. To avoid this you should explicitly set the shell_type option to the correct value, or to none if you want to prevent rclone from executing any remote shell commands.

It is also important to note that, since the shell type decides how quoting and escaping of file paths used as command-line arguments are performed, configuring the wrong shell type may leave you exposed to command injection exploits. Make sure to confirm the auto-detected shell type, or explicitly set the shell type you know is correct, or disable shell access until you know.
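For example, to pin the shell type for a remote and disable remote command execution entirely, the config entry could include shell_type = none (remote name and host below are illustrative assumptions):

```
[mysftp]
type = sftp
host = example.com
# pin the detected shell, or "none" to disable remote commands entirely
shell_type = none
```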

Checksum

    SFTP does not natively support checksums (file hash), but rclone is able to use checksumming if the same login has shell access, and can execute remote commands. If there is a command that can calculate compatible checksums on the remote system, Rclone can then be configured to execute this whenever a checksum is needed, and read back the results. Currently MD5 and SHA-1 are supported.


    Normally this requires an external utility being available on the server. By default rclone will try commands md5sum, md5 and rclone md5sum for MD5 checksums, and the first one found usable will be picked. Same with sha1sum, sha1 and rclone sha1sum commands for SHA-1 checksums. These utilities normally need to be in the remote's PATH to be found.
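As a local sketch of what rclone runs on the remote (assuming GNU coreutils md5sum is in PATH), the expected output format is the checksum followed by the file name:

```shell
# Create a small file and hash it the way rclone's remote command does.
printf 'hello\n' > /tmp/rclone-hash-demo.txt
# Output line: "<md5 hash>  /tmp/rclone-hash-demo.txt"
md5sum /tmp/rclone-hash-demo.txt
```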


    In some cases the shell itself is capable of calculating checksums. PowerShell is an example of such a shell. If rclone detects that the remote shell is PowerShell, which means it most probably is a Windows OpenSSH server, rclone will use a predefined script block to produce the checksums when no external checksum commands are found (see shell access). This assumes PowerShell version 4.0 or newer.

The options md5sum_command and sha1sum_command can be used to customize the command to be executed for calculation of checksums. You can for example set a specific path to where md5sum and sha1sum executables are located, or use them to specify some other tools that print checksums in compatible format. The value can include command-line arguments, or even shell script blocks as with PowerShell. Rclone has subcommands md5sum and sha1sum that use a compatible format, which means if you have an rclone executable on the server it can be used. As mentioned above, they will be automatically picked up if found in PATH, but if not you can set something like /path/to/rclone md5sum as the value of option md5sum_command to make sure a specific executable is used.

Remote checksumming is recommended and enabled by default. The first time rclone uses an SFTP remote, if options md5sum_command or sha1sum_command are not set, it will check whether any of the default commands for each of them, as described above, can be used. The result will be saved in the remote configuration, so next time it will use the same. Value none will be set if none of the default commands could be used for a specific algorithm, and this algorithm will not be supported by the remote.


    Disabling the checksumming may be required if you are connecting to SFTP servers which are not under your control, and to which the execution of remote shell commands is prohibited. Set the configuration option disable_hashcheck to true to disable checksumming entirely, or set shell_type to none to disable all functionality based on remote shell command execution.

    Modified time

    Modified times are stored on the server to 1 second precision.

    Modified times are used in syncing and are fully supported.

Some SFTP servers disable setting/modifying the file modification time after upload (for example, certain configurations of ProFTPd with mod_sftp). If you are using one of these servers, you can set the option set_modtime = false in your rclone backend configuration to disable this behaviour.

About command

The about command returns the total space, free space, and used space on the remote for the disk of the specified path on the remote or, if not set, the disk of the root on the remote.

SFTP usually supports the about command, but it depends on the server. If the server implements the vendor-specific VFS statistics extension, which is normally the case with OpenSSH instances, it will be used. If not, but the same login has access to a Unix shell, where the df command is available (e.g. in the remote's PATH), then this will be used instead. If the server shell is PowerShell, probably with a Windows OpenSSH server, rclone will use a built-in shell command (see shell access). If none of the above is applicable, about will fail.

Standard options

Here are the Standard options specific to sftp (SSH/SFTP).

    --sftp-host

    SSH host to connect to.

    E.g. "example.com".

  • Type: bool
  • Default: false

    Advanced options


    Here are the Advanced options specific to sftp (SSH/SFTP).

    --sftp-known-hosts-file

    Optional path to known_hosts file.

    Set this value to enable server host key validation.

  • Default: false
  • --sftp-path-override


    Override path used by SSH shell commands.

    This allows checksum calculation when SFTP and SSH paths are different. This issue affects among others Synology NAS boxes.


    E.g. if shared folders can be found in directories representing volumes:

    rclone sync /home/local/directory remote:/directory --sftp-path-override /volume2/directory

    E.g. if home directory can be found in a shared folder called "home":

    rclone sync /home/local/directory remote:/home/directory --sftp-path-override /volume1/homes/USER/directory
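The same override can be stored permanently in the config file; a sketch for a hypothetical Synology remote (name, host, user and volume path are placeholders):

```ini
[synology]
type = sftp
host = nas.example.com
user = backup
# SFTP presents /directory, but shell commands see /volume2/directory:
path_override = /volume2/directory
```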

    Properties:

--sftp-shell-type

The type of SSH shell on remote server, if any.

Leave blank for autodetect.

Properties:

    --sftp-md5sum-command

    The command used to read md5 hashes.

    Leave blank for autodetect.

  • Type: Duration
  • Default: 1m0s

    --sftp-chunk-size

Upload and download chunk size.

This controls the maximum packet size used in the SFTP protocol. The RFC limits this to 32768 bytes (32k), however a lot of servers support larger sizes and setting it larger will increase transfer speed dramatically on high latency links.

Only use a setting higher than 32k if you always connect to the same server or after sufficiently broad testing.

For example using the value of 252k with OpenSSH works well with its maximum packet size of 256k.

    If you get the error "failed to send packet header: EOF" when copying a large file, try lowering this number.
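As a sketch, the 252k value mentioned above would be stored like this for a hypothetical remote (name, host and user are placeholders):

```ini
[fast-link]
type = sftp
host = sftp.example.com
user = sync
# OpenSSH accepts up to 256k packets; 252k leaves headroom for
# packet overhead, per the guidance above:
chunk_size = 252k
```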

Properties:

    --sftp-concurrency

The maximum number of outstanding requests for one file.

This controls the maximum number of outstanding requests for one file. Increasing it will increase throughput on high latency links at the cost of using more memory.

Properties:

    --sftp-set-env

Environment variables to pass to sftp and commands.

Set environment variables in the form:

    VAR=value

to be passed to the sftp client and to any commands run (e.g. md5sum).

Pass multiple variables space separated, e.g.

    VAR1=value VAR2=value

and pass variables with spaces in quotes, e.g.

    "VAR3=value with space" "VAR4=value with space" VAR5=nospacehere

Properties:

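A hypothetical config entry using this option might look like this (remote name, host, user and values are placeholders):

```ini
[shell-server]
type = sftp
host = sftp.example.com
user = sync
# Passed to the sftp client and to remote commands such as md5sum:
set_env = LC_ALL=C TMPDIR=/tmp
```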

    Limitations


    On some SFTP servers (e.g. Synology) the paths are different for SSH and SFTP so the hashes can't be calculated properly. For them using disable_hashcheck is a good idea.

    The only ssh agent supported under Windows is Putty's pageant.

    The Go SSH library disables the use of the aes128-cbc cipher by default, due to security concerns. This can be re-enabled on a per-connection basis by setting the use_insecure_cipher setting in the configuration file to true. Further details on the insecurity of this cipher can be found in this paper.

    SFTP isn't supported under plan9 until this issue is fixed.


    Note that since SFTP isn't HTTP based the following flags don't work with it: --dump-headers, --dump-bodies, --dump-auth.

    Note that --timeout and --contimeout are both supported.


    rsync.net

    rsync.net is supported through the SFTP backend.

    See rsync.net's documentation of rclone examples.

Hetzner Storage Box

Hetzner Storage Boxes are supported through the SFTP backend on port 23.

See Hetzner's documentation for details.

    Storj

    Storj is an encrypted, secure, and cost-effective object storage service that enables you to store, back up, and archive large amounts of data in a decentralized manner.

    Backend options

  • S3 backend: secret encryption key is shared with the gateway

    Configuration

To make a new Storj configuration you need one of the following:

  • Access Grant that someone else shared with you.
  • API Key of a Storj project you are a member of.

    Here is an example of how to make a remote called remote. First run:

     rclone config
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y

Standard options

    Here are the Standard options specific to storj (Storj Decentralized Cloud Storage).

    --storj-provider

    Choose an authentication method.

    Properties:

    rclone sync -i --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/

    Or even between another cloud storage and Storj.

    rclone sync -i --progress s3:bucket/path/to/dir/ storj:bucket/path/to/dir/

    Limitations

    rclone about is not supported by the rclone Storj backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.


    See List of backends that do not support rclone about and rclone about

    Known issues

    If you get errors like too many open files this usually happens when the default ulimit for system max open files is exceeded. Native Storj protocol opens a large number of TCP connections (each of which is counted as an open file). For a single upload stream you can expect 110 TCP connections to be opened. For a single download stream you can expect 35. This batch of connections will be opened for every 64 MiB segment and you should also expect TCP connections to be reused. If you do many transfers you eventually open a connection to most storage nodes (thousands of nodes).

To fix these, please raise your system limits. You can do this by issuing ulimit -n 65536 just before you run rclone. To change the limits more permanently you can add this to your shell startup script, e.g. $HOME/.bashrc, or change the system-wide configuration, usually /etc/sysctl.conf and/or /etc/security/limits.conf, but please refer to your operating system manual.
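As an illustrative sketch of the permanent change described above, the /etc/security/limits.conf entries might look like this (the values are examples, not defaults):

```
# Raise the per-user open file limit for all users
*    soft    nofile    65536
*    hard    nofile    65536
```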

    SugarSync

    SugarSync is a cloud service that enables active synchronization of files across computers and other devices for file backup, access, syncing, and sharing.


    Configuration

    The initial setup for SugarSync involves getting a token from SugarSync which you can do with rclone. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

NB you can't create files in the top level folder; you have to create a folder, which rclone will create as a "Sync Folder" with SugarSync.


    Modified time and hashes

    SugarSync does not support modification times or hashes, therefore syncing will default to --size-only checking. Note that using --update will work as rclone can read the time files were uploaded.


    Restricted filename characters

    SugarSync replaces the default restricted characters set except for DEL.

    Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.

    Deleting files

    Deleted files will be moved to the "Deleted items" folder by default.

    However you can supply the flag --sugarsync-hard-delete or set the config parameter hard_delete = true if you would like files to be deleted straight away.
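Equivalently in rclone.conf, for a hypothetical remote named sugar:

```ini
[sugar]
type = sugarsync
# Delete files immediately instead of moving them to "Deleted items":
hard_delete = true
```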

Standard options

    Here are the Standard options specific to sugarsync (Sugarsync).

    --sugarsync-app-id

    Sugarsync App ID.

    Leave blank to use rclone's.

  • Type: bool
  • Default: false
Advanced options

    Here are the Advanced options specific to sugarsync (Sugarsync).

    --sugarsync-refresh-token

    Sugarsync refresh token.

    Leave blank normally, will be auto configured by rclone.

  • Type: MultiEncoder
  • Default: Slash,Ctl,InvalidUtf8,Dot

    Limitations

    rclone about is not supported by the SugarSync backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.


    See List of backends that do not support rclone about and rclone about

    Tardigrade

    The Tardigrade backend has been renamed to be the Storj backend. Old configuration files will continue to work.

    Uptobox

    This is a Backend for Uptobox file storage service. Uptobox is closer to a one-click hoster than a traditional cloud storage provider and therefore not suitable for long term storage.

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.


    Configuration

To configure an Uptobox backend you'll need your personal API token. You'll find it in your account settings.

    Here is an example of how to make a remote called remote with the default setup. First run:

    rclone config
    rclone ls remote:

    To copy a local directory to an Uptobox directory called backup

    rclone copy /home/source remote:backup

    Modified time and hashes

    Uptobox supports neither modified times nor checksums.


    Restricted filename characters

    In addition to the default restricted characters set the following characters are also replaced:


    Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.

Standard options

    Here are the Standard options specific to uptobox (Uptobox).

    --uptobox-access-token

    Your access token.

    Get it from https://uptobox.com/my_account.

  • Type: string
  • Required: false
Advanced options

    Here are the Advanced options specific to uptobox (Uptobox).

    --uptobox-encoding

    The encoding for the backend.

    See the encoding section in the overview for more info.

  • Type: MultiEncoder
  • Default: Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot

    Limitations

    Uptobox will delete inactive files that have not been accessed in 60 days.

rclone about is not supported by this backend; an overview of used space can, however, be seen in the Uptobox web interface.

    Union


The attributes :ro and :nc can be attached to the end of the path to tag the remote as read only or no create, e.g. remote:directory/subdirectory:ro or remote:directory/subdirectory:nc.

    Subfolders can be used in upstream remotes. Assume a union remote named backup with the remotes mydrive:private/backup. Invoking rclone mkdir backup:desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop.

    There will be no special handling of paths containing .. segments. Invoking rclone mkdir backup:../desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop.
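For illustration, the backup union described above could be defined in rclone.conf like this (the second, read-only upstream is a hypothetical addition):

```ini
[backup]
type = union
# Upstreams are space separated; :ro tags the second one read only:
upstreams = mydrive:private/backup otherdrive:archive:ro
```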


    Configuration

    Here is an example of how to make a union called remote for local folders. First run:

     rclone config

    This will guide you through an interactive setup process:

e/n/d/r/c/s/q> q

Standard options

    Here are the Standard options specific to union (Union merges the contents of several upstream fs).

    --union-upstreams

    List of space separated upstreams.

    Can be 'upstreama:test/dir upstreamb:', '"upstreama:test/space:ro dir" upstreamb:', etc.

  • Type: int
  • Default: 120
Advanced options

Here are the Advanced options specific to union (Union merges the contents of several upstream fs).

--union-min-free-space

Minimum viable free space for lfs/eplfs policies.

If a remote has less than this much free space then it won't be considered for use in lfs or eplfs policies.

Properties:

    Metadata

Any metadata supported by the underlying remote is read and written.

    See the metadata docs for more info.

    WebDAV

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.


    Configuration

    To configure the WebDAV remote you will need to have a URL for it, and a username and password. If you know what kind of system you are connecting to then rclone can enable extra features.

    Here is an example of how to make a remote called remote. First run:

     rclone config
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / WebDAV
   \ "webdav"
[snip]
Storage> webdav
[snip]
Choose a number from below, or type in your own value
 1 / Connect to example.com
   \ "https://example.com"
url> https://example.com/remote.php/webdav/
Name of the WebDAV site/service/software you are using
Choose a number from below, or type in your own value
 1 / Nextcloud
   \ "nextcloud"
[snip]
y/e/d> y
    rclone ls remote:

To copy a local directory to a WebDAV directory called backup

    rclone copy /home/source remote:backup

    Modified time and hashes

    Plain WebDAV does not support modified times. However when used with Owncloud or Nextcloud rclone will support modified times.

    Likewise plain WebDAV does not support hashes, however when used with Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of Owncloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them.

Standard options

    Here are the Standard options specific to webdav (WebDAV).

    --webdav-url

    URL of http host to connect to.

    E.g. https://example.com.

  • Required: true
  • --webdav-vendor


    Name of the WebDAV site/service/software you are using.

    Properties:

Advanced options

    Here are the Advanced options specific to webdav (WebDAV).

    --webdav-bearer-token-command

    Command to run to get a bearer token.

    Properties:

vendor = other
bearer_token_command = oidc-token XDC

    Yandex Disk

    Yandex Disk is a cloud storage solution created by Yandex.


    Configuration

    Here is an example of making a yandex configuration. First run

    rclone config

    This will guide you through an interactive setup process:


    If you wish to empty your trash you can use the rclone cleanup remote: command which will permanently delete all your trashed files. This command does not take any path arguments.

    Quota information

    To view your current quota you can use the rclone about remote: command which will display your usage limit (quota) and the current usage.


    Restricted filename characters

    The default restricted characters set are replaced.

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Standard options

    Here are the Standard options specific to yandex (Yandex Disk).

    --yandex-client-id

    OAuth Client Id.

    Leave blank normally.

  • Type: string
  • Required: false
Advanced options

    Here are the Advanced options specific to yandex (Yandex Disk).

    --yandex-token

    OAuth Access Token as a JSON blob.

    Properties:

  • Type: MultiEncoder
  • Default: Slash,Del,Ctl,InvalidUtf8,Dot

    Limitations

    When uploading very large files (bigger than about 5 GiB) you will need to increase the --timeout parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you'll see net/http: timeout awaiting response headers errors in the logs if this is happening. Setting the timeout to twice the max size of file in GiB should be enough, so if you want to upload a 30 GiB file set a timeout of 2 * 30 = 60m, that is --timeout 60m.

Having a Yandex Mail account is mandatory to use the Yandex.Disk subscription. Token generation will work without a mail account, but rclone won't be able to complete any actions.

    [403 - DiskUnsupportedUserAccountTypeError] User account type is not supported.

    Zoho Workdrive

    Zoho WorkDrive is a cloud storage solution created by Zoho.


    Configuration

    Here is an example of making a zoho configuration. First run

    rclone config

    This will guide you through an interactive setup process:


    No checksums are supported.

    Usage information

    To view your current quota you can use the rclone about remote: command which will display your current usage.


    Restricted filename characters

    Only control characters and invalid UTF-8 are replaced. In addition most Unicode full-width characters are not supported at all and will be removed from filenames during upload.

Standard options

    Here are the Standard options specific to zoho (Zoho).

    --zoho-client-id

    OAuth Client Id.

    Leave blank normally.

  • "jp"
  • "com.cn"
  • "com.au"
Advanced options

    Here are the Advanced options specific to zoho (Zoho).

    --zoho-token

    OAuth Access Token as a JSON blob.

    Properties:

  • Type: MultiEncoder
  • Default: Del,Ctl,InvalidUtf8
Setting up your own client_id

For Zoho we advise you to set up your own client_id. To do so you have to complete the following steps.

1. Log in to the Zoho API Console
2. Create a new client of type "Server-based Application". The name and website don't matter, but you must add the redirect URL http://localhost:53682/.
3. Once the client is created, you can go to the settings tab and enable it in other regions.

    The client id and client secret can now be used with rclone.

    Local Filesystem

    Local paths are specified as normal filesystem paths, e.g. /path/to/wherever, so

    rclone sync -i /home/source /tmp/destination

    Will sync /home/source to /tmp/destination.


    Configuration

For consistency's sake one can also configure a remote of type local in the config file, and access the local filesystem using rclone remote paths, e.g. remote:path/to/wherever, but it is probably easier not to.
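Such a section is minimal; for a hypothetical remote named mylocal it would be just:

```ini
[mylocal]
type = local
```

after which, e.g., rclone ls mylocal:/path/to/wherever behaves like rclone ls /path/to/wherever.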

    Modified time

Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.


    NB Rclone (like most unix tools such as du, rsync and tar) treats a bind mount to the same device as being on the same filesystem.

    NB This flag is only available on Unix based systems. On systems where it isn't supported (e.g. Windows) it will be ignored.

Advanced options

    Here are the Advanced options specific to local (Local Disk).

    --local-nounc

    Disable UNC (long path names) conversion on Windows.

    Properties:

    Changelog

v1.59.0 - 2022-07-09

See commits

v1.58.1 - 2022-04-29

See commits

    v1.58.0 - 2022-03-18

    See commits

    Bugs and Limitations


    Limitations

    Directory timestamps aren't preserved

    Rclone doesn't currently preserve the timestamps of directories. This is because rclone only really considers objects when syncing.

    Rclone struggles with millions of files in a directory/bucket


    Rclone can sync between two remote cloud storage systems just fine.

    Note that it effectively downloads the file and uploads it again, so the node running rclone would need to have lots of bandwidth.

    The syncs would be incremental (on a file by file basis).


    e.g.

    rclone sync -i drive:Folder s3:bucket

    Using rclone from multiple locations at the same time

You can use rclone from multiple places at the same time if you choose a different subdirectory for the output, e.g.


    e.g.

    export no_proxy=localhost,127.0.0.0/8,my.host.name
     export NO_PROXY=$no_proxy

    Note that the FTP backend does not support ftp_proxy yet.

    Rclone gives x509: failed to load system roots and no roots provided error

    This means that rclone can't find the SSL root certificates. Likely you are running rclone on a NAS with a cut-down Linux OS, or possibly on Solaris.

    Rclone (via the Go runtime) tries to load the root certificates from these places on Linux.

  • Vincent Murphy vdm@vdm.ie
  • ctrl-q 34975747+ctrl-q@users.noreply.github.com
  • Nil Alexandrov nalexand@akamai.com
  • GuoXingbin 101376330+guoxingbin@users.noreply.github.com
  • Berkan Teber berkan@berkanteber.com
  • Tobias Klauser tklauser@distanz.ch
  • KARBOWSKI Piotr piotr.karbowski@gmail.com
  • GH geeklihui@foxmail.com
  • rafma0 int.main@gmail.com
  • Adrien Rey-Jarthon jobs@adrienjarthon.com
  • Nick Gooding 73336146+nickgooding@users.noreply.github.com
  • Leroy van Logchem lr.vanlogchem@gmail.com
  • Zsolt Ero zsolt.ero@gmail.com
  • Lesmiscore nao20010128@gmail.com
  • ehsantdy ehsan.tadayon@arvancloud.com
  • SwazRGB 65694696+swazrgb@users.noreply.github.com
  • Mateusz Puczyński mati6095@gmail.com
  • Michael C Tiernan - MIT-Research Computing Project mtiernan@mit.edu
  • Kaspian 34658474+KaspianDev@users.noreply.github.com
  • Werner EvilOlaf@users.noreply.github.com
  • Hugal31 hugo.laloge@gmail.com
  • Christian Galo 36752715+cgalo5758@users.noreply.github.com
  • Erik van Velzen erik@evanv.nl
  • Derek Battams derek@battams.ca
  • SimonLiu simonliu009@users.noreply.github.com
  • Hugo Laloge hla@lescompanions.com
  • Mr-Kanister 68117355+Mr-Kanister@users.noreply.github.com
  • Rob Pickerill r.pickerill@gmail.com
  • Andrey to.merge@gmail.com
  • Eric Wolf 19wolf@gmail.com
  • Nick nick.naumann@mailbox.tu-dresden.de
  • Jason Zheng jszheng17@gmail.com
  • Matthew Vernon mvernon@wikimedia.org
  • Noah Hsu i@nn.ci
  • m00594701 mengpengbo@huawei.com
  • Art M. Gallagher artmg50@gmail.com
  • Sven Gerber 49589423+svengerber@users.noreply.github.com
  • CrossR r.cross@lancaster.ac.uk
  • Maciej Radzikowski maciej@radzikowski.com.pl
  • Scott Grimes scott.grimes@spaciq.com
  • Phil Shackleton 71221528+philshacks@users.noreply.github.com
  • eNV25 env252525@gmail.com
  • Caleb inventor96@users.noreply.github.com
  • J-P Treen jp@wraptious.com
  • Martin Czygan 53705+miku@users.noreply.github.com
  • buda sandrojijavadze@protonmail.com
  • mirekphd 36706320+mirekphd@users.noreply.github.com
  • vyloy vyloy@qq.com
  • Anthrazz 25553648+Anthrazz@users.noreply.github.com
  • zzr93 34027824+zzr93@users.noreply.github.com
  • Paul Norman penorman@mac.com
  • Lorenzo Maiorfi maiorfi@gmail.com
  • Claudio Maradonna penguyman@stronzi.org
  • Ovidiu Victor Tatar ovi.tatar@googlemail.com

Contact the rclone project

    Forum

    diff --git a/MANUAL.md b/MANUAL.md index 58e4ddad6..6cd4e9604 100644 --- a/MANUAL.md +++ b/MANUAL.md @@ -1,6 +1,6 @@ % rclone(1) User Manual % Nick Craig-Wood -% Mar 18, 2022 +% Jul 09, 2022 # Rclone syncs your files to cloud storage @@ -92,7 +92,7 @@ Rclone helps you: - [Move](https://rclone.org/commands/rclone_move/) files to cloud storage deleting the local after verification - [Check](https://rclone.org/commands/rclone_check/) hashes and for missing/extra files - [Mount](https://rclone.org/commands/rclone_mount/) your cloud storage as a network disk -- [Serve](https://rclone.org/commands/rclone_serve/) local or remote files over [HTTP](https://rclone.org/commands/rclone_serve_http/)/[WebDav](https://rclone.org/commands/rclone_serve_webdav/)/[FTP](https://rclone.org/commands/rclone_serve_ftp/)/[SFTP](https://rclone.org/commands/rclone_serve_sftp/)/[dlna](https://rclone.org/commands/rclone_serve_dlna/) +- [Serve](https://rclone.org/commands/rclone_serve/) local or remote files over [HTTP](https://rclone.org/commands/rclone_serve_http/)/[WebDav](https://rclone.org/commands/rclone_serve_webdav/)/[FTP](https://rclone.org/commands/rclone_serve_ftp/)/[SFTP](https://rclone.org/commands/rclone_serve_sftp/)/[DLNA](https://rclone.org/commands/rclone_serve_dlna/) - Experimental [Web based GUI](https://rclone.org/gui/) ## Supported providers {#providers} @@ -109,8 +109,11 @@ WebDAV or S3, that work out of the box.) - Backblaze B2 - Box - Ceph +- China Mobile Ecloud Elastic Object Storage (EOS) +- Arvan Cloud Object Storage (AOS) - Citrix ShareFile - C14 +- Cloudflare R2 - DigitalOcean Spaces - Digi Storage - Dreamhost @@ -121,10 +124,14 @@ WebDAV or S3, that work out of the box.) - Google Drive - Google Photos - HDFS +- Hetzner Storage Box +- HiDrive - HTTP - Hubic +- Internet Archive - Jottacloud - IBM COS S3 +- IDrive e2 - Koofr - Mail.ru Cloud - Memset Memstore @@ -163,18 +170,32 @@ WebDAV or S3, that work out of the box.) 
- The local filesystem -Links +## Virtual providers + +These backends adapt or modify other storage providers: + +- Alias: Rename existing remotes +- Cache: Cache remotes (DEPRECATED) +- Chunker: Split large files +- Combine: Combine multiple remotes into a directory tree +- Compress: Compress files +- Crypt: Encrypt files +- Hasher: Hash files +- Union: Join multiple remotes to work together + + +## Links * [Home page](https://rclone.org/) * [GitHub project page for source and bug tracker](https://github.com/rclone/rclone) * [Rclone Forum](https://forum.rclone.org) * [Downloads](https://rclone.org/downloads/) -# Install # +# Install Rclone is a Go program and comes as a single binary file. -## Quickstart ## +## Quickstart * [Download](https://rclone.org/downloads/) the relevant binary. * Extract the `rclone` executable, `rclone.exe` on Windows, from the archive. @@ -189,20 +210,20 @@ run `rclone -h`. Already installed rclone can be easily updated to the latest version using the [rclone selfupdate](https://rclone.org/commands/rclone_selfupdate/) command. -## Script installation ## +## Script installation To install rclone on Linux/macOS/BSD systems, run: - curl https://rclone.org/install.sh | sudo bash + sudo -v ; curl https://rclone.org/install.sh | sudo bash For beta installation, run: - curl https://rclone.org/install.sh | sudo bash -s beta + sudo -v ; curl https://rclone.org/install.sh | sudo bash -s beta Note that this script checks the version of rclone installed first and won't re-download if not needed. -## Linux installation from precompiled binary ## +## Linux installation from precompiled binary Fetch and unpack @@ -226,7 +247,7 @@ Run `rclone config` to setup. 
See [rclone config docs](https://rclone.org/docs/) rclone config -## macOS installation with brew ## +## macOS installation with brew brew install rclone @@ -235,7 +256,7 @@ NOTE: This version of rclone will not support `mount` any more (see on macOS, either install a precompiled binary or enable the relevant option when [installing from source](#install-from-source). -## macOS installation from precompiled binary, using curl ## +## macOS installation from precompiled binary, using curl To avoid problems with macOS gatekeeper enforcing the binary to be signed and notarized it is enough to download with `curl`. @@ -263,20 +284,20 @@ Run `rclone config` to setup. See [rclone config docs](https://rclone.org/docs/) rclone config -## macOS installation from precompiled binary, using a web browser ## +## macOS installation from precompiled binary, using a web browser When downloading a binary with a web browser, the browser will set the macOS gatekeeper quarantine attribute. Starting from Catalina, when attempting to run `rclone`, a pop-up will appear saying: - “rclone” cannot be opened because the developer cannot be verified. + "rclone" cannot be opened because the developer cannot be verified. macOS cannot verify that this app is free from malware. The simplest fix is to run xattr -d com.apple.quarantine rclone -## Install with docker ## +## Install with docker The rclone maintains a [docker image for rclone](https://hub.docker.com/r/rclone/rclone). These images are autobuilt by docker hub from the rclone source based @@ -355,39 +376,93 @@ ls ~/data/mount kill %1 ``` -## Install from source ## +## Install from source -Make sure you have at least [Go](https://golang.org/) go1.15 -installed. [Download go](https://golang.org/dl/) if necessary. The -latest release is recommended. Then +Make sure you have git and [Go](https://golang.org/) installed. +Go version 1.16 or newer is required, latest release is recommended. 
+You can get it from your package manager, or download it from +[golang.org/dl](https://golang.org/dl/). Then you can run the following: -```sh +``` git clone https://github.com/rclone/rclone.git cd rclone go build -# If on macOS and mount is wanted, instead run: make GOTAGS=cmount -./rclone version ``` -This will leave you a checked out version of rclone you can modify and -send pull requests with. If you use `make` instead of `go build` then -the rclone build will have the correct version information in it. +This will check out the rclone source in subfolder rclone, which you can later +modify and send pull requests with. Then it will build the rclone executable +in the same folder. As an initial check you can now run `./rclone version` +(`.\rclone version` on Windows). -You can also build the latest stable rclone with: +Note that on macOS and Windows the [mount](https://rclone.org/commands/rclone_mount/) +command will not be available unless you specify additional build tag `cmount`. - go get github.com/rclone/rclone +``` +go build -tags cmount +``` -or the latest version (equivalent to the beta) with +This assumes you have a GCC compatible C compiler (GCC or Clang) in your PATH, +as it uses [cgo](https://pkg.go.dev/cmd/cgo). But on Windows, the +[cgofuse](https://github.com/winfsp/cgofuse) library that the cmount +implementation is based on, also supports building +[without cgo](https://github.com/golang/go/wiki/WindowsDLLs), i.e. by setting +environment variable CGO_ENABLED to value 0 (static linking). This is how the +official Windows release of rclone is being built, starting with version 1.59. +It is still possible to build with cgo on Windows as well, by using the MinGW +port of GCC, e.g. by installing it in a [MSYS2](https://www.msys2.org) +distribution (make sure you install it in the classic mingw64 subsystem, the +ucrt64 version is not compatible). 
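As an illustrative sketch of the cgo-free Windows build described above (PowerShell syntax; assumes a Go toolchain is installed and the WinFsp requirements below are met):

```
$env:CGO_ENABLED = "0"   # static linking, no C compiler required
go build -tags cmount
```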
- go get github.com/rclone/rclone@master +Additionally, on Windows, you must install the third party utility +[WinFsp](http://www.secfs.net/winfsp/), with the "Developer" feature selected. +If building with cgo, you must also set environment variable CPATH pointing to +the fuse include directory within the WinFsp installation +(normally `C:\Program Files (x86)\WinFsp\inc\fuse`). -These will build the binary in `$(go env GOPATH)/bin` -(`~/go/bin/rclone` by default) after downloading the source to the go -module cache. Note - do **not** use the `-u` flag here. This causes go -to try to update the dependencies that rclone uses and sometimes these -don't work with the current version of rclone. +You may also add arguments `-ldflags -s` (with or without `-tags cmount`), +to omit symbol table and debug information, making the executable file smaller, +and `-trimpath` to remove references to local file system paths. This is how +the official rclone releases are built. -## Installation with Ansible ## +``` +go build -trimpath -ldflags -s -tags cmount +``` + +Instead of executing the `go build` command directly, you can run it via the +Makefile, which also sets version information and copies the resulting rclone +executable into your GOPATH bin folder (`$(go env GOPATH)/bin`, which +corresponds to `~/go/bin/rclone` by default). + +``` +make +``` + +To include mount command on macOS and Windows with Makefile build: + +``` +make GOTAGS=cmount +``` + +As an alternative you can download the source, build and install rclone in one +operation, as a regular Go package. The source will be stored it in the Go +module cache, and the resulting executable will be in your GOPATH bin folder +(`$(go env GOPATH)/bin`, which corresponds to `~/go/bin/rclone` by default). 
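Putting the steps above together, a typical from-source build session might look like the following (a sketch; assumes git and Go 1.16 or newer on PATH, and the release-style flags are optional):

```
git clone https://github.com/rclone/rclone.git
cd rclone
go build -trimpath -ldflags -s
./rclone version
```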
+ +With Go version 1.17 or newer: + +``` +go install github.com/rclone/rclone@latest +``` + +With Go versions older than 1.17 (do **not** use the `-u` flag, it causes Go to +try to update the dependencies that rclone uses and sometimes these don't work +with the current version): + +``` +go get github.com/rclone/rclone +``` + +## Installation with Ansible This can be done with [Stefan Weichinger's ansible role](https://github.com/stefangweichinger/ansible-rclone). @@ -403,7 +478,7 @@ Instructions - rclone ``` -## Portable installation ## +## Portable installation As mentioned [above](https://rclone.org/install/#quickstart), rclone is single executable (`rclone`, or `rclone.exe` on Windows) that you can download as a @@ -481,7 +556,7 @@ the [PsExec](https://docs.microsoft.com/en-us/sysinternals/downloads/psexec) utility from Microsoft's Sysinternals suite, which takes option `-s` to execute commands as the `SYSTEM` user. -#### Start from Startup folder ### +#### Start from Startup folder To quickly execute an rclone command you can simply create a standard Windows Explorer shortcut for the complete rclone command you want to run. If you @@ -496,7 +571,7 @@ functionality to set it to run as different user, or to set conditions or actions on certain events. Setting up a scheduled task as described below will often give you better results. -#### Start from Task Scheduler ### +#### Start from Task Scheduler Task Scheduler is an administrative tool built into Windows, and it can be used to configure rclone to be started automatically in a highly configurable way, e.g. @@ -506,14 +581,14 @@ be available to all users it can run as the `SYSTEM` user. For technical information, see https://docs.microsoft.com/windows/win32/taskschd/task-scheduler-start-page. -#### Run as service ### +#### Run as service For running rclone at system startup, you can create a Windows service that executes your rclone command, as an alternative to scheduled task configured to run at startup. 
-##### Mount command built-in service integration #### +##### Mount command built-in service integration -For mount commands, Rclone has a built-in Windows service integration via the third-party +For mount commands, rclone has a built-in Windows service integration via the third-party WinFsp library it uses. Registering as a regular Windows service easy, as you just have to execute the built-in PowerShell command `New-Service` (requires administrative privileges). @@ -533,7 +608,7 @@ Windows standard methods for managing network drives. This is currently not officially supported by Rclone, but with WinFsp version 2019.3 B2 / v1.5B2 or later it should be possible through path rewriting as described [here](https://github.com/rclone/rclone/issues/3340). -##### Third-party service integration ##### +##### Third-party service integration To Windows service running any rclone command, the excellent third-party utility [NSSM](http://nssm.cc), the "Non-Sucking Service Manager", can be used. @@ -601,6 +676,7 @@ See the following for detailed instructions for * [Chunker](https://rclone.org/chunker/) - transparently splits large files for other remotes * [Citrix ShareFile](https://rclone.org/sharefile/) * [Compress](https://rclone.org/compress/) + * [Combine](https://rclone.org/combine/) * [Crypt](https://rclone.org/crypt/) - to encrypt other remotes * [DigitalOcean Spaces](https://rclone.org/s3/#digitalocean-spaces) * [Digi Storage](https://rclone.org/koofr/#digi-storage) @@ -612,8 +688,10 @@ See the following for detailed instructions for * [Google Photos](https://rclone.org/googlephotos/) * [Hasher](https://rclone.org/hasher/) - to handle checksums for other remotes * [HDFS](https://rclone.org/hdfs/) + * [HiDrive](https://rclone.org/hidrive/) * [HTTP](https://rclone.org/http/) * [Hubic](https://rclone.org/hubic/) + * [Internet Archive](https://rclone.org/internetarchive/) * [Jottacloud](https://rclone.org/jottacloud/) * [Koofr](https://rclone.org/koofr/) * [Mail.ru 
Cloud](https://rclone.org/mailru/) @@ -715,13 +793,18 @@ Copy files from source to dest, skipping identical files. Copy the source to the destination. Does not transfer files that are identical on source and destination, testing by size and modification -time or MD5SUM. Doesn't delete files from the destination. +time or MD5SUM. Doesn't delete files from the destination. If you +want to also delete files from destination, to make it match source, +use the [sync](https://rclone.org/commands/rclone_sync/) command instead. Note that it is always the contents of the directory that is synced, -not the directory so when source:path is a directory, it's the +not the directory itself. So when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. +To copy single files, use the [copyto](https://rclone.org/commands/rclone_copyto/) +command instead. + If dest:path doesn't exist, it is created and the source:path contents go there. @@ -793,7 +876,9 @@ Sync the source to the destination, changing the destination only. Doesn't transfer files that are identical on source and destination, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files -if necessary (except duplicate objects, see below). +if necessary (except duplicate objects, see below). If you don't +want to delete files from destination, use the +[copy](https://rclone.org/commands/rclone_copy/) command instead. **Important**: Since this can cause data loss, test first with the `--dry-run` or the `--interactive`/`-i` flag. @@ -805,9 +890,9 @@ errors at any point. Duplicate objects (files with the same name, on those providers that support it) are also not yet handled. It is always the contents of the directory that is synced, not the -directory so when source:path is a directory, it's the contents of +directory itself. 
So when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See -extended explanation in the `copy` command above if unsure. +extended explanation in the [copy](https://rclone.org/commands/rclone_copy/) command if unsure. If dest:path doesn't exist, it is created and the source:path contents go there. @@ -846,6 +931,9 @@ Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap and the remote does not support a server-side directory move operation. +To move single files, use the [moveto](https://rclone.org/commands/rclone_moveto/) +command instead. + If no filters are in use and if possible this will server-side move `source:path` into `dest:path`. After this `source:path` will no longer exist. @@ -856,7 +944,8 @@ move will be used, otherwise it will copy it (server-side if possible) into `dest:path` then delete the original (if no errors on copy) in `source:path`. -If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag. +If you want to delete empty source directories after move, use the +`--delete-empty-src-dirs` flag. See the [--no-traverse](https://rclone.org/docs/#no-traverse) option for controlling whether rclone lists the destination directory or not. Supplying this @@ -894,16 +983,16 @@ Remove the files in path. ## Synopsis -Remove the files in path. Unlike `purge` it obeys include/exclude -filters so can be used to selectively delete files. +Remove the files in path. Unlike [purge](https://rclone.org/commands/rclone_purge/) it +obeys include/exclude filters so can be used to selectively delete files. `rclone delete` only deletes files but leaves the directory structure alone. If you want to delete a directory and all of its contents use -the `purge` command. +the [purge](https://rclone.org/commands/rclone_purge/) command. 
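The distinction above can be illustrated as follows (a sketch; the remote name and filter pattern are placeholders):

```
rclone delete remote:path --include "*.tmp" --dry-run   # filter-aware, files only
rclone purge remote:path                                # removes path and ALL contents
```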
If you supply the `--rmdirs` flag, it will remove all empty directories along with it. -You can also use the separate command `rmdir` or `rmdirs` to -delete empty directories only. +You can also use the separate command [rmdir](https://rclone.org/commands/rclone_rmdir/) or +[rmdirs](https://rclone.org/commands/rclone_rmdirs/) to delete empty directories only. For example, to delete all files bigger than 100 MiB, you may first want to check what would be deleted (use either): @@ -947,9 +1036,10 @@ Remove the path and all of its contents. Remove the path and all of its contents. Note that this does not obey -include/exclude filters - everything will be removed. Use the `delete` -command if you want to selectively delete files. To delete empty directories only, -use command `rmdir` or `rmdirs`. +include/exclude filters - everything will be removed. Use the +[delete](https://rclone.org/commands/rclone_delete/) command if you want to selectively +delete files. To delete empty directories only, use command +[rmdir](https://rclone.org/commands/rclone_rmdir/) or [rmdirs](https://rclone.org/commands/rclone_rmdirs/). **Important**: Since this can cause data loss, test first with the `--dry-run` or the `--interactive`/`-i` flag. @@ -1000,10 +1090,10 @@ Remove the empty directory at path. This removes empty directory given by path. Will not remove the path if it has any objects in it, not even empty subdirectories. Use -command `rmdirs` (or `delete` with option `--rmdirs`) -to do that. +command [rmdirs](https://rclone.org/commands/rclone_rmdirs/) (or [delete](https://rclone.org/commands/rclone_delete/) +with option `--rmdirs`) to do that. -To delete a path and any objects in it, use `purge` command. +To delete a path and any objects in it, use [purge](https://rclone.org/commands/rclone_purge/) command. ``` @@ -1033,6 +1123,10 @@ Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files that don't match. 
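For example, a quick size-only comparison might look like this (the paths are placeholders):

```
rclone check source:path dest:path --size-only
```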
It doesn't alter the source or destination. +For the [crypt](https://rclone.org/crypt/) remote there is a dedicated command, +[cryptcheck](https://rclone.org/commands/rclone_cryptcheck/), that are able to check +the checksums of the crypted files. + If you supply the `--size-only` flag, it will only compare the sizes not the hashes as well. Use this for a quick check. @@ -1157,7 +1251,7 @@ List all directories/containers/buckets in the path. Lists the directories in the source path to standard output. Does not -recurse by default. Use the -R flag to recurse. +recurse by default. Use the `-R` flag to recurse. This command lists the total size of the directory (if known, -1 if not), the modification time (if known, the current time if not), the @@ -1175,7 +1269,7 @@ Or -1 2017-01-03 14:40:54 -1 2500files -1 2017-07-08 14:39:28 -1 4000files -If you just want the directory names use "rclone lsf --dirs-only". +If you just want the directory names use `rclone lsf --dirs-only`. Any of the filtering options can be applied to this command. @@ -1291,6 +1385,10 @@ not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling MD5 for any remote. +For other algorithms, see the [hashsum](https://rclone.org/commands/rclone_hashsum/) +command. Running `rclone md5sum remote:path` is equivalent +to running `rclone hashsum MD5 remote:path`. + This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hypen will be treated literaly, @@ -1332,6 +1430,10 @@ not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling SHA-1 for any remote. +For other algorithms, see the [hashsum](https://rclone.org/commands/rclone_hashsum/) +command. 
Running `rclone sha1sum remote:path` is equivalent +to running `rclone hashsum SHA1 remote:path`. + This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hypen will be treated literaly, @@ -1365,6 +1467,28 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Prints the total size and number of objects in remote:path. +## Synopsis + + +Counts objects in the path and calculates the total size. Prints the +result to standard output. + +By default the output is in human-readable format, but shows values in +both human-readable format as well as the raw numbers (global option +`--human-readable` is not considered). Use option `--json` +to format output as JSON instead. + +Recurses by default, use `--max-depth 1` to stop the +recursion. + +Some backends do not always provide file sizes, see for example +[Google Photos](https://rclone.org/googlephotos/#size) and +[Google Drive](https://rclone.org/drive/#limitations-of-google-docs). +Rclone will then show a notice in the log indicating how many such +files were encountered, and count them in as empty files in the output +of the size command. + + ``` rclone size remote:path [flags] ``` @@ -1488,7 +1612,7 @@ Opendrive) that can have duplicate file names. It can be run on wrapping backend (e.g. crypt) if they wrap a backend which supports duplicate file names. -However if --by-hash is passed in then dedupe will find files with +However if `--by-hash` is passed in then dedupe will find files with duplicate hashes instead which will work on any backend which supports at least one hash. This can be used to find files with duplicate content. This is known as deduping by hash. 
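For example, deduping by hash might be invoked as (a sketch; the remote is a placeholder):

```
rclone dedupe --by-hash remote:path
```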
@@ -1916,11 +2040,10 @@ See the [global flags page](https://rclone.org/flags/) for global options not li # rclone completion -generate the autocompletion script for the specified shell +Generate the autocompletion script for the specified shell ## Synopsis - Generate the autocompletion script for rclone for the specified shell. See each sub-command's help for details on how to use the generated script. @@ -1936,34 +2059,38 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -* [rclone completion bash](https://rclone.org/commands/rclone_completion_bash/) - generate the autocompletion script for bash -* [rclone completion fish](https://rclone.org/commands/rclone_completion_fish/) - generate the autocompletion script for fish -* [rclone completion powershell](https://rclone.org/commands/rclone_completion_powershell/) - generate the autocompletion script for powershell -* [rclone completion zsh](https://rclone.org/commands/rclone_completion_zsh/) - generate the autocompletion script for zsh +* [rclone completion bash](https://rclone.org/commands/rclone_completion_bash/) - Generate the autocompletion script for bash +* [rclone completion fish](https://rclone.org/commands/rclone_completion_fish/) - Generate the autocompletion script for fish +* [rclone completion powershell](https://rclone.org/commands/rclone_completion_powershell/) - Generate the autocompletion script for powershell +* [rclone completion zsh](https://rclone.org/commands/rclone_completion_zsh/) - Generate the autocompletion script for zsh # rclone completion bash -generate the autocompletion script for bash +Generate the autocompletion script for bash ## Synopsis - Generate the autocompletion script for the bash shell. This script depends on the 'bash-completion' package. If it is not installed already, you can install it via your OS's package manager. 
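For example, on Debian-based systems the dependency could be installed with (the package manager here is an assumption):

```
sudo apt install bash-completion
```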
To load completions in your current shell session: -$ source <(rclone completion bash) + + source <(rclone completion bash) To load completions for every new session, execute once: -Linux: - $ rclone completion bash > /etc/bash_completion.d/rclone -MacOS: - $ rclone completion bash > /usr/local/etc/bash_completion.d/rclone + +### Linux: + + rclone completion bash > /etc/bash_completion.d/rclone + +### macOS: + + rclone completion bash > /usr/local/etc/bash_completion.d/rclone You will need to start a new shell for this setup to take effect. - + ``` rclone completion bash @@ -1980,22 +2107,23 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## SEE ALSO -* [rclone completion](https://rclone.org/commands/rclone_completion/) - generate the autocompletion script for the specified shell +* [rclone completion](https://rclone.org/commands/rclone_completion/) - Generate the autocompletion script for the specified shell # rclone completion fish -generate the autocompletion script for fish +Generate the autocompletion script for fish ## Synopsis - Generate the autocompletion script for the fish shell. To load completions in your current shell session: -$ rclone completion fish | source + + rclone completion fish | source To load completions for every new session, execute once: -$ rclone completion fish > ~/.config/fish/completions/rclone.fish + + rclone completion fish > ~/.config/fish/completions/rclone.fish You will need to start a new shell for this setup to take effect. 
@@ -2015,19 +2143,19 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## SEE ALSO -* [rclone completion](https://rclone.org/commands/rclone_completion/) - generate the autocompletion script for the specified shell +* [rclone completion](https://rclone.org/commands/rclone_completion/) - Generate the autocompletion script for the specified shell # rclone completion powershell -generate the autocompletion script for powershell +Generate the autocompletion script for powershell ## Synopsis - Generate the autocompletion script for powershell. To load completions in your current shell session: -PS C:\> rclone completion powershell | Out-String | Invoke-Expression + + rclone completion powershell | Out-String | Invoke-Expression To load completions for every new session, add the output of the above command to your powershell profile. @@ -2048,27 +2176,30 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## SEE ALSO -* [rclone completion](https://rclone.org/commands/rclone_completion/) - generate the autocompletion script for the specified shell +* [rclone completion](https://rclone.org/commands/rclone_completion/) - Generate the autocompletion script for the specified shell # rclone completion zsh -generate the autocompletion script for zsh +Generate the autocompletion script for zsh ## Synopsis - Generate the autocompletion script for the zsh shell. If shell completion is not already enabled in your environment you will need to enable it. 
You can execute the following once: -$ echo "autoload -U compinit; compinit" >> ~/.zshrc + echo "autoload -U compinit; compinit" >> ~/.zshrc To load completions for every new session, execute once: -# Linux: -$ rclone completion zsh > "${fpath[1]}/_rclone" -# macOS: -$ rclone completion zsh > /usr/local/share/zsh/site-functions/_rclone + +### Linux: + + rclone completion zsh > "${fpath[1]}/_rclone" + +### macOS: + + rclone completion zsh > /usr/local/share/zsh/site-functions/_rclone You will need to start a new shell for this setup to take effect. @@ -2088,7 +2219,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## SEE ALSO -* [rclone completion](https://rclone.org/commands/rclone_completion/) - generate the autocompletion script for the specified shell +* [rclone completion](https://rclone.org/commands/rclone_completion/) - Generate the autocompletion script for the specified shell # rclone config create @@ -2656,8 +2787,8 @@ If source:path is a file or directory then it copies it to a file or directory named dest:path. This can be used to upload single files to other than their current -name. If the source is a directory then it acts exactly like the copy -command. +name. If the source is a directory then it acts exactly like the +[copy](https://rclone.org/commands/rclone_copy/) command. So @@ -2707,10 +2838,11 @@ Copy url content to dest. Download a URL's content and copy it to the destination without saving it in temporary storage. -Setting `--auto-filename` will cause the file name to be retrieved from -the URL (after any redirections) and used in the destination -path. With `--print-filename` in addition, the resulting file name will -be printed. +Setting `--auto-filename` will attempt to automatically determine the filename from the URL +(after any redirections) and used in the destination path. 
+With `--auto-filename-header` in +addition, if a specific filename is set in HTTP headers, it will be used instead of the name from the URL. +With `--print-filename` in addition, the resulting file name will be printed. Setting `--no-clobber` will prevent overwriting file on the destination if there is one with the same name. @@ -2726,11 +2858,12 @@ rclone copyurl https://example.com dest:path [flags] ## Options ``` - -a, --auto-filename Get the file name from the URL and use it for destination file path - -h, --help help for copyurl - --no-clobber Prevent overwriting file with same name - -p, --print-filename Print the resulting name from --auto-filename - --stdout Write the output to stdout rather than a file + -a, --auto-filename Get the file name from the URL and use it for destination file path + --header-filename Get the file name from the Content-Disposition header + -h, --help help for copyurl + --no-clobber Prevent overwriting file with same name + -p, --print-filename Print the resulting name from --auto-filename + --stdout Write the output to stdout rather than a file ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. @@ -2746,9 +2879,9 @@ Cryptcheck checks the integrity of a crypted remote. ## Synopsis -rclone cryptcheck checks a remote against a crypted remote. This is -the equivalent of running rclone check, but able to check the -checksums of the crypted remote. +rclone cryptcheck checks a remote against a [crypted](https://rclone.org/crypt/) remote. +This is the equivalent of running rclone [check](https://rclone.org/commands/rclone_check/), +but able to check the checksums of the crypted remote. For it to work the underlying remote of the cryptedremote must support some kind of checksum. @@ -2824,7 +2957,7 @@ Cryptdecode returns unencrypted file names. rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items. 
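For example (the remote and file names are placeholders):

```
rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
```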
-If you supply the --reverse flag, it will return encrypted file names. +If you supply the `--reverse` flag, it will return encrypted file names. use it like this @@ -2832,8 +2965,8 @@ use it like this rclone cryptdecode --reverse encryptedremote: filename1 filename2 -Another way to accomplish this is by using the `rclone backend encode` (or `decode`)command. -See the documentation on the `crypt` overlay for more info. +Another way to accomplish this is by using the `rclone backend encode` (or `decode`) command. +See the documentation on the [crypt](https://rclone.org/crypt/) overlay for more info. ``` @@ -2889,7 +3022,7 @@ Output completion script for a given shell. Generates a shell completion script for rclone. -Run with --help to list the supported shells. +Run with `--help` to list the supported shells. ## Options @@ -3073,6 +3206,9 @@ not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling any hash for any remote. +For the MD5 and SHA1 algorithms there are also dedicated commands, +[md5sum](https://rclone.org/commands/rclone_md5sum/) and [sha1sum](https://rclone.org/commands/rclone_sha1sum/). + This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hypen will be treated literaly, @@ -3088,6 +3224,7 @@ Run without a hash to see the list of all supported hashes, e.g. * crc32 * sha256 * dropbox + * hidrive * mailru * quickxor @@ -3174,7 +3311,7 @@ List all the remotes in the config file. rclone listremotes lists all the available remotes from the config file. -When uses with the -l flag it lists the types too. +When used with the `--long` flag it lists the types too. ``` @@ -3215,7 +3352,7 @@ Eg ferejej3gux/ fubuwic -Use the --format option to control what gets listed. By default this +Use the `--format` option to control what gets listed. 
By default this is just the path, but you can use these parameters to control the output: @@ -3228,9 +3365,10 @@ output: m - MimeType of object if known e - encrypted name T - tier of storage if known, e.g. "Hot" or "Cool" + M - Metadata of object in JSON blob format, eg {"key":"value"} So if you wanted the path, size and modification time, you would use ---format "pst", or maybe --format "tsp" to put the path last. +`--format "pst"`, or maybe `--format "tsp"` to put the path last. Eg @@ -3242,7 +3380,7 @@ Eg 2016-06-25 18:55:40;37600;fubuwic If you specify "h" in the format you will get the MD5 hash by default, -use the "--hash" flag to change which hash you want. Note that this +use the `--hash` flag to change which hash you want. Note that this can be returned as an empty string if it isn't available on the object (and for directories), "ERROR" if there was an error reading it from the object and "UNSUPPORTED" if that object does not support that hash @@ -3264,7 +3402,7 @@ Eg (Though "rclone md5sum ." is an easier way of typing this.) By default the separator is ";" this can be changed with the ---separator flag. Note that separators aren't escaped in the path so +`--separator` flag. Note that separators aren't escaped in the path so putting it last is a good strategy. Eg @@ -3286,8 +3424,8 @@ Eg test.sh,449 "this file contains a comma, in the file name.txt",6 -Note that the --absolute parameter is useful for making lists of files -to pass to an rclone copy with the --files-from-raw flag. +Note that the `--absolute` parameter is useful for making lists of files +to pass to an rclone copy with the `--files-from-raw` flag. For example, to find all the files modified within one day and copy those only (without traversing the whole directory structure): @@ -3354,7 +3492,7 @@ List directories and objects in the path in JSON format. 
The output is an array of Items, where each Item looks like this - { + { "Hashes" : { "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", "MD5" : "b1946ac92492d2347c6235b4d2611184", @@ -3372,29 +3510,32 @@ The output is an array of Items, where each Item looks like this "Path" : "full/path/goes/here/file.txt", "Size" : 6, "Tier" : "hot", - } + } -If --hash is not specified the Hashes property won't be emitted. The -types of hash can be specified with the --hash-type parameter (which -may be repeated). If --hash-type is set then it implies --hash. +If `--hash` is not specified the Hashes property won't be emitted. The +types of hash can be specified with the `--hash-type` parameter (which +may be repeated). If `--hash-type` is set then it implies `--hash`. -If --no-modtime is specified then ModTime will be blank. This can +If `--no-modtime` is specified then ModTime will be blank. This can speed things up on remotes where reading the ModTime takes an extra request (e.g. s3, swift). -If --no-mimetype is specified then MimeType will be blank. This can +If `--no-mimetype` is specified then MimeType will be blank. This can speed things up on remotes where reading the MimeType takes an extra request (e.g. s3, swift). -If --encrypted is not specified the Encrypted won't be emitted. +If `--encrypted` is not specified the Encrypted won't be emitted. -If --dirs-only is not specified files in addition to directories are +If `--dirs-only` is not specified files in addition to directories are returned -If --files-only is not specified directories in addition to the files +If `--files-only` is not specified directories in addition to the files will be returned. -if --stat is set then a single JSON blob will be returned about the +If `--metadata` is set then an additional Metadata key will be returned. +This will have metdata in rclone standard format as a JSON object. + +if `--stat` is set then a single JSON blob will be returned about the item pointed to. 
This will return an error if the item isn't found. However on bucket
based backends (like s3, gcs, b2, azureblob etc) if the item isn't
found it will return an empty directory as it isn't
@@ -3403,7 +3544,7 @@ possible to tell empty directories from missing directories there.
The Path field will only show folders below the remote path being listed.
If "remote:path" contains the file "subfolder/file.txt", the Path for
"file.txt" will be "subfolder/file.txt", not
"remote:path/subfolder/file.txt".
-When used without --recursive the Path will always be the same as Name.
+When used without `--recursive` the Path will always be the same as Name.

If the directory is a bucket in a bucket-based backend, then
"IsBucket" will be set to true. This key won't be present unless it is
@@ -3451,7 +3592,7 @@ rclone lsjson remote:path [flags]

```
      --dirs-only               Show only directories in the listing
-  -M, --encrypted               Show the encrypted names
+      --encrypted               Show the encrypted names
      --files-only              Show only files in the listing
      --hash                    Include hashes in the output (may take longer)
      --hash-type stringArray   Show only this hash type (may be repeated)
@@ -3539,10 +3680,10 @@ at all, then 1 PiB is set as both the total and the free size.

To run rclone mount on Windows, you will need to
download and install [WinFsp](http://www.secfs.net/winfsp/).

-[WinFsp](https://github.com/billziss-gh/winfsp) is an open-source
+[WinFsp](https://github.com/winfsp/winfsp) is an open-source
Windows File System Proxy which makes it easy to write user space file
systems for Windows. It provides a FUSE emulation layer which rclone
-uses combination with [cgofuse](https://github.com/billziss-gh/cgofuse).
+uses in combination with [cgofuse](https://github.com/winfsp/cgofuse).
Both of these packages are by Bill Zissimopoulos who was very helpful
during the implementation of rclone mount for Windows.
@@ -3692,7 +3833,7 @@ from Microsoft's Sysinternals suite, which has option `-s` to start
processes as the SYSTEM account.
Another alternative is to run the mount command from a Windows Scheduled
Task, or a Windows Service, configured to run as the SYSTEM account. A
third alternative is to use the
-[WinFsp.Launcher infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture)).
+[WinFsp.Launcher infrastructure](https://github.com/winfsp/winfsp/wiki/WinFsp-Service-Architecture).
Note that when running rclone as another user, it will not use
the configuration file from your profile unless you tell it to
with the [`--config`](https://rclone.org/docs/#config-config-file) option.
@@ -3874,7 +4015,7 @@ about files and directories (but not the data) in memory.

Using the `--dir-cache-time` flag, you can control how long a
directory should be considered up to date and not refreshed from the
-backend. Changes made through the mount will appear immediately or
+backend. Changes made through the VFS will appear immediately or
invalidate the cache.

    --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
@@ -4031,6 +4172,38 @@ FAT/exFAT do not. Rclone will perform very badly if the cache directory
is on a filesystem which doesn't support sparse files and it will log
an ERROR message if one is detected.

+### Fingerprinting
+
+Various parts of the VFS use fingerprinting to see if a local file
+copy has changed relative to a remote file. Fingerprints are made
+from:
+
+- size
+- modification time
+- hash
+
+where available on an object.
+
+On some backends some of these attributes are slow to read (they take
+an extra API call per object, or extra work per object).
+
+For example `hash` is slow with the `local` and `sftp` backends as
+they have to read the entire file and hash it, and `modtime` is slow
+with the `s3`, `swift`, `ftp` and `qingstor` backends because they
+need to do an extra API call to fetch it.
+
+If you use the `--vfs-fast-fingerprint` flag then rclone will not
+include the slow operations in the fingerprint.
This makes the +fingerprinting less accurate but much faster and will improve the +opening time of cached files. + +If you are running a vfs cache over `local`, `s3` or `swift` backends +then using this flag is recommended. + +Note that if you change the value of this flag, the fingerprints of +the files in the cache may be invalidated and the files will need to +be downloaded again. + ## VFS Chunked Reading When rclone reads files from a remote it reads them in chunks. This @@ -4071,7 +4244,7 @@ read of the modification time takes a transaction. --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. - --read-only Mount read-only. + --read-only Only allow read-only access. Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or @@ -4083,7 +4256,7 @@ on disk cache file. When using VFS write caching (`--vfs-cache-mode` with value writes or full), the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from cache (the related global flag `--checkers` have no effect on mount). +modified files from the cache (the related global flag `--checkers` has no effect on the VFS). --transfers int Number of file transfers to run in parallel (default 4) @@ -4100,28 +4273,35 @@ It is not allowed for two files in the same directory to differ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. -The `--vfs-case-insensitive` mount flag controls how rclone handles these -two cases. If its value is "false", rclone passes file names to the mounted -file system as-is. If the flag is "true" (or appears without a value on +The `--vfs-case-insensitive` VFS flag controls how rclone handles these +two cases. 
If its value is "false", rclone passes file names to the remote
+as-is. If the flag is "true" (or appears without a value on
the command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case
-different than what is stored on mounted file system. If an argument refers
+different than what is stored on the remote. If an argument refers
to an existing file with exactly the same name, then the case of the
existing file on the disk will be used. However, if a file name with
exactly the same name is not found but a name differing only by case exists,
rclone will transparently fixup the name. This fixup happens only when an
existing file is requested. Case sensitivity of file names created anew by rclone is
-controlled by an underlying mounted file system.
+controlled by the underlying remote.

Note that case sensitivity of the operating system running rclone (the target)
-may differ from case sensitivity of a file system mounted by rclone (the source).
+may differ from case sensitivity of a file system presented by rclone (the source).
The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".

+## VFS Disk Options
+
+This flag allows you to manually set the statistics about the file system.
+It can be useful when those statistics cannot be read correctly automatically.
+
+    --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)
+
## Alternate report of used bytes

Some backends, most notably S3, do not report the amount of bytes used.
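The case "fixup" rule described above can be sketched as follows. This is an illustrative model only, not rclone's actual implementation, and the function name is hypothetical:

```python
def fixup_name(requested: str, existing: list[str]) -> str:
    """Sketch of the case "fixup" rule: an exact match wins, otherwise
    a name differing only by case is substituted (illustrative only)."""
    # An exact match always takes priority.
    if requested in existing:
        return requested
    # Otherwise fall back to an existing name differing only by case.
    for name in existing:
        if name.lower() == requested.lower():
            return name
    # No existing file matches: the requested case is used as-is,
    # so case sensitivity of new names is left to the remote.
    return requested

print(fixup_name("readme.TXT", ["README.txt", "other.txt"]))  # README.txt
```

Note how the exact-match check runs first, mirroring the rule that an argument naming an existing file with exactly the same case is never rewritten.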
@@ -4169,7 +4349,7 @@ rclone mount remote:path /path/to/mountpoint [flags] --noapplexattr Ignore all "com.apple.*" extended attributes (supported on OSX only) -o, --option stringArray Option for libfuse/WinFsp (repeat if required) --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) - --read-only Mount read-only + --read-only Only allow read-only access --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s) @@ -4177,6 +4357,8 @@ rclone mount remote:path /path/to/mountpoint [flags] --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) @@ -4206,7 +4388,7 @@ directory named dest:path. This can be used to rename files or upload single files to other than their existing name. If the source is a directory then it acts exactly -like the move command. +like the [move](https://rclone.org/commands/rclone_move/) command. So @@ -4267,7 +4449,8 @@ builds an in memory representation. 
rclone ncdu can be used during this scanning phase and you will see it
building up the directory structure as it goes along.

-Here are the keys - press '?' to toggle the help on and off
+You can interact with the user interface using key presses;
+press '?' to toggle the help on and off. The supported keys are:

@@ -4278,19 +4461,41 @@ Here are the keys - press '?' to toggle the help on and off
     ↑,↓ or k,j to Move
     →,l to enter
     ←,h to return
     c toggle counts
     g toggle graph
     a toggle average size in directory
     u toggle human-readable format
     n,s,C,A sort by name,size,count,average size
     d delete file/directory
+     v select file/directory
+     V enter visual select mode
+     D delete selected files/directories
     y copy current path to clipboard
     Y display current path
-     ^L refresh screen
+     ^L refresh screen (fix screen corruption)
     ? to toggle help on and off
-     q/ESC/c-C to quit
+     q/ESC/^c to quit
+
+Listed files/directories may be prefixed by a one-character flag,
+some of them combined with a description in brackets at end of line.
+These flags have the following meaning:
+
+    e means this is an empty directory, i.e. contains no files (but
+      may contain empty subdirectories)
+    ~ means this is a directory where some of the files (possibly in
+      subdirectories) have unknown size, and therefore the directory
+      size may be underestimated (and average size inaccurate, as it
+      is average of the files with known sizes).
+    . means an error occurred while reading a subdirectory, and
+      therefore the directory size may be underestimated (and average
+      size inaccurate)
+    ! means an error occurred while reading this directory

This an homage to the [ncdu tool](https://dev.yorhel.nl/ncdu) but for
rclone remotes. It is missing lots of features at the moment
but is useful as it stands.

-Note that it might take some time to delete big files/folders. The
+Note that it might take some time to delete big files/directories. The
UI won't respond in the meantime since the deletion is done
synchronously.
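The underestimation that the `~` flag warns about can be seen in a small sketch. Here `None` stands in for a file whose size is unknown; the total and average are computed over the known sizes only, so both can be misleading (illustrative only):

```python
# Sizes reported for files in one directory; None marks a file whose
# size is unknown (the situation the "~" flag above describes).
sizes = [1000, 2500, None, 400]

known = [s for s in sizes if s is not None]
total = sum(known)            # underestimates the true directory size
average = total / len(known)  # average over the known sizes only

print(total, average)  # 3900 1300.0
```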
+For a non-interactive listing of the remote, see the +[tree](https://rclone.org/commands/rclone_tree/) command. To just get the total size of +the remote you can also use the [size](https://rclone.org/commands/rclone_size/) command. + ``` rclone ncdu remote:path [flags] @@ -4329,7 +4534,7 @@ This command can also accept a password through STDIN instead of an argument by passing a hyphen as an argument. This will use the first line of STDIN as the password not including the trailing newline. -echo "secretpassword" | rclone obscure - + echo "secretpassword" | rclone obscure - If there is no data on STDIN to read, rclone obscure will default to obfuscating the hyphen itself. @@ -4362,26 +4567,26 @@ Run a command against a running rclone. -This runs a command against a running rclone. Use the --url flag to +This runs a command against a running rclone. Use the `--url` flag to specify an non default URL to connect on. This can be either a ":port" which is taken to mean "http://localhost:port" or a "host:port" which is taken to mean "http://host:port" -A username and password can be passed in with --user and --pass. +A username and password can be passed in with `--user` and `--pass`. -Note that --rc-addr, --rc-user, --rc-pass will be read also for --url, ---user, --pass. +Note that `--rc-addr`, `--rc-user`, `--rc-pass` will be read also for +`--url`, `--user`, `--pass`. Arguments should be passed in as parameter=value. The result will be returned as a JSON object by default. -The --json parameter can be used to pass in a JSON blob as an input +The `--json` parameter can be used to pass in a JSON blob as an input instead of key=value arguments. This is the only way of passing in more complicated values. -The -o/--opt option can be used to set a key "opt" with key, value -options in the form "-o key=value" or "-o key". It can be repeated as +The `-o`/`--opt` option can be used to set a key "opt" with key, value +options in the form `-o key=value` or `-o key`. 
It can be repeated as many times as required. This is useful for rc commands which take the "opt" parameter which by convention is a dictionary of strings. @@ -4392,7 +4597,7 @@ Will place this in the "opt" value {"key":"value", "key2","") -The -a/--arg option can be used to set strings in the "arg" value. It +The `-a`/`--arg` option can be used to set strings in the "arg" value. It can be repeated as many times as required. This is useful for rc commands which take the "arg" parameter which by convention is a list of strings. @@ -4403,13 +4608,13 @@ Will place this in the "arg" value ["value", "value2"] -Use --loopback to connect to the rclone instance running "rclone rc". +Use `--loopback` to connect to the rclone instance running `rclone rc`. This is very useful for testing commands without having to run an rclone rc server, e.g.: rclone rc --loopback operations/about fs=/ -Use "rclone rc" to see a list of all possible commands. +Use `rclone rc` to see a list of all possible commands. ``` rclone rc commands parameter [flags] @@ -4460,11 +4665,11 @@ must fit into RAM. The cutoff needs to be small enough to adhere the limits of your remote, please see there. Generally speaking, setting this cutoff too high will decrease your performance. -Use the |--size| flag to preallocate the file in advance at the remote end +Use the `--size` flag to preallocate the file in advance at the remote end and actually stream it, even if remote backend doesn't support streaming. -|--size| should be the exact size of the input stream in bytes. If the -size of the stream is different in length to the |--size| passed in +`--size` should be the exact size of the input stream in bytes. If the +size of the stream is different in length to the `--size` passed in then the transfer will likely fail. Note that the upload can also not be retried because the data is @@ -4535,15 +4740,16 @@ that only contain empty directories), that it finds under the path. 
The root path itself will also be removed if it is empty, unless
you supply the `--leave-root` flag.

-Use command `rmdir` to delete just the empty directory
-given by path, not recurse.
+Use command [rmdir](https://rclone.org/commands/rclone_rmdir/) to delete just the empty
+directory given by path, not recurse.

This is useful for tidying up remotes that rclone has left a lot of
-empty directories in. For example the `delete` command will
-delete files but leave the directory structure (unless used with
-option `--rmdirs`).
+empty directories in. For example the [delete](https://rclone.org/commands/rclone_delete/)
+command will delete files but leave the directory structure (unless
+used with option `--rmdirs`).

-To delete a path and any objects in it, use `purge` command.
+To delete a path and any objects in it, use the [purge](https://rclone.org/commands/rclone_purge/)
+command.

```
@@ -4646,8 +4852,8 @@ Serve a remote over a protocol.

## Synopsis

-rclone serve is used to serve a remote over a given protocol. This
-command requires the use of a subcommand to specify the protocol, e.g.
+Serve a remote over a given protocol. Requires the use of a
+subcommand to specify the protocol, e.g.

    rclone serve http remote:

@@ -4675,7 +4881,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li
* [rclone serve http](https://rclone.org/commands/rclone_serve_http/)	 - Serve the remote over HTTP.
* [rclone serve restic](https://rclone.org/commands/rclone_serve_restic/)	 - Serve the remote for restic's REST API.
* [rclone serve sftp](https://rclone.org/commands/rclone_serve_sftp/)	 - Serve the remote over SFTP.
-* [rclone serve webdav](https://rclone.org/commands/rclone_serve_webdav/)	 - Serve remote:path over webdav.
+* [rclone serve webdav](https://rclone.org/commands/rclone_serve_webdav/)	 - Serve remote:path over WebDAV.
# rclone serve dlna @@ -4683,14 +4889,16 @@ Serve remote:path over DLNA ## Synopsis -rclone serve dlna is a DLNA media server for media stored in an rclone remote. Many -devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN -and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast -packets (SSDP) and will thus only work on LANs. +Run a DLNA media server for media stored in an rclone remote. Many +devices, such as the Xbox and PlayStation, can automatically discover +this server in the LAN and play audio/video from it. VLC is also +supported. Service discovery uses UDP multicast packets (SSDP) and +will thus only work on LANs. -Rclone will list all files present in the remote, without filtering based on media formats or -file extensions. Additionally, there is no media transcoding support. This means that some -players might show files that they are not able to play back correctly. +Rclone will list all files present in the remote, without filtering +based on media formats or file extensions. Additionally, there is no +media transcoding support. This means that some players might show +files that they are not able to play back correctly. ## Server options @@ -4723,7 +4931,7 @@ about files and directories (but not the data) in memory. Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the -backend. Changes made through the mount will appear immediately or +backend. Changes made through the VFS will appear immediately or invalidate the cache. --dir-cache-time duration Time to cache directory entries for (default 5m0s) @@ -4880,6 +5088,38 @@ FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected. 
+### Fingerprinting
+
+Various parts of the VFS use fingerprinting to see if a local file
+copy has changed relative to a remote file. Fingerprints are made
+from:
+
+- size
+- modification time
+- hash
+
+where available on an object.
+
+On some backends some of these attributes are slow to read (they take
+an extra API call per object, or extra work per object).
+
+For example `hash` is slow with the `local` and `sftp` backends as
+they have to read the entire file and hash it, and `modtime` is slow
+with the `s3`, `swift`, `ftp` and `qingstor` backends because they
+need to do an extra API call to fetch it.
+
+If you use the `--vfs-fast-fingerprint` flag then rclone will not
+include the slow operations in the fingerprint. This makes the
+fingerprinting less accurate but much faster and will improve the
+opening time of cached files.
+
+If you are running a vfs cache over `local`, `s3` or `swift` backends
+then using this flag is recommended.
+
+Note that if you change the value of this flag, the fingerprints of
+the files in the cache may be invalidated and the files will need to
+be downloaded again.
+
## VFS Chunked Reading

When rclone reads files from a remote it reads them in chunks. This
@@ -4920,7 +5160,7 @@ read of the modification time takes a transaction.
    --no-checksum     Don't compare checksums on up/download.
    --no-modtime      Don't read/write the modification time (can speed things up).
    --no-seek         Don't allow seeking in files.
-   --read-only       Mount read-only.
+   --read-only       Only allow read-only access.

Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
@@ -4932,7 +5172,7 @@ on disk cache file.

When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
-modified files from cache (the related global flag `--checkers` have no effect on mount).
+modified files from the cache (the related global flag `--checkers` has no effect on the VFS). --transfers int Number of file transfers to run in parallel (default 4) @@ -4949,28 +5189,35 @@ It is not allowed for two files in the same directory to differ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. -The `--vfs-case-insensitive` mount flag controls how rclone handles these -two cases. If its value is "false", rclone passes file names to the mounted -file system as-is. If the flag is "true" (or appears without a value on +The `--vfs-case-insensitive` VFS flag controls how rclone handles these +two cases. If its value is "false", rclone passes file names to the remote +as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. If an argument refers +different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is -controlled by an underlying mounted file system. +controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the target) -may differ from case sensitivity of a file system mounted by rclone (the source). +may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. 
If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".

+## VFS Disk Options
+
+This flag allows you to manually set the statistics about the file system.
+It can be useful when those statistics cannot be read correctly automatically.
+
+    --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)
+
## Alternate report of used bytes

Some backends, most notably S3, do not report the amount of bytes used.
@@ -5004,7 +5251,7 @@ rclone serve dlna remote:path [flags]
      --no-modtime                             Don't read/write the modification time (can speed things up)
      --no-seek                                Don't allow seeking in files
      --poll-interval duration                 Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
-     --read-only                              Mount read-only
+     --read-only                              Only allow read-only access
      --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
      --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
      --vfs-cache-max-age duration             Max age of objects in the cache (default 1h0m0s)
@@ -5012,6 +5259,8 @@ rclone serve dlna remote:path [flags]
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
+     --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
+     --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
      --vfs-read-chunk-size-limit
SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -5091,7 +5340,7 @@ about files and directories (but not the data) in memory.

Using the `--dir-cache-time` flag, you can control how long a
directory should be considered up to date and not refreshed from the
-backend. Changes made through the mount will appear immediately or
+backend. Changes made through the VFS will appear immediately or
invalidate the cache.

    --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
@@ -5248,6 +5497,38 @@ FAT/exFAT do not. Rclone will perform very badly if the cache directory
is on a filesystem which doesn't support sparse files and it will log
an ERROR message if one is detected.

+### Fingerprinting
+
+Various parts of the VFS use fingerprinting to see if a local file
+copy has changed relative to a remote file. Fingerprints are made
+from:
+
+- size
+- modification time
+- hash
+
+where available on an object.
+
+On some backends some of these attributes are slow to read (they take
+an extra API call per object, or extra work per object).
+
+For example `hash` is slow with the `local` and `sftp` backends as
+they have to read the entire file and hash it, and `modtime` is slow
+with the `s3`, `swift`, `ftp` and `qingstor` backends because they
+need to do an extra API call to fetch it.
+
+If you use the `--vfs-fast-fingerprint` flag then rclone will not
+include the slow operations in the fingerprint. This makes the
+fingerprinting less accurate but much faster and will improve the
+opening time of cached files.
+
+If you are running a vfs cache over `local`, `s3` or `swift` backends
+then using this flag is recommended.
+
+Note that if you change the value of this flag, the fingerprints of
+the files in the cache may be invalidated and the files will need to
+be downloaded again.
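The fingerprinting idea described above can be sketched for a local file like this. It is an illustrative model, not rclone's actual fingerprint format, and `fingerprint` is a hypothetical helper; on the local backend the hash is the slow attribute, so the fast variant leaves it out:

```python
import hashlib
import os

def fingerprint(path: str, fast: bool = False) -> str:
    """Sketch of a size/modtime/hash fingerprint for a local file
    (illustrative only; not rclone's actual format)."""
    st = os.stat(path)
    parts = [str(st.st_size), str(st.st_mtime)]
    if not fast:
        # Hashing is the slow operation locally: the whole file must be
        # read, which is what a fast fingerprint avoids.
        with open(path, "rb") as f:
            parts.append(hashlib.md5(f.read()).hexdigest())
    return ",".join(parts)
```

A fast fingerprint changes less often than a full one, which is why switching the flag can invalidate fingerprints already stored in the cache.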
+ ## VFS Chunked Reading When rclone reads files from a remote it reads them in chunks. This @@ -5288,7 +5569,7 @@ read of the modification time takes a transaction. --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. - --read-only Mount read-only. + --read-only Only allow read-only access. Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or @@ -5300,7 +5581,7 @@ on disk cache file. When using VFS write caching (`--vfs-cache-mode` with value writes or full), the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from cache (the related global flag `--checkers` have no effect on mount). +modified files from the cache (the related global flag `--checkers` has no effect on the VFS). --transfers int Number of file transfers to run in parallel (default 4) @@ -5317,28 +5598,35 @@ It is not allowed for two files in the same directory to differ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. -The `--vfs-case-insensitive` mount flag controls how rclone handles these -two cases. If its value is "false", rclone passes file names to the mounted -file system as-is. If the flag is "true" (or appears without a value on +The `--vfs-case-insensitive` VFS flag controls how rclone handles these +two cases. If its value is "false", rclone passes file names to the remote +as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. If an argument refers +different than what is stored on the remote. 
If an argument refers
to an existing file with exactly the same name, then the case of the
existing file on the disk will be used. However, if a file name with
exactly the same name is not found but a name differing only by case exists,
rclone will transparently fixup the name. This fixup happens only when an
existing file is requested. Case sensitivity of file names created anew by rclone is
-controlled by an underlying mounted file system.
+controlled by the underlying remote.

Note that case sensitivity of the operating system running rclone (the target)
-may differ from case sensitivity of a file system mounted by rclone (the source).
+may differ from case sensitivity of a file system presented by rclone (the source).
The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".

+## VFS Disk Options
+
+This flag allows you to manually set the statistics about the file system.
+It can be useful when those statistics cannot be read correctly automatically.
+
+    --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)
+
## Alternate report of used bytes

Some backends, most notably S3, do not report the amount of bytes used.
@@ -5389,7 +5677,7 @@ rclone serve docker [flags]
      --noapplexattr                           Ignore all "com.apple.*" extended attributes (supported on OSX only)
  -o, --option stringArray                     Option for libfuse/WinFsp (repeat if required)
      --poll-interval duration                 Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
-     --read-only                              Mount read-only
+     --read-only                              Only allow read-only access
      --socket-addr string                     Address or absolute path (default: /run/docker/plugins/rclone.sock)
      --socket-gid int                         GID for unix socket (default: current process GID) (default 1000)
      --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
@@ -5399,6 +5687,8 @@ rclone serve docker [flags]
      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects (default 1m0s)
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
+     --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
+     --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -5423,9 +5713,9 @@ Serve remote:path over FTP.

## Synopsis

-rclone serve ftp implements a basic ftp server to serve the
-remote over FTP protocol. This can be viewed with a ftp client
-or you can make a remote of type ftp to read and write it.
+Run a basic FTP server to serve a remote over the FTP protocol.
+This can be viewed with an FTP client or you can make a remote of
+type FTP to read and write it.
## Server options @@ -5461,7 +5751,7 @@ about files and directories (but not the data) in memory. Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the -backend. Changes made through the mount will appear immediately or +backend. Changes made through the VFS will appear immediately or invalidate the cache. --dir-cache-time duration Time to cache directory entries for (default 5m0s) @@ -5618,6 +5908,38 @@ FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected. +### Fingerprinting + +Various parts of the VFS use fingerprinting to see if a local file +copy has changed relative to a remote file. Fingerprints are made +from: + +- size +- modification time +- hash + +where available on an object. + +On some backends some of these attributes are slow to read (they take +an extra API call per object, or extra work per object). + +For example `hash` is slow with the `local` and `sftp` backends as +they have to read the entire file and hash it, and `modtime` is slow +with the `s3`, `swift`, `ftp` and `qingstor` backends because they +need to do an extra API call to fetch it. + +If you use the `--vfs-fast-fingerprint` flag then rclone will not +include the slow operations in the fingerprint. This makes the +fingerprinting less accurate but much faster and will improve the +opening time of cached files. + +If you are running a vfs cache over `local`, `s3` or `swift` backends +then using this flag is recommended. + +Note that if you change the value of this flag, the fingerprints of +the files in the cache may be invalidated and the files will need to +be downloaded again. + ## VFS Chunked Reading When rclone reads files from a remote it reads them in chunks. This @@ -5658,7 +5980,7 @@ read of the modification time takes a transaction. 
--no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. - --read-only Mount read-only. + --read-only Only allow read-only access. Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or @@ -5670,7 +5992,7 @@ on disk cache file. When using VFS write caching (`--vfs-cache-mode` with value writes or full), the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from cache (the related global flag `--checkers` have no effect on mount). +modified files from the cache (the related global flag `--checkers` has no effect on the VFS). --transfers int Number of file transfers to run in parallel (default 4) @@ -5687,28 +6009,35 @@ It is not allowed for two files in the same directory to differ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. -The `--vfs-case-insensitive` mount flag controls how rclone handles these -two cases. If its value is "false", rclone passes file names to the mounted -file system as-is. If the flag is "true" (or appears without a value on +The `--vfs-case-insensitive` VFS flag controls how rclone handles these +two cases. If its value is "false", rclone passes file names to the remote +as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. If an argument refers +different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. 
However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is -controlled by an underlying mounted file system. +controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the target) -may differ from case sensitivity of a file system mounted by rclone (the source). +may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +## VFS Disk Options + +This flag allows you to manually set the statistics about the filing system. +It can be useful when those statistics cannot be read correctly automatically. + + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + ## Alternate report of used bytes Some backends, most notably S3, do not report the amount of bytes used. 
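The fingerprinting behaviour added in this hunk can be illustrated with a small shell sketch. This is not rclone's internal code, only an analogy under stated assumptions: a full fingerprint combines size, modtime and hash where available, while a fast fingerprint keeps only the cheap field.

```shell
# Illustrative only: build "full" and "fast" fingerprints for a local file.
f=$(mktemp)
printf 'hello' > "$f"

# Full fingerprint: size, modtime and hash (hash is the slow part on
# local/sftp; modtime is the slow part on s3/swift/ftp, as the docs note).
full="$(stat -c '%s,%Y' "$f"),$(md5sum "$f" | cut -d' ' -f1)"

# Fast fingerprint: drop the slow fields, keep size only.
fast="$(stat -c '%s' "$f")"

echo "full: $full"
echo "fast: $fast"
rm -f "$f"
```

A change to the file contents alters the full fingerprint's hash field immediately, while the fast fingerprint only changes if the size changes, which is exactly the accuracy/speed trade-off the flag describes.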
@@ -5827,7 +6156,7 @@ rclone serve ftp remote:path [flags] --passive-port string Passive port range to use (default "30000-32000") --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) --public-ip string Public IP address to advertise for passive connections - --read-only Mount read-only + --read-only Only allow read-only access --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --user string User name for authentication (default "anonymous") @@ -5836,6 +6165,8 @@ rclone serve ftp remote:path [flags] --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) @@ -5857,59 +6188,59 @@ Serve the remote over HTTP. ## Synopsis -rclone serve http implements a basic web server to serve the remote -over HTTP. This can be viewed in a web browser or you can make a -remote of type http read from it. +Run a basic web server to serve a remote over HTTP. +This can be viewed in a web browser or you can make a remote of type +http read from it. -You can use the filter flags (e.g. 
--include, --exclude) to control what +You can use the filter flags (e.g. `--include`, `--exclude`) to control what is served. -The server will log errors. Use -v to see access logs. +The server will log errors. Use `-v` to see access logs. ---bwlimit will be respected for file transfers. Use --stats to +`--bwlimit` will be respected for file transfers. Use `--stats` to control the stats printing. ## Server options -Use --addr to specify which IP address and port the server should -listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all +Use `--addr` to specify which IP address and port the server should +listen on, eg `--addr 1.2.3.4:8000` or `--addr :8080` to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port. -If you set --addr to listen on a public or LAN accessible IP address +If you set `--addr` to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info. ---server-read-timeout and --server-write-timeout can be used to +`--server-read-timeout` and `--server-write-timeout` can be used to control the timeouts on the server. Note that this is the total time for a transfer. ---max-header-bytes controls the maximum number of bytes the server will +`--max-header-bytes` controls the maximum number of bytes the server will accept in the HTTP header. ---baseurl controls the URL prefix that rclone serves from. By default -rclone will serve from the root. If you used --baseurl "/rclone" then +`--baseurl` controls the URL prefix that rclone serves from. By default +rclone will serve from the root. If you used `--baseurl "/rclone"` then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. 
Rclone automatically -inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", ---baseurl "/rclone" and --baseurl "/rclone/" are all treated +inserts leading and trailing "/" on `--baseurl`, so `--baseurl "rclone"`, +`--baseurl "/rclone"` and `--baseurl "/rclone/"` are all treated identically. ### SSL/TLS By default this will serve over http. If you want you can serve over -https. You will need to supply the --cert and --key flags. If you -wish to do client side certificate validation then you will need to -supply --client-ca also. +https. You will need to supply the `--cert` and `--key` flags. +If you wish to do client side certificate validation then you will need to +supply `--client-ca` also. ---cert should be a either a PEM encoded certificate or a concatenation -of that with the CA certificate. --key should be the PEM encoded -private key and --client-ca should be the PEM encoded client +`--cert` should be either a PEM encoded certificate or a concatenation +of that with the CA certificate. `--key` should be the PEM encoded +private key and `--client-ca` should be the PEM encoded client certificate authority certificate. ### Template ---template allows a user to specify a custom markup template for http -and webdav serve functions. The server exports the following markup +`--template` allows a user to specify a custom markup template for HTTP +and WebDAV serve functions. The server exports the following markup to be used within the template to server pages: | Parameter | Description | @@ -5936,9 +6267,9 @@ to be used within the template to server pages: By default this will serve files without needing a login. You can either use an htpasswd file which can take lots of users, or -set a single username and password with the --user and --pass flags. +set a single username and password with the `--user` and `--pass` flags. -Use --htpasswd /path/to/htpasswd to provide an htpasswd file. 
This is +Use `--htpasswd /path/to/htpasswd` to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended. @@ -5950,9 +6281,9 @@ To create an htpasswd file: The password file can be updated while rclone is running. -Use --realm to set the authentication realm. +Use `--realm` to set the authentication realm. -Use --salt to change the password hashing salt from the default. +Use `--salt` to change the password hashing salt from the default. ## VFS - Virtual File System @@ -5972,7 +6303,7 @@ about files and directories (but not the data) in memory. Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the -backend. Changes made through the mount will appear immediately or +backend. Changes made through the VFS will appear immediately or invalidate the cache. --dir-cache-time duration Time to cache directory entries for (default 5m0s) @@ -6129,6 +6460,38 @@ FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected. +### Fingerprinting + +Various parts of the VFS use fingerprinting to see if a local file +copy has changed relative to a remote file. Fingerprints are made +from: + +- size +- modification time +- hash + +where available on an object. + +On some backends some of these attributes are slow to read (they take +an extra API call per object, or extra work per object). + +For example `hash` is slow with the `local` and `sftp` backends as +they have to read the entire file and hash it, and `modtime` is slow +with the `s3`, `swift`, `ftp` and `qingstor` backends because they +need to do an extra API call to fetch it. + +If you use the `--vfs-fast-fingerprint` flag then rclone will not +include the slow operations in the fingerprint. 
This makes the +fingerprinting less accurate but much faster and will improve the +opening time of cached files. + +If you are running a vfs cache over `local`, `s3` or `swift` backends +then using this flag is recommended. + +Note that if you change the value of this flag, the fingerprints of +the files in the cache may be invalidated and the files will need to +be downloaded again. + ## VFS Chunked Reading When rclone reads files from a remote it reads them in chunks. This @@ -6169,7 +6532,7 @@ read of the modification time takes a transaction. --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. - --read-only Mount read-only. + --read-only Only allow read-only access. Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or @@ -6181,7 +6544,7 @@ on disk cache file. When using VFS write caching (`--vfs-cache-mode` with value writes or full), the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from cache (the related global flag `--checkers` have no effect on mount). +modified files from the cache (the related global flag `--checkers` has no effect on the VFS). --transfers int Number of file transfers to run in parallel (default 4) @@ -6198,28 +6561,35 @@ It is not allowed for two files in the same directory to differ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. -The `--vfs-case-insensitive` mount flag controls how rclone handles these -two cases. If its value is "false", rclone passes file names to the mounted -file system as-is. If the flag is "true" (or appears without a value on +The `--vfs-case-insensitive` VFS flag controls how rclone handles these +two cases. 
If its value is "false", rclone passes file names to the remote +as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. If an argument refers +different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is -controlled by an underlying mounted file system. +controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the target) -may differ from case sensitivity of a file system mounted by rclone (the source). +may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +## VFS Disk Options + +This flag allows you to manually set the statistics about the filing system. +It can be useful when those statistics cannot be read correctly automatically. + + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + ## Alternate report of used bytes Some backends, most notably S3, do not report the amount of bytes used. 
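The `--baseurl` normalization described earlier in this serve section ("rclone", "/rclone" and "/rclone/" all treated identically) amounts to forcing exactly one leading and one trailing slash. A small illustrative shell function (not rclone's code) makes the rule concrete:

```shell
# Normalize a base URL the way the docs describe: strip any leading and
# trailing "/" then re-add exactly one of each.
normalize_baseurl() {
  b="${1#/}"      # drop a leading slash if present
  b="${b%/}"      # drop a trailing slash if present
  printf '/%s/\n' "$b"
}

normalize_baseurl rclone     # -> /rclone/
normalize_baseurl /rclone    # -> /rclone/
normalize_baseurl /rclone/   # -> /rclone/
```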
@@ -6258,7 +6628,7 @@ rclone serve http remote:path [flags] --no-seek Don't allow seeking in files --pass string Password for authentication --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) - --read-only Mount read-only + --read-only Only allow read-only access --realm string Realm for authentication --salt string Password hashing salt (default "dlPL2MqE") --server-read-timeout duration Timeout for server reading data (default 1h0m0s) @@ -6272,6 +6642,8 @@ rclone serve http remote:path [flags] --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) @@ -6293,8 +6665,8 @@ Serve the remote for restic's REST API. ## Synopsis -rclone serve restic implements restic's REST backend API -over HTTP. This allows restic to use rclone as a data storage +Run a basic web server to serve a remote over restic's REST backend +API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly. [Restic](https://restic.net/) is a command-line program for doing @@ -6302,8 +6674,8 @@ backups. The server will log errors. Use -v to see access logs. ---bwlimit will be respected for file transfers. 
Use --stats to -control the stats printing. +`--bwlimit` will be respected for file transfers. +Use `--stats` to control the stats printing. ## Setting up rclone for use by restic ### @@ -6322,11 +6694,11 @@ Where you can replace "backup" in the above by whatever path in the remote you wish to use. By default this will serve on "localhost:8080" you can change this -with use of the "--addr" flag. +with use of the `--addr` flag. You might wish to start this server on boot. -Adding --cache-objects=false will cause rclone to stop caching objects +Adding `--cache-objects=false` will cause rclone to stop caching objects returned from the List call. Caching is normally desirable as it speeds up downloading objects, saves transactions and uses very little memory. @@ -6372,36 +6744,36 @@ these **must** end with /. Eg ### Private repositories #### -The "--private-repos" flag can be used to limit users to repositories starting +The `--private-repos` flag can be used to limit users to repositories starting with a path of `//`. ## Server options -Use --addr to specify which IP address and port the server should -listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all -IPs. By default it only listens on localhost. You can use port +Use `--addr` to specify which IP address and port the server should +listen on, e.g. `--addr 1.2.3.4:8000` or `--addr :8080` to +listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port. -If you set --addr to listen on a public or LAN accessible IP address +If you set `--addr` to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info. ---server-read-timeout and --server-write-timeout can be used to +`--server-read-timeout` and `--server-write-timeout` can be used to control the timeouts on the server. Note that this is the total time for a transfer. 
---max-header-bytes controls the maximum number of bytes the server will +`--max-header-bytes` controls the maximum number of bytes the server will accept in the HTTP header. ---baseurl controls the URL prefix that rclone serves from. By default -rclone will serve from the root. If you used --baseurl "/rclone" then +`--baseurl` controls the URL prefix that rclone serves from. By default +rclone will serve from the root. If you used `--baseurl "/rclone"` then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically -inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", ---baseurl "/rclone" and --baseurl "/rclone/" are all treated +inserts leading and trailing "/" on `--baseurl`, so `--baseurl "rclone"`, +`--baseurl "/rclone"` and `--baseurl "/rclone/"` are all treated identically. ---template allows a user to specify a custom markup template for http -and webdav serve functions. The server exports the following markup +`--template` allows a user to specify a custom markup template for HTTP +and WebDAV serve functions. The server exports the following markup to be used within the template to server pages: | Parameter | Description | @@ -6428,9 +6800,9 @@ to be used within the template to server pages: By default this will serve files without needing a login. You can either use an htpasswd file which can take lots of users, or -set a single username and password with the --user and --pass flags. +set a single username and password with the `--user` and `--pass` flags. -Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is +Use `--htpasswd /path/to/htpasswd` to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended. @@ -6442,18 +6814,18 @@ To create an htpasswd file: The password file can be updated while rclone is running. -Use --realm to set the authentication realm. 
+Use `--realm` to set the authentication realm. ### SSL/TLS -By default this will serve over http. If you want you can serve over -https. You will need to supply the --cert and --key flags. If you -wish to do client side certificate validation then you will need to -supply --client-ca also. +By default this will serve over HTTP. If you want you can serve over +HTTPS. You will need to supply the `--cert` and `--key` flags. +If you wish to do client side certificate validation then you will need to +supply `--client-ca` also. ---cert should be either a PEM encoded certificate or a concatenation -of that with the CA certificate. --key should be the PEM encoded -private key and --client-ca should be the PEM encoded client +`--cert` should be either a PEM encoded certificate or a concatenation +of that with the CA certificate. `--key` should be the PEM encoded +private key and `--client-ca` should be the PEM encoded client certificate authority certificate. @@ -6496,21 +6868,21 @@ Serve the remote over SFTP. ## Synopsis -rclone serve sftp implements an SFTP server to serve the remote -over SFTP. This can be used with an SFTP client or you can make a -remote of type sftp to use with it. +Run an SFTP server to serve a remote over SFTP. This can be used +with an SFTP client or you can make a remote of type sftp to use with it. -You can use the filter flags (e.g. --include, --exclude) to control what +You can use the filter flags (e.g. `--include`, `--exclude`) to control what is served. -The server will log errors. Use -v to see access logs. +The server will log errors. Use `-v` to see access logs. ---bwlimit will be respected for file transfers. Use --stats to -control the stats printing. +`--bwlimit` will be respected for file transfers. +Use `--stats` to control the stats printing. 
-You must provide some means of authentication, either with --user/--pass, -an authorized keys file (specify location with --authorized-keys - the -default is the same as ssh), an --auth-proxy, or set the --no-auth flag for no +You must provide some means of authentication, either with +`--user`/`--pass`, an authorized keys file (specify location with +`--authorized-keys` - the default is the same as ssh), an +`--auth-proxy`, or set the `--no-auth` flag for no authentication when logging in. Note that this also implements a small number of shell commands so @@ -6518,30 +6890,30 @@ that it can provide md5sum/sha1sum/df information for the rclone sftp backend. This means that is can support SHA1SUMs, MD5SUMs and the about command when paired with the rclone sftp backend. -If you don't supply a host --key then rclone will generate rsa, ecdsa +If you don't supply a host `--key` then rclone will generate rsa, ecdsa and ed25519 variants, and cache them for later use in rclone's cache -directory (see "rclone help flags cache-dir") in the "serve-sftp" +directory (see `rclone help flags cache-dir`) in the "serve-sftp" directory. By default the server binds to localhost:2022 - if you want it to be -reachable externally then supply "--addr :2022" for example. +reachable externally then supply `--addr :2022` for example. -Note that the default of "--vfs-cache-mode off" is fine for the rclone +Note that the default of `--vfs-cache-mode off` is fine for the rclone sftp backend, but it may not be with other SFTP clients. -If --stdio is specified, rclone will serve SFTP over stdio, which can +If `--stdio` is specified, rclone will serve SFTP over stdio, which can be used with sshd via ~/.ssh/authorized_keys, for example: restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ... -On the client you need to set "--transfers 1" when using --stdio. +On the client you need to set `--transfers 1` when using `--stdio`. 
Otherwise multiple instances of the rclone server are started by OpenSSH which can lead to "corrupted on transfer" errors. This is the case because the client chooses indiscriminately which server to send commands to while the servers all have different views of the state of the filing system. The "restrict" in authorized_keys prevents SHA1SUMs and MD5SUMs from beeing -used. Omitting "restrict" and using --sftp-path-override to enable +used. Omitting "restrict" and using `--sftp-path-override` to enable checksumming is possible but less secure and you could use the SFTP server provided by OpenSSH in this case. @@ -6564,7 +6936,7 @@ about files and directories (but not the data) in memory. Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the -backend. Changes made through the mount will appear immediately or +backend. Changes made through the VFS will appear immediately or invalidate the cache. --dir-cache-time duration Time to cache directory entries for (default 5m0s) @@ -6721,6 +7093,38 @@ FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected. +### Fingerprinting + +Various parts of the VFS use fingerprinting to see if a local file +copy has changed relative to a remote file. Fingerprints are made +from: + +- size +- modification time +- hash + +where available on an object. + +On some backends some of these attributes are slow to read (they take +an extra API call per object, or extra work per object). + +For example `hash` is slow with the `local` and `sftp` backends as +they have to read the entire file and hash it, and `modtime` is slow +with the `s3`, `swift`, `ftp` and `qingstor` backends because they +need to do an extra API call to fetch it. + +If you use the `--vfs-fast-fingerprint` flag then rclone will not +include the slow operations in the fingerprint. 
This makes the +fingerprinting less accurate but much faster and will improve the +opening time of cached files. + +If you are running a vfs cache over `local`, `s3` or `swift` backends +then using this flag is recommended. + +Note that if you change the value of this flag, the fingerprints of +the files in the cache may be invalidated and the files will need to +be downloaded again. + ## VFS Chunked Reading When rclone reads files from a remote it reads them in chunks. This @@ -6761,7 +7165,7 @@ read of the modification time takes a transaction. --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. - --read-only Mount read-only. + --read-only Only allow read-only access. Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or @@ -6773,7 +7177,7 @@ on disk cache file. When using VFS write caching (`--vfs-cache-mode` with value writes or full), the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from cache (the related global flag `--checkers` have no effect on mount). +modified files from the cache (the related global flag `--checkers` has no effect on the VFS). --transfers int Number of file transfers to run in parallel (default 4) @@ -6790,28 +7194,35 @@ It is not allowed for two files in the same directory to differ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. -The `--vfs-case-insensitive` mount flag controls how rclone handles these -two cases. If its value is "false", rclone passes file names to the mounted -file system as-is. If the flag is "true" (or appears without a value on +The `--vfs-case-insensitive` VFS flag controls how rclone handles these +two cases. 
If its value is "false", rclone passes file names to the remote +as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. If an argument refers +different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is -controlled by an underlying mounted file system. +controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the target) -may differ from case sensitivity of a file system mounted by rclone (the source). +may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +## VFS Disk Options + +This flag allows you to manually set the statistics about the filing system. +It can be useful when those statistics cannot be read correctly automatically. + + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + ## Alternate report of used bytes Some backends, most notably S3, do not report the amount of bytes used. 
@@ -6929,7 +7340,7 @@ rclone serve sftp remote:path [flags] --no-seek Don't allow seeking in files --pass string Password for authentication --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) - --read-only Mount read-only + --read-only Only allow read-only access --stdio Run an sftp server on run stdin/stdout --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) @@ -6939,6 +7350,8 @@ rclone serve sftp remote:path [flags] --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) @@ -6956,17 +7369,15 @@ See the [global flags page](https://rclone.org/flags/) for global options not li # rclone serve webdav -Serve remote:path over webdav. +Serve remote:path over WebDAV. ## Synopsis +Run a basic WebDAV server to serve a remote over HTTP via the +WebDAV protocol. This can be viewed with a WebDAV client, through a web +browser, or you can make a remote of type WebDAV to read and write it. -rclone serve webdav implements a basic webdav server to serve the -remote over HTTP via the webdav protocol. 
This can be viewed with a -webdav client, through a web browser, or you can make a remote of -type webdav to read and write it. - -## Webdav options +## WebDAV options ### --etag-hash @@ -6975,38 +7386,37 @@ based on the ModTime and Size of the object. If this flag is set to "auto" then rclone will choose the first supported hash on the backend or you can use a named hash such as -"MD5" or "SHA-1". - -Use "rclone hashsum" to see the full list. +"MD5" or "SHA-1". Use the [hashsum](https://rclone.org/commands/rclone_hashsum/) command +to see the full list. ## Server options -Use --addr to specify which IP address and port the server should -listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all -IPs. By default it only listens on localhost. You can use port +Use `--addr` to specify which IP address and port the server should +listen on, e.g. `--addr 1.2.3.4:8000` or `--addr :8080` to +listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port. -If you set --addr to listen on a public or LAN accessible IP address +If you set `--addr` to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info. ---server-read-timeout and --server-write-timeout can be used to +`--server-read-timeout` and `--server-write-timeout` can be used to control the timeouts on the server. Note that this is the total time for a transfer. ---max-header-bytes controls the maximum number of bytes the server will +`--max-header-bytes` controls the maximum number of bytes the server will accept in the HTTP header. ---baseurl controls the URL prefix that rclone serves from. By default -rclone will serve from the root. If you used --baseurl "/rclone" then +`--baseurl` controls the URL prefix that rclone serves from. By default +rclone will serve from the root. If you used `--baseurl "/rclone"` then rclone would serve from a URL starting with "/rclone/". 
This is useful if you wish to proxy rclone serve. Rclone automatically -inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", ---baseurl "/rclone" and --baseurl "/rclone/" are all treated +inserts leading and trailing "/" on `--baseurl`, so `--baseurl "rclone"`, +`--baseurl "/rclone"` and `--baseurl "/rclone/"` are all treated identically. ---template allows a user to specify a custom markup template for http -and webdav serve functions. The server exports the following markup +`--template` allows a user to specify a custom markup template for HTTP +and WebDAV serve functions. The server exports the following markup to be used within the template to server pages: | Parameter | Description | @@ -7033,9 +7443,9 @@ to be used within the template to server pages: By default this will serve files without needing a login. You can either use an htpasswd file which can take lots of users, or -set a single username and password with the --user and --pass flags. +set a single username and password with the `--user` and `--pass` flags. -Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is +Use `--htpasswd /path/to/htpasswd` to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended. @@ -7047,18 +7457,18 @@ To create an htpasswd file: The password file can be updated while rclone is running. -Use --realm to set the authentication realm. +Use `--realm` to set the authentication realm. ### SSL/TLS -By default this will serve over http. If you want you can serve over -https. You will need to supply the --cert and --key flags. If you -wish to do client side certificate validation then you will need to -supply --client-ca also. +By default this will serve over HTTP. If you want you can serve over +HTTPS. You will need to supply the `--cert` and `--key` flags. +If you wish to do client side certificate validation then you will need to +supply `--client-ca` also. 
---cert should be either a PEM encoded certificate or a concatenation -of that with the CA certificate. --key should be the PEM encoded -private key and --client-ca should be the PEM encoded client +`--cert` should be either a PEM encoded certificate or a concatenation +of that with the CA certificate. `--key` should be the PEM encoded +private key and `--client-ca` should be the PEM encoded client certificate authority certificate. ## VFS - Virtual File System @@ -7079,7 +7489,7 @@ about files and directories (but not the data) in memory. Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the -backend. Changes made through the mount will appear immediately or +backend. Changes made through the VFS will appear immediately or invalidate the cache. --dir-cache-time duration Time to cache directory entries for (default 5m0s) @@ -7236,6 +7646,38 @@ FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected. +### Fingerprinting + +Various parts of the VFS use fingerprinting to see if a local file +copy has changed relative to a remote file. Fingerprints are made +from: + +- size +- modification time +- hash + +where available on an object. + +On some backends some of these attributes are slow to read (they take +an extra API call per object, or extra work per object). + +For example `hash` is slow with the `local` and `sftp` backends as +they have to read the entire file and hash it, and `modtime` is slow +with the `s3`, `swift`, `ftp` and `qingstor` backends because they +need to do an extra API call to fetch it. + +If you use the `--vfs-fast-fingerprint` flag then rclone will not +include the slow operations in the fingerprint. This makes the +fingerprinting less accurate but much faster and will improve the +opening time of cached files.
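As a minimal sketch of the idea (illustrative Python, not rclone's actual Go implementation; the `slow_modtime` and `slow_hash` attributes here are hypothetical stand-ins for per-backend capabilities):

```python
# Illustrative sketch of VFS fingerprinting (not rclone's actual code).
# A fingerprint combines size, modtime and hash where available; with
# fast fingerprinting the slow-to-read attributes are simply left out.

def fingerprint(obj, fast=False):
    """Build a fingerprint string from an object's attributes.

    obj is a dict with 'size', and optionally 'modtime' and 'hash'.
    'slow_modtime'/'slow_hash' mark attributes that need an extra
    API call (or extra work) on this backend.
    """
    parts = [str(obj["size"])]
    if "modtime" in obj and not (fast and obj.get("slow_modtime")):
        parts.append(obj["modtime"])
    if "hash" in obj and not (fast and obj.get("slow_hash")):
        parts.append(obj["hash"])
    return ",".join(parts)

# On s3-like backends modtime needs an extra API call, so it is "slow"
s3_obj = {"size": 1024, "modtime": "2022-05-01T10:00:00Z",
          "hash": "d41d8cd9", "slow_modtime": True}
print(fingerprint(s3_obj))             # 1024,2022-05-01T10:00:00Z,d41d8cd9
print(fingerprint(s3_obj, fast=True))  # 1024,d41d8cd9
```

Note how changing `fast` changes the fingerprint string itself, which is why toggling `--vfs-fast-fingerprint` can invalidate existing cache entries.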
+ +If you are running a vfs cache over `local`, `s3` or `swift` backends +then using this flag is recommended. + +Note that if you change the value of this flag, the fingerprints of +the files in the cache may be invalidated and the files will need to +be downloaded again. + ## VFS Chunked Reading When rclone reads files from a remote it reads them in chunks. This @@ -7276,7 +7718,7 @@ read of the modification time takes a transaction. --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. - --read-only Mount read-only. + --read-only Only allow read-only access. Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or @@ -7288,7 +7730,7 @@ on disk cache file. When using VFS write caching (`--vfs-cache-mode` with value writes or full), the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from cache (the related global flag `--checkers` have no effect on mount). +modified files from the cache (the related global flag `--checkers` has no effect on the VFS). --transfers int Number of file transfers to run in parallel (default 4) @@ -7305,28 +7747,35 @@ It is not allowed for two files in the same directory to differ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. -The `--vfs-case-insensitive` mount flag controls how rclone handles these -two cases. If its value is "false", rclone passes file names to the mounted -file system as-is. If the flag is "true" (or appears without a value on +The `--vfs-case-insensitive` VFS flag controls how rclone handles these +two cases. If its value is "false", rclone passes file names to the remote +as-is. 
If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. If an argument refers +different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is -controlled by an underlying mounted file system. +controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the target) -may differ from case sensitivity of a file system mounted by rclone (the source). +may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +## VFS Disk Options + +This flag allows you to manually set the statistics about the filing system. +It can be useful when those statistics cannot be read correctly automatically. + + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + ## Alternate report of used bytes Some backends, most notably S3, do not report the amount of bytes used. 
@@ -7449,7 +7898,7 @@ rclone serve webdav remote:path [flags] --no-seek Don't allow seeking in files --pass string Password for authentication --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) - --read-only Mount read-only + --read-only Only allow read-only access --realm string Realm for authentication (default "rclone") --server-read-timeout duration Timeout for server reading data (default 1h0m0s) --server-write-timeout duration Timeout for server writing data (default 1h0m0s) @@ -7462,6 +7911,8 @@ rclone serve webdav remote:path [flags] --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) @@ -7555,6 +8006,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li * [rclone test changenotify](https://rclone.org/commands/rclone_test_changenotify/) - Log any change notify requests for the remote passed in. * [rclone test histogram](https://rclone.org/commands/rclone_test_histogram/) - Makes a histogram of file name characters. * [rclone test info](https://rclone.org/commands/rclone_test_info/) - Discovers file name or other limitations for paths. 
+* [rclone test makefile](https://rclone.org/commands/rclone_test_makefile/) - Make files with random contents of the size given * [rclone test makefiles](https://rclone.org/commands/rclone_test_makefiles/) - Make a random file hierarchy in a directory * [rclone test memory](https://rclone.org/commands/rclone_test_memory/) - Load all the objects at remote:path into memory and report memory stats. @@ -7645,6 +8097,32 @@ See the [global flags page](https://rclone.org/flags/) for global options not li * [rclone test](https://rclone.org/commands/rclone_test/) - Run a test command +# rclone test makefile + +Make files with random contents of the size given + +``` +rclone test makefile []+ [flags] +``` + +## Options + +``` + --ascii Fill files with random ASCII printable bytes only + --chargen Fill files with a ASCII chargen pattern + -h, --help help for makefile + --pattern Fill files with a periodic pattern + --seed int Seed for the random number generator (0 for random) (default 1) + --sparse Make the files sparse (appear to be filled with ASCII 0x00) + --zero Fill files with ASCII 0x00 +``` + +See the [global flags page](https://rclone.org/flags/) for global options not listed here. 
+ +## SEE ALSO + +* [rclone test](https://rclone.org/commands/rclone_test/) - Run a test command + # rclone test makefiles Make a random file hierarchy in a directory @@ -7656,6 +8134,8 @@ rclone test makefiles [flags] ## Options ``` + --ascii Fill files with random ASCII printable bytes only + --chargen Fill files with a ASCII chargen pattern --files int Number of files to create (default 1000) --files-per-directory int Average number of files per directory (default 10) -h, --help help for makefiles @@ -7663,7 +8143,10 @@ rclone test makefiles [flags] --max-name-length int Maximum size of file names (default 12) --min-file-size SizeSuffix Minimum size of file to create --min-name-length int Minimum size of file names (default 4) + --pattern Fill files with a periodic pattern --seed int Seed for the random number generator (0 for random) (default 1) + --sparse Make the files sparse (appear to be filled with ASCII 0x00) + --zero Fill files with ASCII 0x00 ``` See the [global flags page](https://rclone.org/flags/) for global options not listed here. @@ -7764,12 +8247,16 @@ For example 1 directories, 5 files You can use any of the filtering options with the tree command (e.g. ---include and --exclude). You can also use --fast-list. +`--include` and `--exclude`. You can also use `--fast-list`. The tree command has many options for controlling the listing which -are compatible with the tree command. Note that not all of them have +are compatible with the tree command, for example you can include file +sizes with `--size`. Note that not all of them have short options as they conflict with rclone's short options. +For a more interactive navigation of the remote see the +[ncdu](https://rclone.org/commands/rclone_ncdu/) command. + ``` rclone tree remote:path [flags] @@ -8092,6 +8579,126 @@ This can be used when scripting to make aged backups efficiently, e.g. 
rclone sync -i remote:current-backup remote:previous-backup rclone sync -i /path/to/files remote:current-backup +## Metadata support {#metadata} + +Metadata is data about a file which isn't the contents of the file. +Normally rclone only preserves the modification time and the content +(MIME) type where possible. + +Rclone supports preserving all the available metadata on files (not +directories) when using the `--metadata` or `-M` flag. + +Exactly what metadata is supported and what that support means depends +on the backend. Backends that support metadata have a metadata section +in their docs and are listed in the [features table](https://rclone.org/overview/#features) +(e.g. [local](https://rclone.org/local/#metadata), [s3](/s3/#metadata)). + +Rclone only supports a one-time sync of metadata. This means that +metadata will be synced from the source object to the destination +object only when the source object has changed and needs to be +re-uploaded. If the metadata subsequently changes on the source object +without changing the object itself then it won't be synced to the +destination object. This is in line with the way rclone syncs +`Content-Type` without the `--metadata` flag. + +Using `--metadata` when syncing from local to local will preserve file +attributes such as file mode, owner, extended attributes (not +Windows). + +Note that arbitrary metadata may be added to objects using the +`--metadata-set key=value` flag when the object is first uploaded. +This flag can be repeated as many times as necessary. + +### Types of metadata + +Metadata is divided into two types: system metadata and user metadata. + +Metadata which the backend uses itself is called system metadata. For +example on the local backend the system metadata `uid` will store the +user ID of the file when used on a unix based platform. + +Arbitrary metadata is called user metadata and this can be set however +the user desires.
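As an illustration (hypothetical Python, not rclone's implementation), metadata travels as a string-to-string dictionary that can mix system and user keys, with key names constrained to lower case `a-z`, `0-9` and `.`, `-`, `_`:

```python
# Illustrative metadata dictionary (string keys, string values) mixing
# system metadata understood by a backend with arbitrary user metadata.
import re

# Key-name limits: lower case, a-z 0-9 plus '.', '-' and '_'
KEY_RE = re.compile(r"^[a-z0-9._-]+$")

def valid_key(key: str) -> bool:
    """Check a metadata key name against the limits above."""
    return bool(KEY_RE.match(key))

metadata = {
    "uid": "500",                                    # system (local backend)
    "mtime": "2006-01-02T15:04:05.999999999Z07:00",  # system, RFC 3339
    "my-project-tag": "alpha",                       # arbitrary user metadata
}
assert all(valid_key(k) for k in metadata)
assert not valid_key("Content-Type")  # upper case is not allowed as a key
```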
+ +When objects are copied from backend to backend, rclone will attempt to +interpret system metadata if it is supplied. Metadata may change from +being user metadata to system metadata as objects are copied between +different backends. For example copying an object from s3 sets the +`content-type` metadata. In a backend which understands this (like +`azureblob`) this will become the Content-Type of the object. In a +backend which doesn't understand this (like the `local` backend) this +will become user metadata. However should the local object be copied +back to s3, the Content-Type will be set correctly. + +### Metadata framework + +Rclone implements a metadata framework which can read metadata from an +object and write it to the object when (and only when) it is being +uploaded. + +This metadata is stored as a dictionary with string keys and string +values. + +There are some limits on the names of the keys (these may be clarified +further in the future). + +- must be lower case +- may be `a-z` `0-9` containing `.` `-` or `_` +- length is backend dependent + +Each backend can provide system metadata that it understands. Some +backends can also store arbitrary user metadata. + +Where possible the key names are standardized, so it is possible, for +example, to copy object metadata from s3 to azureblob and the +metadata will be translated appropriately. + +Some backends have limits on the size of the metadata and rclone will +give errors on upload if they are exceeded. + +### Metadata preservation + +The goal of the implementation is to + +1. Preserve metadata if at all possible +2. Interpret metadata if at all possible + +The consequence of 1 is that you can copy an S3 object to a local +disk then back to S3 losslessly. Likewise you can copy a local file +with file attributes and xattrs from local disk to s3 and back again +losslessly.
+ +The consequence of 2 is that you can copy an S3 object with metadata +to Azureblob (say) and have the metadata appear on the Azureblob +object also. + +### Standard system metadata + +Here is a table of standard system metadata which, if appropriate, a +backend may implement. + +| key | description | example | +|---------------------|-------------|---------| +| mode | File type and mode: octal, unix style | 0100664 | +| uid | User ID of owner: decimal number | 500 | +| gid | Group ID of owner: decimal number | 500 | +| rdev | Device ID (if special file) => hexadecimal | 0 | +| atime | Time of last access: RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | +| mtime | Time of last modification: RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | +| btime | Time of file creation (birth): RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | +| cache-control | Cache-Control header | no-cache | +| content-disposition | Content-Disposition header | inline | +| content-encoding | Content-Encoding header | gzip | +| content-language | Content-Language header | en-US | +| content-type | Content-Type header | text/plain | + +The metadata keys `mtime` and `content-type` will take precedence if +supplied in the metadata over reading the `Content-Type` or +modification time of the source object. + +Hashes are not included in system metadata as there is a well defined +way of reading those already. + Options ------- @@ -8306,12 +8913,22 @@ objects to transfer is held in memory before the transfers start. ### --checkers=N ### -The number of checkers to run in parallel. Checkers do the equality -checking of files during a sync. For some storage systems (e.g. S3, -Swift, Dropbox) this can take a significant amount of time so they are -run in parallel. +Originally controlling just the number of file checkers to run in parallel, +e.g. by `rclone copy`. Now a fairly universal parallelism control +used by `rclone` in several places. -The default is to run 8 checkers in parallel. 
+Note: checkers do the equality checking of files during a sync. +For some storage systems (e.g. S3, Swift, Dropbox) this can take +a significant amount of time so they are run in parallel. + +The default is to run 8 checkers in parallel. However, in case +of slow-reacting backends you may need to lower (rather than increase) +this default by setting `--checkers` to 4 or less threads. This is +especially advised if you are experiencing backend server crashes +during file checking phase (e.g. on subsequent or top-up backups +where little or no file copying is done and checking takes up +most of the time). Increase this setting only with utmost care, +while monitoring your server health and file checking throughput. ### -c, --checksum ### @@ -8456,7 +9073,9 @@ See `--compare-dest` and `--backup-dir`. ### --dedupe-mode MODE ### -Mode to run dedupe command in. One of `interactive`, `skip`, `first`, `newest`, `oldest`, `rename`. The default is `interactive`. See the dedupe command for more information as to what these options mean. +Mode to run dedupe command in. One of `interactive`, `skip`, `first`, +`newest`, `oldest`, `rename`. The default is `interactive`. +See the dedupe command for more information as to what these options mean. ### --disable FEATURE,FEATURE,... ### @@ -8608,22 +9227,22 @@ unit prefix appended to the value (e.g. `9.762Ki`), while in more textual output the full unit is shown (e.g. `9.762 KiB`). For counts the SI standard notation is used, e.g. prefix `k` for kilo. Used with file counts, `1k` means 1000 files. -The various [list](commands/rclone_ls/) commands output raw numbers by default. +The various [list](https://rclone.org/commands/rclone_ls/) commands output raw numbers by default. Option `--human-readable` will make them output values in human-readable format instead (with the short unit prefix). 
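As an illustration of the short unit prefix notation (a hedged sketch only; rclone's exact rounding and formatting may differ), byte counts scale by powers of 1024 with IEC-style suffixes:

```python
# Sketch of human-readable size notation with short IEC unit prefixes,
# e.g. 9996 bytes -> "9.762Ki". Illustrative only, not rclone's code.

def human_readable(n: float) -> str:
    """Format a byte count with short IEC unit prefixes (Ki, Mi, ...)."""
    for prefix in ["", "Ki", "Mi", "Gi", "Ti", "Pi"]:
        if abs(n) < 1024 or prefix == "Pi":
            # unscaled values print as plain integers,
            # scaled values with three decimals and a short suffix
            return f"{n:.3f}{prefix}" if prefix else str(int(n))
        n /= 1024
    return str(n)

print(human_readable(512))   # 512
print(human_readable(9996))  # 9.762Ki
```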
-The [about](commands/rclone_about/) command outputs human-readable by default, +The [about](https://rclone.org/commands/rclone_about/) command outputs human-readable by default, with a command-specific option `--full` to output the raw numbers instead. -Command [size](commands/rclone_size/) outputs both human-readable and raw numbers +Command [size](https://rclone.org/commands/rclone_size/) outputs both human-readable and raw numbers in the same output. -The [tree](commands/rclone_tree/) command also considers `--human-readable`, but +The [tree](https://rclone.org/commands/rclone_tree/) command also considers `--human-readable`, but it will not use the exact same notation as the other commands: It rounds to one decimal, and uses single letter suffix, e.g. `K` instead of `Ki`. The reason for this is that it relies on an external library. -The interactive command [ncdu](commands/rclone_ncdu/) shows human-readable by +The interactive command [ncdu](https://rclone.org/commands/rclone_ncdu/) shows human-readable by default, and responds to key `u` for toggling human-readable format. ### --ignore-case-sync ### @@ -8754,7 +9373,12 @@ have a signal to rotate logs. ### --log-format LIST ### -Comma separated list of log format options. Accepted options are `date`, `time`, `microseconds`, `pid`, `longfile`, `shortfile`, `UTC`. Any other keywords will be silently ignored. `pid` will tag log messages with process identifier which useful with `rclone mount --daemon`. Other accepted options are explained in the [go documentation](https://pkg.go.dev/log#pkg-constants). The default log format is "`date`,`time`". +Comma separated list of log format options. Accepted options are `date`, +`time`, `microseconds`, `pid`, `longfile`, `shortfile`, `UTC`. Any other +keywords will be silently ignored. `pid` will tag log messages with the process +identifier, which is useful with `rclone mount --daemon`.
Other accepted +options are explained in the [go documentation](https://pkg.go.dev/log#pkg-constants). +The default log format is "`date`,`time`". ### --log-level LEVEL ### @@ -8856,6 +9480,18 @@ When the limit is reached all transfers will stop immediately. Rclone will exit with exit code 8 if the transfer limit is reached. +## --metadata / -M + +Setting this flag enables rclone to copy the metadata from the source +to the destination. For local backends this is ownership, permissions, +xattr etc. See the [metadata section](#metadata) for more info. + +### --metadata-set key=value + +Add metadata `key` = `value` when uploading. This can be repeated as +many times as required. See the [metadata section](#metadata) for more +info. + ### --cutoff-mode=hard|soft|cautious ### This modifies the behavior of `--max-transfer` @@ -9462,6 +10098,8 @@ of timeouts or bigger if you have lots of bandwidth and a fast remote. The default is to run 4 file transfers in parallel. +Look at `--multi-thread-streams` if you would like to control single file transfers. + ### -u, --update ### This forces rclone to skip any files which exist on the destination @@ -9533,6 +10171,9 @@ With `-vv` rclone will become very verbose telling you about every file it considers and transfers. Please send bug reports with a log with this setting. +When setting verbosity as an environment variable, use +`RCLONE_VERBOSE=1` or `RCLONE_VERBOSE=2` for `-v` and `-vv` respectively. + ### -V, --version ### Prints the version number @@ -9791,6 +10432,7 @@ For the filtering options * `--filter-from` * `--exclude` * `--exclude-from` + * `--exclude-if-present` * `--include` * `--include-from` * `--files-from` @@ -9898,6 +10540,10 @@ override the environment variable setting. Or to always use the trash in drive `--drive-use-trash`, set `RCLONE_DRIVE_USE_TRASH=true`. +Verbosity is slightly different: the environment variable +equivalent of `--verbose` or `-v` is `RCLONE_VERBOSE=1`, +or for `-vv`, `RCLONE_VERBOSE=2`.
+ The same parser is used for the options and the environment variables so they take exactly the same form. @@ -10079,6 +10725,27 @@ Now transfer it to the remote box (scp, cut paste, ftp, sftp, etc.) and place it in the correct place (use `rclone config file` on the remote box to find out where). +## Configuring using SSH Tunnel ## + +Linux and macOS users can use an SSH tunnel to redirect port 53682 on the headless box to the local machine with the following command: +``` +ssh -L localhost:53682:localhost:53682 username@remote_server +``` +Then on the headless box run `rclone config` and answer `Y` to the `Use +auto config?` question. + +``` +... +Remote config +Use auto config? + * Say Y if not sure + * Say N if you are working on a remote or headless machine +y) Yes (default) +n) No +y/n> y +``` +Then copy and paste the auth URL `http://127.0.0.1:53682/auth?state=xxxxxxxxxxxx` into the browser on your local machine, complete the authentication, and it is done. + # Filtering, includes and excludes Filter flags determine which files rclone `sync`, `move`, `ls`, `lsl`, @@ -10207,7 +10874,11 @@ The regular expressions used are as defined in the [Go regular expression reference](https://golang.org/pkg/regexp/syntax/). Regular expressions should be enclosed in `{{` `}}`. They will match only the last path segment if the glob doesn't start with `/` or the whole path -name if it does. +name if it does. Note that rclone does not attempt to parse the +supplied regular expression, meaning that using any regular expression +filter will prevent rclone from using [directory filter rules](#directory_filter), +as it will instead check every path against +the supplied regular expression(s). Here is how the `{{regexp}}` is transformed into a full regular expression to match the entire path: @@ -10323,7 +10994,7 @@ currently a means provided to pass regular expression filter options into rclone directly though character class filter rules contain character classes.
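The last-segment versus whole-path anchoring described above can be sketched as follows (illustrative Python using the `re` module; rclone itself uses Go's regexp engine, whose syntax is very similar):

```python
# Sketch of {{regexp}} filter anchoring: without a leading "/" the
# expression must match only the last path segment; with one it must
# match the whole path. Illustration only, not rclone's implementation.
import re

def matches(path: str, pattern: str, rooted: bool) -> bool:
    """Return True if pattern fully matches the relevant part of path."""
    target = path if rooted else path.split("/")[-1]
    return re.fullmatch(pattern, target) is not None

# like {{.*\.jpe?g}} - unrooted, so only "photo.jpg" is tested
assert matches("dir/photo.jpg", r".*\.jpe?g", rooted=False)
# like /{{dir2/.*}} - rooted, so the whole path must match
assert not matches("dir/photo.jpg", r"dir2/.*", rooted=True)
assert matches("dir/photo.jpg", r"dir/.*\.jpg", rooted=True)
```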
[Go regular expression reference](https://golang.org/pkg/regexp/syntax/) -### How filter rules are applied to directories +### How filter rules are applied to directories {#directory_filter} Rclone commands are applied to path/file names not directories. The entire contents of a directory can be matched @@ -10339,10 +11010,14 @@ recurse into subdirectories. This potentially optimises access to a remote by avoiding listing unnecessary directories. Whether optimisation is desirable depends on the specific filter rules and source remote content. +If any [regular expression filters](#regexp) are in use, then no +directory recursion optimisation is possible, as rclone must check +every path against the supplied regular expression(s). + Directory recursion optimisation occurs if either: * A source remote does not support the rclone `ListR` primitive. local, -sftp, Microsoft OneDrive and WebDav do not support `ListR`. Google +sftp, Microsoft OneDrive and WebDAV do not support `ListR`. Google Drive and most bucket type storage do. [Full list](https://rclone.org/overview/#optional-features) * On other remotes (those that support `ListR`), if the rclone command is not naturally recursive, and @@ -10818,7 +11493,9 @@ Useful for debugging. The `--exclude-if-present` flag controls whether a directory is within the scope of an rclone command based on the presence of a -named file within it. +named file within it. The flag can be repeated to check for +multiple file names, presence of any of them will exclude the +directory. This flag has a priority over other filter flags. @@ -10832,8 +11509,6 @@ E.g. for the following directory structure: The command `rclone ls --exclude-if-present .ignore dir1` does not list `dir3`, `file3` or `.ignore`. -`--exclude-if-present` can only be used once in an rclone command. 
- ## Common pitfalls The most frequent filter support issues on @@ -10952,10 +11627,10 @@ If you have questions then please ask them on the [rclone forum](https://forum.r If rclone is run with the `--rc` flag then it starts an HTTP server which can be used to remote control rclone using its API. -You can either use the [rclone rc](#api-rc) command to access the API +You can either use the [rc](#api-rc) command to access the API or [use HTTP directly](#api-http). -If you just want to run a remote control then see the [rcd command](https://rclone.org/commands/rclone_rcd/). +If you just want to run a remote control then see the [rcd](https://rclone.org/commands/rclone_rcd/) command. ## Supported parameters @@ -11094,6 +11769,16 @@ use these methods. The alternative is to use `--rc-user` and Default Off. +### --rc-baseurl + +Prefix for URLs. + +Default is root + +### --rc-template + +User-specified template. + ## Accessing the remote control via the rclone rc command {#api-rc} Rclone itself implements the remote control protocol in its `rclone @@ -11479,7 +12164,7 @@ This takes the following parameters: - result - result to restart with - used with continue -See the [config create command](https://rclone.org/commands/rclone_config_create/) command for more information on the above. +See the [config create](https://rclone.org/commands/rclone_config_create/) command for more information on the above. **Authentication is required for this call.** @@ -11489,7 +12174,7 @@ Parameters: - name - name of remote to delete -See the [config delete command](https://rclone.org/commands/rclone_config_delete/) command for more information on the above. +See the [config delete](https://rclone.org/commands/rclone_config_delete/) command for more information on the above. **Authentication is required for this call.** @@ -11500,7 +12185,7 @@ Returns a JSON object: Where keys are remote names and values are the config parameters. 
-See the [config dump command](https://rclone.org/commands/rclone_config_dump/) command for more information on the above. +See the [config dump](https://rclone.org/commands/rclone_config_dump/) command for more information on the above. **Authentication is required for this call.** @@ -11510,7 +12195,7 @@ Parameters: - name - name of remote to get -See the [config dump command](https://rclone.org/commands/rclone_config_dump/) command for more information on the above. +See the [config dump](https://rclone.org/commands/rclone_config_dump/) command for more information on the above. **Authentication is required for this call.** @@ -11519,7 +12204,7 @@ See the [config dump command](https://rclone.org/commands/rclone_config_dump/) c Returns - remotes - array of remote names -See the [listremotes command](https://rclone.org/commands/rclone_listremotes/) command for more information on the above. +See the [listremotes](https://rclone.org/commands/rclone_listremotes/) command for more information on the above. **Authentication is required for this call.** @@ -11531,7 +12216,7 @@ This takes the following parameters: - parameters - a map of \{ "key": "value" \} pairs -See the [config password command](https://rclone.org/commands/rclone_config_password/) command for more information on the above. +See the [config password](https://rclone.org/commands/rclone_config_password/) command for more information on the above. **Authentication is required for this call.** @@ -11540,7 +12225,7 @@ See the [config password command](https://rclone.org/commands/rclone_config_pass Returns a JSON object: - providers - array of objects -See the [config providers command](https://rclone.org/commands/rclone_config_providers/) command for more information on the above. +See the [config providers](https://rclone.org/commands/rclone_config_providers/) command for more information on the above. 
**Authentication is required for this call.** @@ -11560,7 +12245,7 @@ This takes the following parameters: - result - result to restart with - used with continue -See the [config update command](https://rclone.org/commands/rclone_config_update/) command for more information on the above. +See the [config update](https://rclone.org/commands/rclone_config_update/) command for more information on the above. **Authentication is required for this call.** @@ -12006,7 +12691,7 @@ This takes the following parameters: The result is as returned from rclone about --json -See the [about command](https://rclone.org/commands/rclone_size/) command for more information on the above. +See the [about](https://rclone.org/commands/rclone_about/) command for more information on the above. **Authentication is required for this call.** @@ -12016,7 +12701,7 @@ This takes the following parameters: - fs - a remote name string e.g. "drive:" -See the [cleanup command](https://rclone.org/commands/rclone_cleanup/) command for more information on the above. +See the [cleanup](https://rclone.org/commands/rclone_cleanup/) command for more information on the above. **Authentication is required for this call.** @@ -12039,7 +12724,8 @@ This takes the following parameters: - remote - a path within that remote e.g. "dir" - url - string, URL to read from - autoFilename - boolean, set to true to retrieve destination file name from url -See the [copyurl command](https://rclone.org/commands/rclone_copyurl/) command for more information on the above. + +See the [copyurl](https://rclone.org/commands/rclone_copyurl/) command for more information on the above. **Authentication is required for this call.** @@ -12049,7 +12735,7 @@ This takes the following parameters: - fs - a remote name string e.g. "drive:" -See the [delete command](https://rclone.org/commands/rclone_delete/) command for more information on the above. 
+See the [delete](https://rclone.org/commands/rclone_delete/) command for more information on the above. **Authentication is required for this call.** @@ -12060,7 +12746,7 @@ This takes the following parameters: - fs - a remote name string e.g. "drive:" - remote - a path within that remote e.g. "dir" -See the [deletefile command](https://rclone.org/commands/rclone_deletefile/) command for more information on the above. +See the [deletefile](https://rclone.org/commands/rclone_deletefile/) command for more information on the above. **Authentication is required for this call.** @@ -12074,46 +12760,103 @@ This returns info about the remote passed in; ``` { - // optional features and whether they are available or not - "Features": { - "About": true, - "BucketBased": false, - "CanHaveEmptyDirectories": true, - "CaseInsensitive": false, - "ChangeNotify": false, - "CleanUp": false, - "Copy": false, - "DirCacheFlush": false, - "DirMove": true, - "DuplicateFiles": false, - "GetTier": false, - "ListR": false, - "MergeDirs": false, - "Move": true, - "OpenWriterAt": true, - "PublicLink": false, - "Purge": true, - "PutStream": true, - "PutUnchecked": false, - "ReadMimeType": false, - "ServerSideAcrossConfigs": false, - "SetTier": false, - "SetWrapper": false, - "UnWrap": false, - "WrapFs": false, - "WriteMimeType": false - }, - // Names of hashes available - "Hashes": [ - "MD5", - "SHA-1", - "DropboxHash", - "QuickXorHash" - ], - "Name": "local", // Name as created - "Precision": 1, // Precision of timestamps in ns - "Root": "/", // Path as created - "String": "Local file system at /" // how the remote will appear in logs + // optional features and whether they are available or not + "Features": { + "About": true, + "BucketBased": false, + "BucketBasedRootOK": false, + "CanHaveEmptyDirectories": true, + "CaseInsensitive": false, + "ChangeNotify": false, + "CleanUp": false, + "Command": true, + "Copy": false, + "DirCacheFlush": false, + "DirMove": true, + "Disconnect": false, + 
"DuplicateFiles": false, + "GetTier": false, + "IsLocal": true, + "ListR": false, + "MergeDirs": false, + "MetadataInfo": true, + "Move": true, + "OpenWriterAt": true, + "PublicLink": false, + "Purge": true, + "PutStream": true, + "PutUnchecked": false, + "ReadMetadata": true, + "ReadMimeType": false, + "ServerSideAcrossConfigs": false, + "SetTier": false, + "SetWrapper": false, + "Shutdown": false, + "SlowHash": true, + "SlowModTime": false, + "UnWrap": false, + "UserInfo": false, + "UserMetadata": true, + "WrapFs": false, + "WriteMetadata": true, + "WriteMimeType": false + }, + // Names of hashes available + "Hashes": [ + "md5", + "sha1", + "whirlpool", + "crc32", + "sha256", + "dropbox", + "mailru", + "quickxor" + ], + "Name": "local", // Name as created + "Precision": 1, // Precision of timestamps in ns + "Root": "/", // Path as created + "String": "Local file system at /", // how the remote will appear in logs + // Information about the system metadata for this backend + "MetadataInfo": { + "System": { + "atime": { + "Help": "Time of last access", + "Type": "RFC 3339", + "Example": "2006-01-02T15:04:05.999999999Z07:00" + }, + "btime": { + "Help": "Time of file birth (creation)", + "Type": "RFC 3339", + "Example": "2006-01-02T15:04:05.999999999Z07:00" + }, + "gid": { + "Help": "Group ID of owner", + "Type": "decimal number", + "Example": "500" + }, + "mode": { + "Help": "File type and mode", + "Type": "octal, unix style", + "Example": "0100664" + }, + "mtime": { + "Help": "Time of last modification", + "Type": "RFC 3339", + "Example": "2006-01-02T15:04:05.999999999Z07:00" + }, + "rdev": { + "Help": "Device ID (if special file)", + "Type": "hexadecimal", + "Example": "1abc" + }, + "uid": { + "Help": "User ID of owner", + "Type": "decimal number", + "Example": "500" + } + }, + "Help": "Textual help string\n" + } } ``` @@ -12136,6 +12879,7 @@ This takes the following parameters: - noMimeType - If set don't show mime types - dirsOnly - If set only show directories 
- filesOnly - If set only show files + - metadata - If set return metadata of objects also - hashTypes - array of strings of hash types to show if showHash set Returns: @@ -12143,7 +12887,7 @@ Returns: - list - This is an array of objects as described in the lsjson command -See the [lsjson command](https://rclone.org/commands/rclone_lsjson/) for more information on the above and examples. +See the [lsjson](https://rclone.org/commands/rclone_lsjson/) command for more information on the above and examples. **Authentication is required for this call.** @@ -12154,7 +12898,7 @@ This takes the following parameters: - fs - a remote name string e.g. "drive:" - remote - a path within that remote e.g. "dir" -See the [mkdir command](https://rclone.org/commands/rclone_mkdir/) command for more information on the above. +See the [mkdir](https://rclone.org/commands/rclone_mkdir/) command for more information on the above. **Authentication is required for this call.** @@ -12182,7 +12926,7 @@ Returns: - url - URL of the resource -See the [link command](https://rclone.org/commands/rclone_link/) command for more information on the above. +See the [link](https://rclone.org/commands/rclone_link/) command for more information on the above. **Authentication is required for this call.** @@ -12193,7 +12937,7 @@ This takes the following parameters: - fs - a remote name string e.g. "drive:" - remote - a path within that remote e.g. "dir" -See the [purge command](https://rclone.org/commands/rclone_purge/) command for more information on the above. +See the [purge](https://rclone.org/commands/rclone_purge/) command for more information on the above. **Authentication is required for this call.** @@ -12204,7 +12948,7 @@ This takes the following parameters: - fs - a remote name string e.g. "drive:" - remote - a path within that remote e.g. "dir" -See the [rmdir command](https://rclone.org/commands/rclone_rmdir/) command for more information on the above. 
+See the [rmdir](https://rclone.org/commands/rclone_rmdir/) command for more information on the above. **Authentication is required for this call.** @@ -12215,7 +12959,8 @@ This takes the following parameters: - fs - a remote name string e.g. "drive:" - remote - a path within that remote e.g. "dir" - leaveRoot - boolean, set to true not to delete the root -See the [rmdirs command](https://rclone.org/commands/rclone_rmdirs/) command for more information on the above. + +See the [rmdirs](https://rclone.org/commands/rclone_rmdirs/) command for more information on the above. **Authentication is required for this call.** @@ -12230,7 +12975,7 @@ Returns: - count - number of files - bytes - number of bytes in those files -See the [size command](https://rclone.org/commands/rclone_size/) command for more information on the above. +See the [size](https://rclone.org/commands/rclone_size/) command for more information on the above. **Authentication is required for this call.** @@ -12250,7 +12995,7 @@ The result is Note that if you are only interested in files then it is much more efficient to set the filesOnly flag in the options. -See the [lsjson command](https://rclone.org/commands/rclone_lsjson/) for more information on the above and examples. +See the [lsjson](https://rclone.org/commands/rclone_lsjson/) command for more information on the above and examples. **Authentication is required for this call.** @@ -12261,7 +13006,8 @@ This takes the following parameters: - fs - a remote name string e.g. "drive:" - remote - a path within that remote e.g. "dir" - each part in body represents a file to be uploaded -See the [uploadfile command](https://rclone.org/commands/rclone_uploadfile/) command for more information on the above. + +See the [uploadfile](https://rclone.org/commands/rclone_uploadfile/) command for more information on the above. 
**Authentication is required for this call.** @@ -12476,7 +13222,7 @@ This takes the following parameters: - createEmptySrcDirs - create empty src directories on destination if set -See the [copy command](https://rclone.org/commands/rclone_copy/) command for more information on the above. +See the [copy](https://rclone.org/commands/rclone_copy/) command for more information on the above. **Authentication is required for this call.** @@ -12490,7 +13236,7 @@ This takes the following parameters: - deleteEmptySrcDirs - delete empty src directories if set -See the [move command](https://rclone.org/commands/rclone_move/) command for more information on the above. +See the [move](https://rclone.org/commands/rclone_move/) command for more information on the above. **Authentication is required for this call.** @@ -12503,7 +13249,7 @@ This takes the following parameters: - createEmptySrcDirs - create empty src directories on destination if set -See the [sync command](https://rclone.org/commands/rclone_sync/) command for more information on the above. +See the [sync](https://rclone.org/commands/rclone_sync/) command for more information on the above. **Authentication is required for this call.** @@ -12854,47 +13600,49 @@ show through. Here is an overview of the major features of each cloud storage system. 
-| Name | Hash | ModTime | Case Insensitive | Duplicate Files | MIME Type | -| ---------------------------- |:-----------:|:-------:|:----------------:|:---------------:|:---------:| -| 1Fichier | Whirlpool | No | No | Yes | R | -| Akamai Netstorage | MD5, SHA256 | Yes | No | No | R | -| Amazon Drive | MD5 | No | Yes | No | R | -| Amazon S3 (or S3 compatible) | MD5 | Yes | No | No | R/W | -| Backblaze B2 | SHA1 | Yes | No | No | R/W | -| Box | SHA1 | Yes | Yes | No | - | -| Citrix ShareFile | MD5 | Yes | Yes | No | - | -| Dropbox | DBHASH ¹ | Yes | Yes | No | - | -| Enterprise File Fabric | - | Yes | Yes | No | R/W | -| FTP | - | No | No | No | - | -| Google Cloud Storage | MD5 | Yes | No | No | R/W | -| Google Drive | MD5 | Yes | No | Yes | R/W | -| Google Photos | - | No | No | Yes | R | -| HDFS | - | Yes | No | No | - | -| HTTP | - | No | No | No | R | -| Hubic | MD5 | Yes | No | No | R/W | -| Jottacloud | MD5 | Yes | Yes | No | R | -| Koofr | MD5 | No | Yes | No | - | -| Mail.ru Cloud | Mailru ⁶ | Yes | Yes | No | - | -| Mega | - | No | No | Yes | - | -| Memory | MD5 | Yes | No | No | - | -| Microsoft Azure Blob Storage | MD5 | Yes | No | No | R/W | -| Microsoft OneDrive | SHA1 ⁵ | Yes | Yes | No | R | -| OpenDrive | MD5 | Yes | Yes | Partial ⁸ | - | -| OpenStack Swift | MD5 | Yes | No | No | R/W | -| pCloud | MD5, SHA1 ⁷ | Yes | No | No | W | -| premiumize.me | - | No | Yes | No | R | -| put.io | CRC-32 | Yes | No | Yes | R | -| QingStor | MD5 | No | No | No | R/W | -| Seafile | - | No | No | No | - | -| SFTP | MD5, SHA1 ² | Yes | Depends | No | - | -| Sia | - | No | No | No | - | -| SugarSync | - | No | No | No | - | -| Storj | - | Yes | No | No | - | -| Uptobox | - | No | No | Yes | - | -| WebDAV | MD5, SHA1 ³ | Yes ⁴ | Depends | No | - | -| Yandex Disk | MD5 | Yes | No | No | R | -| Zoho WorkDrive | - | No | No | No | - | -| The local filesystem | All | Yes | Depends | No | - | +| Name | Hash | ModTime | Case Insensitive | Duplicate Files | MIME Type | 
Metadata | +| ---------------------------- |:----------------:|:-------:|:----------------:|:---------------:|:---------:|:--------:| +| 1Fichier | Whirlpool | - | No | Yes | R | - | +| Akamai Netstorage | MD5, SHA256 | R/W | No | No | R | - | +| Amazon Drive | MD5 | - | Yes | No | R | - | +| Amazon S3 (or S3 compatible) | MD5 | R/W | No | No | R/W | RWU | +| Backblaze B2 | SHA1 | R/W | No | No | R/W | - | +| Box | SHA1 | R/W | Yes | No | - | - | +| Citrix ShareFile | MD5 | R/W | Yes | No | - | - | +| Dropbox | DBHASH ¹ | R | Yes | No | - | - | +| Enterprise File Fabric | - | R/W | Yes | No | R/W | - | +| FTP | - | R/W ¹⁰ | No | No | - | - | +| Google Cloud Storage | MD5 | R/W | No | No | R/W | - | +| Google Drive | MD5 | R/W | No | Yes | R/W | - | +| Google Photos | - | - | No | Yes | R | - | +| HDFS | - | R/W | No | No | - | - | +| HiDrive | HiDrive ¹² | R/W | No | No | - | - | +| HTTP | - | R | No | No | R | - | +| Hubic | MD5 | R/W | No | No | R/W | - | +| Internet Archive | MD5, SHA1, CRC32 | R/W ¹¹ | No | No | - | RWU | +| Jottacloud | MD5 | R/W | Yes | No | R | - | +| Koofr | MD5 | - | Yes | No | - | - | +| Mail.ru Cloud | Mailru ⁶ | R/W | Yes | No | - | - | +| Mega | - | - | No | Yes | - | - | +| Memory | MD5 | R/W | No | No | - | - | +| Microsoft Azure Blob Storage | MD5 | R/W | No | No | R/W | - | +| Microsoft OneDrive | SHA1 ⁵ | R/W | Yes | No | R | - | +| OpenDrive | MD5 | R/W | Yes | Partial ⁸ | - | - | +| OpenStack Swift | MD5 | R/W | No | No | R/W | - | +| pCloud | MD5, SHA1 ⁷ | R | No | No | W | - | +| premiumize.me | - | - | Yes | No | R | - | +| put.io | CRC-32 | R/W | No | Yes | R | - | +| QingStor | MD5 | - ⁹ | No | No | R/W | - | +| Seafile | - | - | No | No | - | - | +| SFTP | MD5, SHA1 ² | R/W | Depends | No | - | - | +| Sia | - | - | No | No | - | - | +| SugarSync | - | - | No | No | - | - | +| Storj | - | R | No | No | - | - | +| Uptobox | - | - | No | Yes | - | - | +| WebDAV | MD5, SHA1 ³ | R ⁴ | Depends | No | - | - | +| Yandex Disk | MD5 
| R/W | No | No | R | - | +| Zoho WorkDrive | - | - | No | No | - | - | +| The local filesystem | All | R/W | Depends | No | - | RWU | ### Notes @@ -12923,6 +13671,20 @@ storage platform has been determined to allow duplicate files, and it is possible to create them with `rclone`. It may be that this is a mistake or an unsupported feature. +⁹ QingStor does not support SetModTime for objects bigger than 5 GiB. + +¹⁰ FTP supports modtimes for the major FTP servers, and also others +if they advertise the required protocol extensions. See [this](https://rclone.org/ftp/#modified-time) +for more details. + +¹¹ Internet Archive requires option `wait_archive` to be set to a non-zero value +for full modtime support. + +¹² HiDrive supports [its own custom +hash](https://static.hidrive.com/dev/0001). +It combines SHA1 sums for each 4 KiB block hierarchically to a single +top-level sum. + ### Hash ### The cloud storage system supports various hash types of the objects. @@ -12935,13 +13697,36 @@ systems they must support a common hash type. ### ModTime ### -The cloud storage system supports setting modification times on -objects. If it does then this enables a using the modification times -as part of the sync. If not then only the size will be checked by -default, though the MD5SUM can be checked with the `--checksum` flag. +Almost all cloud storage systems store some sort of timestamp +on objects, but for several of them it is not something that is appropriate +to use for syncing. E.g. some backends will only write a timestamp +that represents the time of the upload. To be relevant for syncing +it should be able to store the modification time of the source +object. If this is not the case, rclone will only check the file +size by default, though it can be configured to check the file hash +(with the `--checksum` flag). Ideally it should also be possible to +change the timestamp of an existing file without having to re-upload it. 
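The size/checksum fallback described above can be sketched in a few lines. This is a simplified, hypothetical illustration of the decision, not rclone's actual implementation: `Item` and `needs_transfer` are made-up names, and `modify_window` stands in for the `--modify-window` flag.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Item:
    size: int
    mtime: Optional[float]  # None when the backend has no usable modtime
    md5: Optional[str]      # None when there is no common hash type

def needs_transfer(src, dst, checksum=False, modify_window=1.0):
    """Decide whether src must be re-copied over dst, per the rules above."""
    if src.size != dst.size:
        return True                          # size is always checked
    if checksum and src.md5 and dst.md5:
        return src.md5 != dst.md5            # --checksum: compare hashes
    if src.mtime is not None and dst.mtime is not None:
        return abs(src.mtime - dst.mtime) > modify_window
    return False                             # size-only fallback

# A backend that only stores upload time yields mtime=None here, so
# equal-sized files are treated as unchanged unless checksum is requested.
```

Note how two equal-sized files with different content are only caught when a hash comparison is requested, which is why `--checksum` matters on backends without usable modtimes.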
-All cloud storage systems support some kind of date on the object and -these will be set when transferring from the cloud storage system. +For storage systems with a `-` in the ModTime column, the +modification time read on objects is not the modification time of the +file when it was uploaded. It is most likely the time the file was uploaded, +or possibly something else (like the time the picture was taken in +Google Photos). + +For storage systems with an `R` (for read-only) in the ModTime column, +the backend keeps modification times on objects, and updates them +when uploading objects, but it does not support changing only the +modification time (the `SetModTime` operation) without re-uploading, +possibly not even without deleting the existing object first. Some operations +in rclone, such as the `copy` and `sync` commands, will automatically +check for `SetModTime` support and re-upload if necessary to keep +the modification times in sync. Other commands will not work +without `SetModTime` support, e.g. the `touch` command on an existing +file will fail, and modification-time-only changes to files +in a `mount` will be silently ignored. + +Storage systems with `R/W` (for read/write) in the ModTime column +also support modtime-only operations. ### Case Insensitive ### @@ -13141,35 +13926,36 @@ list of all possible values by passing an invalid value to this flag, e.g. `--local-encoding "help"`. The command `rclone help flags encoding` will show you the defaults for the backends. -| Encoding | Characters | -| --------- | ---------- | -| Asterisk | `*` | -| BackQuote | `` ` `` | -| BackSlash | `\` | -| Colon | `:` | -| CrLf | CR 0x0D, LF 0x0A | -| Ctl | All control characters 0x00-0x1F | -| Del | DEL 0x7F | -| Dollar | `$` | -| Dot | `.` or `..` as entire string | -| DoubleQuote | `"` | -| Hash | `#` | -| InvalidUtf8 | An invalid UTF-8 character (e.g. 
latin1) | -| LeftCrLfHtVt | CR 0x0D, LF 0x0A,HT 0x09, VT 0x0B on the left of a string | -| LeftPeriod | `.` on the left of a string | -| LeftSpace | SPACE on the left of a string | -| LeftTilde | `~` on the left of a string | -| LtGt | `<`, `>` | -| None | No characters are encoded | -| Percent | `%` | -| Pipe | \| | -| Question | `?` | -| RightCrLfHtVt | CR 0x0D, LF 0x0A, HT 0x09, VT 0x0B on the right of a string | -| RightPeriod | `.` on the right of a string | -| RightSpace | SPACE on the right of a string | -| SingleQuote | `'` | -| Slash | `/` | -| SquareBracket | `[`, `]` | +| Encoding | Characters | Encoded as | +| --------- | ---------- | ---------- | +| Asterisk | `*` | `＊` | +| BackQuote | `` ` `` | `｀` | +| BackSlash | `\` | `＼` | +| Colon | `:` | `：` | +| CrLf | CR 0x0D, LF 0x0A | `␍`, `␊` | +| Ctl | All control characters 0x00-0x1F | `␀␁␂␃␄␅␆␇␈␉␊␋␌␍␎␏␐␑␒␓␔␕␖␗␘␙␚␛␜␝␞␟` | +| Del | DEL 0x7F | `␡` | +| Dollar | `$` | `＄` | +| Dot | `.` or `..` as entire string | `．`, `．．` | +| DoubleQuote | `"` | `＂` | +| Hash | `#` | `＃` | +| InvalidUtf8 | An invalid UTF-8 character (e.g. latin1) | `�` | +| LeftCrLfHtVt | CR 0x0D, LF 0x0A, HT 0x09, VT 0x0B on the left of a string | `␍`, `␊`, `␉`, `␋` | +| LeftPeriod | `.` on the left of a string | `．` | +| LeftSpace | SPACE on the left of a string | `␠` | +| LeftTilde | `~` on the left of a string | `～` | +| LtGt | `<`, `>` | `＜`, `＞` | +| None | No characters are encoded | | +| Percent | `%` | `％` | +| Pipe | \| | `｜` | +| Question | `?` | `？` | +| RightCrLfHtVt | CR 0x0D, LF 0x0A, HT 0x09, VT 0x0B on the right of a string | `␍`, `␊`, `␉`, `␋` | +| RightPeriod | `.` on the right of a string | `．` | +| RightSpace | SPACE on the right of a string | `␠` | +| Semicolon | `;` | `；` | +| SingleQuote | `'` | `＇` | +| Slash | `/` | `／` | +| SquareBracket | `[`, `]` | `［`, `］` | ##### Encoding example: FTP @@ -13244,6 +14030,22 @@ remote which supports writing (`W`) then rclone will preserve the MIME types. 
Otherwise they will be guessed from the extension, or the remote itself may assign the MIME type. +### Metadata + +Backends may or may not support reading or writing metadata. They may +support reading and writing system metadata (metadata intrinsic to +that backend) and/or user metadata (general purpose metadata). + +The levels of metadata support are + +| Key | Explanation | +|-----|-------------| +| `R` | Read only System Metadata | +| `RW` | Read and write System Metadata | +| `RWU` | Read and write System Metadata and read and write User Metadata | + +See [the metadata docs](https://rclone.org/docs/#metadata) for more info. + ## Optional Features ## All rclone remotes support a base command set. Other features depend @@ -13252,8 +14054,9 @@ upon backend-specific capabilities. | Name | Purge | Copy | Move | DirMove | CleanUp | ListR | StreamUpload | LinkSharing | About | EmptyDir | | ---------------------------- |:-----:|:----:|:----:|:-------:|:-------:|:-----:|:------------:|:------------:|:-----:|:--------:| | 1Fichier | No | Yes | Yes | No | No | No | No | Yes | No | Yes | +| Akamai Netstorage | Yes | No | No | No | No | Yes | Yes | No | No | Yes | | Amazon Drive | Yes | No | Yes | Yes | No | No | No | No | No | Yes | -| Amazon S3 | No | Yes | No | No | Yes | Yes | Yes | Yes | No | No | +| Amazon S3 (or S3 compatible) | No | Yes | No | No | Yes | Yes | Yes | Yes | No | No | | Backblaze B2 | No | Yes | No | No | Yes | Yes | Yes | Yes | No | No | | Box | Yes | Yes | Yes | Yes | Yes ‡‡ | No | Yes | Yes | Yes | Yes | | Citrix ShareFile | Yes | Yes | Yes | Yes | No | No | Yes | No | No | Yes | @@ -13264,9 +14067,12 @@ 
| Google Drive | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | | Google Photos | No | No | No | No | No | No | No | No | No | No | | HDFS | Yes | No | Yes | Yes | No | No | Yes | No | Yes | Yes | +| HiDrive | Yes | Yes | Yes | Yes | No | No | Yes | No | No | Yes | | HTTP | No | No | No | No | No | No | No | No | No | Yes | | Hubic | Yes † | Yes | No | No | No | Yes | Yes | No | Yes | No | +| Internet Archive | No | Yes | No | No | Yes | Yes | No | Yes | Yes | No | | Jottacloud | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | +| Koofr | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes | Yes | | Mail.ru Cloud | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes | | Mega | Yes | No | Yes | Yes | Yes | No | No | Yes | Yes | Yes | | Memory | No | Yes | No | No | No | Yes | Yes | No | No | No | @@ -13280,6 +14086,7 @@ upon backend-specific capabilities. | QingStor | No | Yes | No | No | Yes | Yes | No | No | No | No | | Seafile | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | | SFTP | No | No | Yes | Yes | No | No | Yes | No | Yes | Yes | +| Sia | No | No | No | No | No | No | Yes | No | No | Yes | | SugarSync | Yes | Yes | Yes | Yes | No | No | Yes | Yes | No | Yes | | Storj | Yes † | No | Yes | No | No | Yes | Yes | No | No | No | | Uptobox | No | Yes | Yes | Yes | No | No | No | No | No | No | @@ -13407,6 +14214,7 @@ These flags are available for every command. --delete-during When synchronizing, delete files during transfer --delete-excluded Delete files on dest excluded from sync --disable string Disable a comma separated list of features (use --disable help to see a list) + --disable-http-keep-alives Disable HTTP keep-alives and use each connection once. --disable-http2 Disable HTTP/2 in the global transport -n, --dry-run Do a trial run with no permanent changes --dscp string Set DSCP value to connections, value or name, e.g. CS1, LE, DF, AF21 @@ -13416,7 +14224,7 @@ These flags are available for every command. 
--error-on-no-transfer Sets exit code 9 if no files are transferred, useful in scripts --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file (use - to read from stdin) - --exclude-if-present string Exclude directories if filename is present + --exclude-if-present stringArray Exclude directories if filename is present --expect-continue-timeout duration Timeout when using expect / 100-continue in HTTP (default 1s) --fast-list Use recursive list if available; uses more memory but fewer transactions --files-from stringArray Read list of source-file names from file (use - to read from stdin) @@ -13455,6 +14263,8 @@ These flags are available for every command. --max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000) --max-transfer SizeSuffix Maximum size of data to transfer (default off) --memprofile string Write memory profile to file + -M, --metadata If set, preserve metadata when copying objects + --metadata-set stringArray Add metadata key=value when uploading --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) --modify-window duration Max time diff to be considered the same (default 1ns) @@ -13526,7 +14336,7 @@ These flags are available for every command. --use-json-log Use json log format --use-mmap Use mmap allocator (see docs) --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string (default "rclone/v1.58.0") + --user-agent string Set the user-agent to a specified string (default "rclone/v1.59.0") -v, --verbose count Print lots more stuff (repeat for more) ``` @@ -13581,6 +14391,7 @@ and may be set in the config file. 
--b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) + --b2-version-at Time Show file versions as they were at the specified time (default off) --b2-versions Include old versions in directory listings --box-access-token string Box App Primary Access Token --box-auth-url string Auth server URL @@ -13620,6 +14431,7 @@ and may be set in the config file. --chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks --chunker-hash-type string Choose how chunker handles hash sums (default "md5") --chunker-remote string Remote to chunk/unchunk + --combine-upstreams SpaceSepList Upstreams for combining --compress-level int GZIP compression level (-2 to 9) (default -1) --compress-mode string Compression mode (default "gzip") --compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi) @@ -13652,6 +14464,7 @@ and may be set in the config file. --drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000) --drive-pacer-burst int Number of API calls to allow without sleeping (default 100) --drive-pacer-min-sleep Duration Minimum time to sleep between API calls (default 100ms) + --drive-resource-key string Resource key for accessing a link-shared file --drive-root-folder-id string ID of the root folder --drive-scope string Scope that rclone should use when requesting access from drive --drive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different drive configs @@ -13707,6 +14520,7 @@ and may be set in the config file. 
--ftp-disable-epsv Disable using EPSV even if server advertises support --ftp-disable-mlsd Disable using MLSD even if server advertises support --ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS) + --ftp-disable-utf8 Disable using UTF-8 even if server advertises support --ftp-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot) --ftp-explicit-tls Use Explicit FTPS (FTP over TLS) --ftp-host string FTP host to connect to @@ -13725,8 +14539,10 @@ and may be set in the config file. --gcs-bucket-policy-only Access checks should use bucket-level IAM policies --gcs-client-id string OAuth Client Id --gcs-client-secret string OAuth Client Secret + --gcs-decompress If set this will decompress gzip encoded objects --gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot) --gcs-location string Location for the newly created buckets + --gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it --gcs-object-acl string Access Control List for new objects --gcs-project-number string Project number --gcs-service-account-file string Service Account Credentials JSON file path @@ -13752,10 +14568,24 @@ and may be set in the config file. 
--hdfs-namenode string Hadoop name node and port --hdfs-service-principal-name string Kerberos service principal name for the namenode --hdfs-username string Hadoop user name + --hidrive-auth-url string Auth server URL + --hidrive-chunk-size SizeSuffix Chunksize for chunked uploads (default 48Mi) + --hidrive-client-id string OAuth Client Id + --hidrive-client-secret string OAuth Client Secret + --hidrive-disable-fetching-member-count Do not fetch number of objects in directories unless it is absolutely necessary + --hidrive-encoding MultiEncoder The encoding for the backend (default Slash,Dot) + --hidrive-endpoint string Endpoint for the service (default "https://api.hidrive.strato.com/2.1") + --hidrive-root-prefix string The root/parent folder for all paths (default "/") + --hidrive-scope-access string Access permissions that rclone should use when requesting access from HiDrive (default "rw") + --hidrive-scope-role string User-level that rclone should use when requesting access from HiDrive (default "user") + --hidrive-token string OAuth Access Token as a JSON blob + --hidrive-token-url string Token server url + --hidrive-upload-concurrency int Concurrency for chunked uploads (default 4) + --hidrive-upload-cutoff SizeSuffix Cutoff/Threshold for chunked uploads (default 96Mi) --http-headers CommaSepList Set HTTP headers for all transactions --http-no-head Don't use HEAD requests --http-no-slash Set this if the site doesn't end directories with / - --http-url string URL of http host to connect to + --http-url string URL of HTTP host to connect to --hubic-auth-url string Auth server URL --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi) --hubic-client-id string OAuth Client Id @@ -13764,6 +14594,13 @@ and may be set in the config file. 
--hubic-no-chunk Don't chunk files during streaming upload --hubic-token string OAuth Access Token as a JSON blob --hubic-token-url string Token server url + --internetarchive-access-key-id string IAS3 Access Key + --internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true) + --internetarchive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot) + --internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org") + --internetarchive-front-endpoint string Host of InternetArchive Frontend (default "https://archive.org") + --internetarchive-secret-access-key string IAS3 Secret Key (password) + --internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s) --jottacloud-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi) @@ -13785,7 +14622,7 @@ and may be set in the config file. --local-no-preallocate Disable preallocation of disk space for transferred files --local-no-set-modtime Disable setting modtime --local-no-sparse Disable sparse files for multi-thread downloads - --local-nounc string Disable UNC (long path names) conversion on Windows + --local-nounc Disable UNC (long path names) conversion on Windows --local-unicode-normalization Apply unicode NFC normalization to paths and filenames --local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated) --mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true) @@ -13806,11 +14643,11 @@ and may be set in the config file. 
--netstorage-protocol string Select between HTTP or HTTPS protocol (default "https") --netstorage-secret string Set the NetStorage account secret/G2O key for authentication (obscured) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only) + --onedrive-access-scopes SpaceSepList Set scopes to be requested by rclone (default Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access) --onedrive-auth-url string Auth server URL --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi) --onedrive-client-id string OAuth Client Id --onedrive-client-secret string OAuth Client Secret - --onedrive-disable-site-permission Disable the request for Sites.Read.All permission --onedrive-drive-id string The ID of the drive to use --onedrive-drive-type string The type of the drive (personal | business | documentLibrary) --onedrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot) @@ -13834,9 +14671,11 @@ and may be set in the config file. 
--pcloud-client-secret string OAuth Client Secret --pcloud-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --pcloud-hostname string Hostname to connect to (default "api.pcloud.com") + --pcloud-password string Your pcloud password (obscured) --pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point (default "d0") --pcloud-token string OAuth Access Token as a JSON blob --pcloud-token-url string Token server url + --pcloud-username string Your pcloud username --premiumizeme-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot) --putio-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --qingstor-access-key-id string QingStor Access Key ID @@ -13889,6 +14728,7 @@ and may be set in the config file. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint --s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset) + --s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads --s3-v2-auth If true use v2 authentication --seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled) --seafile-create-library Should rclone create a library if it doesn't exist @@ -13899,6 +14739,8 @@ and may be set in the config file. 
--seafile-url string URL of seafile host to connect to --seafile-user string User name (usually email address) --sftp-ask-password Allow asking for SFTP password when needed + --sftp-chunk-size SizeSuffix Upload and download chunk size (default 32Ki) + --sftp-concurrency int The maximum number of outstanding requests for one file (default 64) --sftp-disable-concurrent-reads If set don't use concurrent reads --sftp-disable-concurrent-writes If set don't use concurrent writes --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available @@ -13911,12 +14753,14 @@ and may be set in the config file. --sftp-known-hosts-file string Optional path to known_hosts file --sftp-md5sum-command string The command used to read md5 hashes --sftp-pass string SSH password, leave blank to use ssh-agent (obscured) - --sftp-path-override string Override path used by SSH connection + --sftp-path-override string Override path used by SSH shell commands --sftp-port int SSH port number (default 22) --sftp-pubkey-file string Optional path to public key file --sftp-server-command string Specifies the path or command to run a sftp server on the remote host + --sftp-set-env SpaceSepList Environment variables to pass to sftp and commands --sftp-set-modtime Set the modified time on the remote if set (default true) --sftp-sha1sum-command string The command used to read sha1 hashes + --sftp-shell-type string The type of SSH shell on remote server, if any --sftp-skip-links Set to skip any symlinks and any other non regular files --sftp-subsystem string Specifies the SSH2 subsystem on the remote host (default "sftp") --sftp-use-fstat If set use fstat instead of stat @@ -13973,6 +14817,7 @@ and may be set in the config file. 
--union-action-policy string Policy to choose upstream on ACTION category (default "epall") --union-cache-time int Cache time of usage and free space (in seconds) (default 120) --union-create-policy string Policy to choose upstream on CREATE category (default "epmfs") + --union-min-free-space SizeSuffix Minimum viable free space for lfs/eplfs policies (default 1Gi) --union-search-policy string Policy to choose upstream on SEARCH category (default "ff") --union-upstreams string List of space separated upstreams --uptobox-access-token string Your access token @@ -13984,7 +14829,7 @@ and may be set in the config file. --webdav-pass string Password (obscured) --webdav-url string URL of http host to connect to --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using + --webdav-vendor string Name of the WebDAV site/service/software you are using --yandex-auth-url string Auth server URL --yandex-client-id string OAuth Client Id --yandex-client-secret string OAuth Client Secret @@ -14660,7 +15505,7 @@ Optional Flags: Arbitrary rclone flags may be specified on the [bisync command line](https://rclone.org/commands/rclone_bisync/), for example -`rclone bsync ./testdir/path1/ gdrive:testdir/path2/ --drive-skip-gdocs -v -v --timeout 10s` +`rclone bisync ./testdir/path1/ gdrive:testdir/path2/ --drive-skip-gdocs -v -v --timeout 10s` Note that interactions of various rclone flags with bisync process flow has not been fully tested yet. @@ -14917,6 +15762,7 @@ Bisync is considered _BETA_ and has been tested with the following backends: - OneDrive - S3 - SFTP +- Yandex Disk It has not been fully tested with other services yet. If it works, or sorta works, please let us know and we'll update the list. @@ -15254,7 +16100,7 @@ consider using the flag Google docs exist as virtual files on Google Drive and cannot be transferred to other filesystems natively. 
While it is possible to export a Google doc to -a normal file (with `.xlsx` extension, for example), it's not possible +a normal file (with `.xlsx` extension, for example), it is not possible to import a normal file back into a Google document. Bisync's handling of Google Doc files is to flag them in the run log output @@ -15755,7 +16601,7 @@ as they can't be used in JSON strings. ### Standard options -Here are the standard options specific to fichier (1Fichier). +Here are the Standard options specific to fichier (1Fichier). #### --fichier-api-key @@ -15770,7 +16616,7 @@ Properties: ### Advanced options -Here are the advanced options specific to fichier (1Fichier). +Here are the Advanced options specific to fichier (1Fichier). #### --fichier-shared-folder @@ -15831,8 +16677,7 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) -See [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) # Alias @@ -15922,7 +16767,7 @@ Copy another local directory to the alias directory called source ### Standard options -Here are the standard options specific to alias (Alias for an existing remote). +Here are the Standard options specific to alias (Alias for an existing remote). #### --alias-remote @@ -16096,7 +16941,7 @@ rclone it will take you to an `amazon.com` page to log in. Your ### Standard options -Here are the standard options specific to amazon cloud drive (Amazon Drive). +Here are the Standard options specific to amazon cloud drive (Amazon Drive). #### --acd-client-id @@ -16126,7 +16971,7 @@ Properties: ### Advanced options -Here are the advanced options specific to amazon cloud drive (Amazon Drive). 
+Here are the Advanced options specific to amazon cloud drive (Amazon Drive). #### --acd-token @@ -16270,8 +17115,7 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) -See [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) # Amazon S3 Storage Providers @@ -16281,9 +17125,14 @@ The S3 backend can be used with a number of different providers: - AWS S3 - Alibaba Cloud (Aliyun) Object Storage System (OSS) - Ceph +- China Mobile Ecloud Elastic Object Storage (EOS) +- Cloudflare R2 +- Arvan Cloud Object Storage (AOS) - DigitalOcean Spaces - Dreamhost +- Huawei OBS - IBM COS S3 +- IDrive e2 - Minio - RackCorp Object Storage - Scaleway @@ -16339,7 +17188,7 @@ name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] -XX / Amazon S3 Compliant Storage Providers including AWS, Ceph, Dreamhost, IBM COS, Minio, and Tencent COS +XX / Amazon S3 Compliant Storage Providers including AWS, Ceph, ChinaMobile, ArvanCloud, Dreamhost, IBM COS, Minio, and Tencent COS \ "s3" [snip] Storage> s3 @@ -16836,7 +17685,7 @@ A simple solution is to set the `--s3-upload-cutoff 0` and force all the files t ### Standard options -Here are the standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS). 
+Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi). #### --s3-provider @@ -16855,12 +17704,22 @@ Properties: - Alibaba Cloud Object Storage System (OSS) formerly Aliyun - "Ceph" - Ceph Object Storage + - "ChinaMobile" + - China Mobile Ecloud Elastic Object Storage (EOS) + - "Cloudflare" + - Cloudflare R2 Storage + - "ArvanCloud" + - Arvan Cloud Object Storage (AOS) - "DigitalOcean" - Digital Ocean Spaces - "Dreamhost" - Dreamhost DreamObjects + - "HuaweiOBS" + - Huawei Object Storage Service - "IBMCOS" - IBM COS S3 + - "IDrive" + - IDrive e2 - "LyveCloud" - Seagate Lyve Cloud - "Minio" @@ -17085,6 +17944,67 @@ Properties: - Amsterdam, The Netherlands - "fr-par" - Paris, France + - "pl-waw" + - Warsaw, Poland + +#### --s3-region + +Region to connect to - the location where your bucket will be created and your data stored. Needs to be the same as your endpoint. + + +Properties: + +- Config: region +- Env Var: RCLONE_S3_REGION +- Provider: HuaweiOBS +- Type: string +- Required: false +- Examples: + - "af-south-1" + - AF-Johannesburg + - "ap-southeast-2" + - AP-Bangkok + - "ap-southeast-3" + - AP-Singapore + - "cn-east-3" + - CN East-Shanghai1 + - "cn-east-2" + - CN East-Shanghai2 + - "cn-north-1" + - CN North-Beijing1 + - "cn-north-4" + - CN North-Beijing4 + - "cn-south-1" + - CN South-Guangzhou + - "ap-southeast-1" + - CN-Hong Kong + - "sa-argentina-1" + - LA-Buenos Aires1 + - "sa-peru-1" + - LA-Lima1 + - "na-mexico-1" + - LA-Mexico City1 + - "sa-chile-1" + - LA-Santiago2 + - "sa-brazil-1" + - LA-Sao Paulo1 + - "ru-northwest-2" + - RU-Moscow2 + +#### --s3-region + +Region to connect to.
+ +Properties: + +- Config: region +- Env Var: RCLONE_S3_REGION +- Provider: Cloudflare +- Type: string +- Required: false +- Examples: + - "auto" + - R2 buckets are automatically distributed across Cloudflare's data centers for low latency. #### --s3-region @@ -17096,7 +18016,7 @@ Properties: - Config: region - Env Var: RCLONE_S3_REGION -- Provider: !AWS,Alibaba,RackCorp,Scaleway,Storj,TencentCOS +- Provider: !AWS,Alibaba,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive - Type: string - Required: false - Examples: @@ -17123,6 +18043,98 @@ Properties: #### --s3-endpoint +Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API. + +Properties: + +- Config: endpoint +- Env Var: RCLONE_S3_ENDPOINT +- Provider: ChinaMobile +- Type: string +- Required: false +- Examples: + - "eos-wuxi-1.cmecloud.cn" + - The default endpoint - a good choice if you are unsure. + - East China (Suzhou) + - "eos-jinan-1.cmecloud.cn" + - East China (Jinan) + - "eos-ningbo-1.cmecloud.cn" + - East China (Hangzhou) + - "eos-shanghai-1.cmecloud.cn" + - East China (Shanghai-1) + - "eos-zhengzhou-1.cmecloud.cn" + - Central China (Zhengzhou) + - "eos-hunan-1.cmecloud.cn" + - Central China (Changsha-1) + - "eos-zhuzhou-1.cmecloud.cn" + - Central China (Changsha-2) + - "eos-guangzhou-1.cmecloud.cn" + - South China (Guangzhou-2) + - "eos-dongguan-1.cmecloud.cn" + - South China (Guangzhou-3) + - "eos-beijing-1.cmecloud.cn" + - North China (Beijing-1) + - "eos-beijing-2.cmecloud.cn" + - North China (Beijing-2) + - "eos-beijing-4.cmecloud.cn" + - North China (Beijing-3) + - "eos-huhehaote-1.cmecloud.cn" + - North China (Huhehaote) + - "eos-chengdu-1.cmecloud.cn" + - Southwest China (Chengdu) + - "eos-chongqing-1.cmecloud.cn" + - Southwest China (Chongqing) + - "eos-guiyang-1.cmecloud.cn" + - Southwest China (Guiyang) + - "eos-xian-1.cmecloud.cn" + - Northwest China (Xian) + - "eos-yunnan.cmecloud.cn" + - Yunnan China (Kunming) + - "eos-yunnan-2.cmecloud.cn" +
- Yunnan China (Kunming-2) + - "eos-tianjin-1.cmecloud.cn" + - Tianjin China (Tianjin) + - "eos-jilin-1.cmecloud.cn" + - Jilin China (Changchun) + - "eos-hubei-1.cmecloud.cn" + - Hubei China (Xiangyan) + - "eos-jiangxi-1.cmecloud.cn" + - Jiangxi China (Nanchang) + - "eos-gansu-1.cmecloud.cn" + - Gansu China (Lanzhou) + - "eos-shanxi-1.cmecloud.cn" + - Shanxi China (Taiyuan) + - "eos-liaoning-1.cmecloud.cn" + - Liaoning China (Shenyang) + - "eos-hebei-1.cmecloud.cn" + - Hebei China (Shijiazhuang) + - "eos-fujian-1.cmecloud.cn" + - Fujian China (Xiamen) + - "eos-guangxi-1.cmecloud.cn" + - Guangxi China (Nanning) + - "eos-anhui-1.cmecloud.cn" + - Anhui China (Huainan) + +#### --s3-endpoint + +Endpoint for Arvan Cloud Object Storage (AOS) API. + +Properties: + +- Config: endpoint +- Env Var: RCLONE_S3_ENDPOINT +- Provider: ArvanCloud +- Type: string +- Required: false +- Examples: + - "s3.ir-thr-at1.arvanstorage.com" + - The default endpoint - a good choice if you are unsure. + - Tehran Iran (Asiatech) + - "s3.ir-tbz-sh1.arvanstorage.com" + - Tabriz Iran (Shahriar) + +#### --s3-endpoint + Endpoint for IBM COS S3 API. Specify if using an IBM COS On Premise. @@ -17325,6 +18337,49 @@ Properties: #### --s3-endpoint +Endpoint for OBS API. 
+ +Properties: + +- Config: endpoint +- Env Var: RCLONE_S3_ENDPOINT +- Provider: HuaweiOBS +- Type: string +- Required: false +- Examples: + - "obs.af-south-1.myhuaweicloud.com" + - AF-Johannesburg + - "obs.ap-southeast-2.myhuaweicloud.com" + - AP-Bangkok + - "obs.ap-southeast-3.myhuaweicloud.com" + - AP-Singapore + - "obs.cn-east-3.myhuaweicloud.com" + - CN East-Shanghai1 + - "obs.cn-east-2.myhuaweicloud.com" + - CN East-Shanghai2 + - "obs.cn-north-1.myhuaweicloud.com" + - CN North-Beijing1 + - "obs.cn-north-4.myhuaweicloud.com" + - CN North-Beijing4 + - "obs.cn-south-1.myhuaweicloud.com" + - CN South-Guangzhou + - "obs.ap-southeast-1.myhuaweicloud.com" + - CN-Hong Kong + - "obs.sa-argentina-1.myhuaweicloud.com" + - LA-Buenos Aires1 + - "obs.sa-peru-1.myhuaweicloud.com" + - LA-Lima1 + - "obs.na-mexico-1.myhuaweicloud.com" + - LA-Mexico City1 + - "obs.sa-chile-1.myhuaweicloud.com" + - LA-Santiago2 + - "obs.sa-brazil-1.myhuaweicloud.com" + - LA-Sao Paulo1 + - "obs.ru-northwest-2.myhuaweicloud.com" + - RU-Moscow2 + +#### --s3-endpoint + Endpoint for Scaleway Object Storage. Properties: @@ -17339,6 +18394,8 @@ Properties: - Amsterdam Endpoint - "s3.fr-par.scw.cloud" - Paris Endpoint + - "s3.pl-waw.scw.cloud" + - Warsaw Endpoint #### --s3-endpoint @@ -17490,7 +18547,7 @@ Properties: - Config: endpoint - Env Var: RCLONE_S3_ENDPOINT -- Provider: !AWS,IBMCOS,TencentCOS,Alibaba,Scaleway,StackPath,Storj,RackCorp +- Provider: !AWS,IBMCOS,IDrive,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,ArvanCloud,Scaleway,StackPath,Storj,RackCorp - Type: string - Required: false - Examples: @@ -17520,6 +18577,8 @@ Properties: - Wasabi AP Northeast 1 (Tokyo) endpoint - "s3.ap-northeast-2.wasabisys.com" - Wasabi AP Northeast 2 (Osaka) endpoint + - "s3.ir-thr-at1.arvanstorage.com" + - ArvanCloud Tehran Iran (Asiatech) endpoint #### --s3-location-constraint @@ -17588,6 +18647,100 @@ Properties: #### --s3-location-constraint +Location constraint - must match endpoint. 
+ +Used when creating buckets only. + +Properties: + +- Config: location_constraint +- Env Var: RCLONE_S3_LOCATION_CONSTRAINT +- Provider: ChinaMobile +- Type: string +- Required: false +- Examples: + - "wuxi1" + - East China (Suzhou) + - "jinan1" + - East China (Jinan) + - "ningbo1" + - East China (Hangzhou) + - "shanghai1" + - East China (Shanghai-1) + - "zhengzhou1" + - Central China (Zhengzhou) + - "hunan1" + - Central China (Changsha-1) + - "zhuzhou1" + - Central China (Changsha-2) + - "guangzhou1" + - South China (Guangzhou-2) + - "dongguan1" + - South China (Guangzhou-3) + - "beijing1" + - North China (Beijing-1) + - "beijing2" + - North China (Beijing-2) + - "beijing4" + - North China (Beijing-3) + - "huhehaote1" + - North China (Huhehaote) + - "chengdu1" + - Southwest China (Chengdu) + - "chongqing1" + - Southwest China (Chongqing) + - "guiyang1" + - Southwest China (Guiyang) + - "xian1" + - Northwest China (Xian) + - "yunnan" + - Yunnan China (Kunming) + - "yunnan2" + - Yunnan China (Kunming-2) + - "tianjin1" + - Tianjin China (Tianjin) + - "jilin1" + - Jilin China (Changchun) + - "hubei1" + - Hubei China (Xiangyan) + - "jiangxi1" + - Jiangxi China (Nanchang) + - "gansu1" + - Gansu China (Lanzhou) + - "shanxi1" + - Shanxi China (Taiyuan) + - "liaoning1" + - Liaoning China (Shenyang) + - "hebei1" + - Hebei China (Shijiazhuang) + - "fujian1" + - Fujian China (Xiamen) + - "guangxi1" + - Guangxi China (Nanning) + - "anhui1" + - Anhui China (Huainan) + +#### --s3-location-constraint + +Location constraint - must match endpoint. + +Used when creating buckets only. + +Properties: + +- Config: location_constraint +- Env Var: RCLONE_S3_LOCATION_CONSTRAINT +- Provider: ArvanCloud +- Type: string +- Required: false +- Examples: + - "ir-thr-at1" + - Tehran Iran (Asiatech) + - "ir-tbz-sh1" + - Tabriz Iran (Shahriar) + +#### --s3-location-constraint + +Location constraint - must match endpoint when using IBM Cloud Public.
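To make the "must match endpoint" rule concrete for providers such as ChinaMobile and ArvanCloud: `endpoint` and `location_constraint` are taken as a pair from matching rows of the lists above. A hypothetical, untested config fragment (placeholder credentials):

```
[ecloud]
type = s3
provider = ChinaMobile
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = eos-wuxi-1.cmecloud.cn
location_constraint = wuxi1
```

Both values here correspond to East China (Suzhou); pairing rows from different regions may cause bucket creation to fail.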
For on-prem COS, do not make a selection from this list, hit enter. @@ -17727,7 +18880,7 @@ Properties: - Config: location_constraint - Env Var: RCLONE_S3_LOCATION_CONSTRAINT -- Provider: !AWS,IBMCOS,Alibaba,RackCorp,Scaleway,StackPath,Storj,TencentCOS +- Provider: !AWS,IBMCOS,IDrive,Alibaba,HuaweiOBS,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,StackPath,Storj,TencentCOS - Type: string - Required: false @@ -17746,7 +18899,7 @@ Properties: - Config: acl - Env Var: RCLONE_S3_ACL -- Provider: !Storj +- Provider: !Storj,Cloudflare - Type: string - Required: false - Examples: @@ -17799,7 +18952,7 @@ Properties: - Config: server_side_encryption - Env Var: RCLONE_S3_SERVER_SIDE_ENCRYPTION -- Provider: AWS,Ceph,Minio +- Provider: AWS,Ceph,ChinaMobile,Minio - Type: string - Required: false - Examples: @@ -17881,6 +19034,42 @@ Properties: #### --s3-storage-class +The storage class to use when storing new objects in ChinaMobile. + +Properties: + +- Config: storage_class +- Env Var: RCLONE_S3_STORAGE_CLASS +- Provider: ChinaMobile +- Type: string +- Required: false +- Examples: + - "" + - Default + - "STANDARD" + - Standard storage class + - "GLACIER" + - Archive storage mode + - "STANDARD_IA" + - Infrequent access storage mode + +#### --s3-storage-class + +The storage class to use when storing new objects in ArvanCloud. + +Properties: + +- Config: storage_class +- Env Var: RCLONE_S3_STORAGE_CLASS +- Provider: ArvanCloud +- Type: string +- Required: false +- Examples: + - "STANDARD" + - Standard storage class + +#### --s3-storage-class + The storage class to use when storing new objects in Tencent COS. Properties: @@ -17923,7 +19112,7 @@ Properties: ### Advanced options -Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS). 
+Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi). #### --s3-bucket-acl @@ -17975,7 +19164,7 @@ Properties: - Config: sse_customer_algorithm - Env Var: RCLONE_S3_SSE_CUSTOMER_ALGORITHM -- Provider: AWS,Ceph,Minio +- Provider: AWS,Ceph,ChinaMobile,Minio - Type: string - Required: false - Examples: @@ -17992,7 +19181,7 @@ Properties: - Config: sse_customer_key - Env Var: RCLONE_S3_SSE_CUSTOMER_KEY -- Provider: AWS,Ceph,Minio +- Provider: AWS,Ceph,ChinaMobile,Minio - Type: string - Required: false - Examples: @@ -18010,7 +19199,7 @@ Properties: - Config: sse_customer_key_md5 - Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_MD5 -- Provider: AWS,Ceph,Minio +- Provider: AWS,Ceph,ChinaMobile,Minio - Type: string - Required: false - Examples: @@ -18055,6 +19244,13 @@ most 10,000 chunks, this means that by default the maximum size of a file you can stream upload is 48 GiB. If you wish to stream upload larger files then you will need to increase chunk_size. +Increasing the chunk size decreases the accuracy of the progress +statistics displayed with the "-P" flag. Rclone treats a chunk as sent once +it has been buffered by the AWS SDK, when in fact it may still be uploading. +A bigger chunk size therefore means a bigger AWS SDK buffer and progress +reporting that deviates further from the truth. + + Properties: - Config: chunk_size @@ -18460,6 +19656,45 @@ Properties: - Type: Tristate - Default: unset +#### --s3-use-presigned-request + +Whether to use a presigned request or PutObject for single part uploads + +If this is false rclone will use PutObject from the AWS SDK to upload +an object. + +Versions of rclone < 1.59 use presigned requests to upload a single +part object and setting this flag to true will re-enable that +functionality.
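The 48 GiB streamed-upload ceiling mentioned for `--s3-chunk-size` above is just the chunk size multiplied by the 10,000-part multipart limit. A quick back-of-envelope check, assuming the default 5Mi chunk size:

```shell
# Multipart uploads allow at most 10,000 parts, so the largest file that
# can be stream-uploaded is chunk_size * 10000.
# Default chunk_size is 5 MiB: 5 * 10000 = 50000 MiB, i.e. 48 whole GiB.
echo $(( 5 * 10000 / 1024 ))
```

Raising `chunk_size` raises this ceiling proportionally; for example, a 48Mi chunk size would allow roughly 468 GiB per streamed file.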
This shouldn't be necessary except in exceptional +circumstances or for testing. + + +Properties: + +- Config: use_presigned_request +- Env Var: RCLONE_S3_USE_PRESIGNED_REQUEST +- Type: bool +- Default: false + +### Metadata + +User metadata is stored as x-amz-meta- keys. S3 metadata keys are case insensitive and are always returned in lower case. + +Here are the possible system metadata items for the s3 backend. + +| Name | Help | Type | Example | Read Only | +|------|------|------|---------|-----------| +| btime | Time of file birth (creation) read from Last-Modified header | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | **Y** | +| cache-control | Cache-Control header | string | no-cache | N | +| content-disposition | Content-Disposition header | string | inline | N | +| content-encoding | Content-Encoding header | string | gzip | N | +| content-language | Content-Language header | string | en-US | N | +| content-type | Content-Type header | string | text/plain | N | +| mtime | Time of last modification, read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N | +| tier | Tier of the object | string | GLACIER | **Y** | + +See the [metadata](https://rclone.org/docs/#metadata) docs for more info. + ## Backend commands Here are the commands specific to the s3 backend. @@ -18470,7 +19705,7 @@ Run them with The help below will explain what arguments each command takes. -See [the "rclone backend" command](https://rclone.org/commands/rclone_backend/) for more +See the [backend](https://rclone.org/commands/rclone_backend/) command for more info on how to pass options and arguments. These can be run on a running backend using the rc command @@ -18585,7 +19820,7 @@ Options: -### Anonymous access to public buckets ### +### Anonymous access to public buckets If you want to use rclone to access a public bucket, configure with a blank `access_key_id` and `secret_access_key`. 
Your config should end @@ -18619,12 +19854,19 @@ You will be able to list and copy data but not upload it. This is the provider used as main example and described in the [configuration](#configuration) section above. ### AWS Snowball Edge -[AWS Snowball](https://aws.amazon.com/snowball/) is a hardware appliance used for transferring -bulk data back to AWS. Its main software interface is S3 object storage. -To use rclone with AWS Snowball Edge devices, configure as standard for an 'S3 Compatible Service' -be sure to set `upload_cutoff = 0` otherwise you will run into authentication header issues as -the snowball device does not support query parameter based authentication. +[AWS Snowball](https://aws.amazon.com/snowball/) is a hardware +appliance used for transferring bulk data back to AWS. Its main +software interface is S3 object storage. + +To use rclone with AWS Snowball Edge devices, configure as standard +for an 'S3 Compatible Service'. + +If using rclone pre v1.59 be sure to set `upload_cutoff = 0` otherwise +you will run into authentication header issues as the snowball device +does not support query parameter based authentication. + +With rclone v1.59 or later setting `upload_cutoff` should not be necessary. eg. ``` @@ -18663,10 +19905,11 @@ server_side_encryption = storage_class = ``` -If you are using an older version of CEPH, e.g. 10.2.x Jewel, then you -may need to supply the parameter `--s3-upload-cutoff 0` or put this in -the config file as `upload_cutoff 0` to work around a bug which causes -uploading of small files to fail. +If you are using an older version of CEPH (e.g. 10.2.x Jewel) and a +version of rclone before v1.59 then you may need to supply the +parameter `--s3-upload-cutoff 0` or put this in the config file as +`upload_cutoff 0` to work around a bug which causes uploading of small +files to fail. Note also that Ceph sometimes puts `/` in the passwords it gives users. 
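A slash in the key is harmless in itself; the catch is only the JSON escaping, which a one-liner can demonstrate (plain Python, nothing rclone-specific; the key value is a placeholder):

```shell
# JSON permits "/" to be written escaped as "\/"; decoding yields the
# plain slash, which is the form that belongs in the rclone config.
python3 -c 'import json; print(json.loads("\"xxxxxx\\/xxxx\""))'
# prints xxxxxx/xxxx
```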
If you read the secret access key using the command line tools @@ -18693,6 +19936,106 @@ removed). Because this is a json dump, it is encoding the `/` as `\/`, so if you use the secret key as `xxxxxx/xxxx` it will work fine. +### Cloudflare R2 {#cloudflare-r2} + +[Cloudflare R2](https://blog.cloudflare.com/r2-open-beta/) Storage +allows developers to store large amounts of unstructured data without +the costly egress bandwidth fees associated with typical cloud storage +services. + +Here is an example of making a Cloudflare R2 configuration. First run: + + rclone config + +This will guide you through an interactive setup process. + +Note that all buckets are private, and all are stored in the same +"auto" region. It is necessary to use Cloudflare workers to share the +content of a bucket publicly. + +``` +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> r2 +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. +... +XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi + \ (s3) +... +Storage> s3 +Option provider. +Choose your S3 provider. +Choose a number from below, or type in your own value. +Press Enter to leave empty. +... +XX / Cloudflare R2 Storage + \ (Cloudflare) +... +provider> Cloudflare +Option env_auth. +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Choose a number from below, or type in your own boolean value (true or false). +Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \ (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \ (true) +env_auth> 1 +Option access_key_id. 
+AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +access_key_id> ACCESS_KEY +Option secret_access_key. +AWS Secret Access Key (password). +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +secret_access_key> SECRET_ACCESS_KEY +Option region. +Region to connect to. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / R2 buckets are automatically distributed across Cloudflare's data centers for low latency. + \ (auto) +region> 1 +Option endpoint. +Endpoint for S3 API. +Required when using an S3 clone. +Enter a value. Press Enter to leave empty. +endpoint> https://ACCOUNT_ID.r2.cloudflarestorage.com +Edit advanced config? +y) Yes +n) No (default) +y/n> n +-------------------- +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +``` + +This will leave your config looking something like: + +``` +[r2] +type = s3 +provider = Cloudflare +access_key_id = ACCESS_KEY +secret_access_key = SECRET_ACCESS_KEY +region = auto +endpoint = https://ACCOUNT_ID.r2.cloudflarestorage.com +acl = private +``` + +Now run `rclone lsf r2:` to see your buckets and `rclone lsf +r2:bucket` to look within a bucket. + ### Dreamhost Dreamhost [DreamObjects](https://www.dreamhost.com/cloud/storage/) is @@ -18762,6 +20105,133 @@ Once configured, you can create a new Space and begin copying files. For example rclone mkdir spaces:my-new-space rclone copy /path/to/files spaces:my-new-space ``` +### Huawei OBS {#huawei-obs} + +Object Storage Service (OBS) provides stable, secure, efficient, and easy-to-use cloud storage that lets you store virtually any volume of unstructured data in any format and access it from anywhere. + +OBS provides an S3 interface, you can copy and modify the following configuration and add it to your rclone configuration file. 
+``` +[obs] +type = s3 +provider = HuaweiOBS +access_key_id = your-access-key-id +secret_access_key = your-secret-access-key +region = af-south-1 +endpoint = obs.af-south-1.myhuaweicloud.com +acl = private +``` + +Or you can also configure via the interactive command line: +``` +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> obs +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. +[snip] + 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi + \ (s3) +[snip] +Storage> 5 +Option provider. +Choose your S3 provider. +Choose a number from below, or type in your own value. +Press Enter to leave empty. +[snip] + 9 / Huawei Object Storage Service + \ (HuaweiOBS) +[snip] +provider> 9 +Option env_auth. +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Choose a number from below, or type in your own boolean value (true or false). +Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \ (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \ (true) +env_auth> 1 +Option access_key_id. +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +access_key_id> your-access-key-id +Option secret_access_key. +AWS Secret Access Key (password). +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +secret_access_key> your-secret-access-key +Option region. +Region to connect to. +Choose a number from below, or type in your own value. +Press Enter to leave empty. 
+ 1 / AF-Johannesburg + \ (af-south-1) + 2 / AP-Bangkok + \ (ap-southeast-2) +[snip] +region> 1 +Option endpoint. +Endpoint for OBS API. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / AF-Johannesburg + \ (obs.af-south-1.myhuaweicloud.com) + 2 / AP-Bangkok + \ (obs.ap-southeast-2.myhuaweicloud.com) +[snip] +endpoint> 1 +Option acl. +Canned ACL used when creating buckets and storing or copying objects. +This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. +For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl +Note that this ACL is applied when server-side copying objects as S3 +doesn't copy the ACL from the source but rather writes a fresh one. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \ (private) +[snip] +acl> 1 +Edit advanced config? +y) Yes +n) No (default) +y/n> +-------------------- +[obs] +type = s3 +provider = HuaweiOBS +access_key_id = your-access-key-id +secret_access_key = your-secret-access-key +region = af-south-1 +endpoint = obs.af-south-1.myhuaweicloud.com +acl = private +-------------------- +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +Current remotes: + +Name Type +==== ==== +obs s3 + +e) Edit existing remote +n) New remote +d) Delete remote +r) Rename remote +c) Copy remote +s) Set configuration password +q) Quit config +e/n/d/r/c/s/q> q +``` ### IBM COS (S3) @@ -18791,12 +20261,12 @@ Choose a number from below, or type in your own value \ "alias" 2 / Amazon Drive \ "amazon cloud drive" - 3 / Amazon S3 Complaint Storage Providers (Dreamhost, Ceph, Minio, IBM COS) + 3 / Amazon S3 Complaint Storage Providers (Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio, IBM COS) \ "s3" 4 / Backblaze B2 \ "b2" [snip] - 23 / http Connection + 23 / HTTP \ "http" Storage> 3 ``` @@ 
-18935,6 +20405,116 @@ acl> 1 rclone delete IBM-COS-XREGION:newbucket/file.txt ``` +### IDrive e2 {#idrive-e2} + +Here is an example of making an [IDrive e2](https://www.idrive.com/e2/) +configuration. First run: + + rclone config + +This will guide you through an interactive setup process. + +``` +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n + +Enter name for new remote. +name> e2 + +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. +[snip] +XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi + \ (s3) +[snip] +Storage> s3 + +Option provider. +Choose your S3 provider. +Choose a number from below, or type in your own value. +Press Enter to leave empty. +[snip] +XX / IDrive e2 + \ (IDrive) +[snip] +provider> IDrive + +Option env_auth. +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Choose a number from below, or type in your own boolean value (true or false). +Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \ (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \ (true) +env_auth> + +Option access_key_id. +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +access_key_id> YOUR_ACCESS_KEY + +Option secret_access_key. +AWS Secret Access Key (password). +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +secret_access_key> YOUR_SECRET_KEY + +Option acl. +Canned ACL used when creating buckets and storing or copying objects. 
+This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. +For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl +Note that this ACL is applied when server-side copying objects as S3 +doesn't copy the ACL from the source but rather writes a fresh one. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \ (private) + / Owner gets FULL_CONTROL. + 2 | The AllUsers group gets READ access. + \ (public-read) + / Owner gets FULL_CONTROL. + 3 | The AllUsers group gets READ and WRITE access. + | Granting this on a bucket is generally not recommended. + \ (public-read-write) + / Owner gets FULL_CONTROL. + 4 | The AuthenticatedUsers group gets READ access. + \ (authenticated-read) + / Object owner gets FULL_CONTROL. + 5 | Bucket owner gets READ access. + | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. + \ (bucket-owner-read) + / Both the object owner and the bucket owner get FULL_CONTROL over the object. + 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. + \ (bucket-owner-full-control) +acl> + +Edit advanced config? +y) Yes +n) No (default) +y/n> + +Configuration complete. +Options: +- type: s3 +- provider: IDrive +- access_key_id: YOUR_ACCESS_KEY +- secret_access_key: YOUR_SECRET_KEY +- endpoint: q9d9.la12.idrivee2-5.com +Keep this "e2" remote? +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +``` + ### Minio [Minio](https://minio.io/) is an object storage server built for cloud application developers and devops. @@ -19048,6 +20628,9 @@ server_side_encryption = storage_class = ``` +[C14 Cold Storage](https://www.online.net/en/storage/c14-cold-storage) is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" `storage_class`. 
+So you can configure your remote with the `storage_class = GLACIER` option to upload directly to C14. Don't forget that in this state you can't read files back; you will need to restore them to the "STANDARD" storage_class first before you can read them (see the "restore" section above).
+
 ### Seagate Lyve Cloud {#lyve}
 
 [Seagate Lyve Cloud](https://www.seagate.com/gb/en/services/cloud/storage/) is an S3
@@ -19073,7 +20656,7 @@ Choose `s3` backend
 Type of storage to configure.
 Choose a number from below, or type in your own value.
 [snip]
-XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS
+XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS
    \ (s3)
 [snip]
 Storage> s3
@@ -19260,7 +20843,7 @@ name> wasabi
 Type of storage to configure.
 Choose a number from below, or type in your own value
 [snip]
-XX / Amazon S3 (also Dreamhost, Ceph, Minio)
+XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio)
    \ "s3"
 [snip]
 Storage> s3
@@ -19374,7 +20957,7 @@ Type of storage to configure.
 Enter a string value. Press Enter for the default ("").
 Choose a number from below, or type in your own value
 [snip]
- 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, and Tencent COS
+ 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Minio, and Tencent COS
    \ "s3"
 [snip]
 Storage> s3
@@ -19464,6 +21047,364 @@ d) Delete this remote
 y/e/d> y
 ```
 
+### China Mobile Ecloud Elastic Object Storage (EOS) {#china-mobile-ecloud-eos}
+
+Here is an example of making a [China Mobile Ecloud Elastic Object Storage (EOS)](https://ecloud.10086.cn/home/product-introduction/eos/)
+configuration. First run:
+
+    rclone config
+
+This will guide you through an interactive setup process.
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> ChinaMobile
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+ ...
+ 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS
+   \ (s3)
+ ...
+Storage> s3
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ ...
+ 4 / China Mobile Ecloud Elastic Object Storage (EOS)
+   \ (ChinaMobile)
+ ...
+provider> ChinaMobile
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+   \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+   \ (true)
+env_auth>
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +access_key_id> accesskeyid +Option secret_access_key. +AWS Secret Access Key (password). +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +secret_access_key> secretaccesskey +Option endpoint. +Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + / The default endpoint - a good choice if you are unsure. + 1 | East China (Suzhou) + \ (eos-wuxi-1.cmecloud.cn) + 2 / East China (Jinan) + \ (eos-jinan-1.cmecloud.cn) + 3 / East China (Hangzhou) + \ (eos-ningbo-1.cmecloud.cn) + 4 / East China (Shanghai-1) + \ (eos-shanghai-1.cmecloud.cn) + 5 / Central China (Zhengzhou) + \ (eos-zhengzhou-1.cmecloud.cn) + 6 / Central China (Changsha-1) + \ (eos-hunan-1.cmecloud.cn) + 7 / Central China (Changsha-2) + \ (eos-zhuzhou-1.cmecloud.cn) + 8 / South China (Guangzhou-2) + \ (eos-guangzhou-1.cmecloud.cn) + 9 / South China (Guangzhou-3) + \ (eos-dongguan-1.cmecloud.cn) +10 / North China (Beijing-1) + \ (eos-beijing-1.cmecloud.cn) +11 / North China (Beijing-2) + \ (eos-beijing-2.cmecloud.cn) +12 / North China (Beijing-3) + \ (eos-beijing-4.cmecloud.cn) +13 / North China (Huhehaote) + \ (eos-huhehaote-1.cmecloud.cn) +14 / Southwest China (Chengdu) + \ (eos-chengdu-1.cmecloud.cn) +15 / Southwest China (Chongqing) + \ (eos-chongqing-1.cmecloud.cn) +16 / Southwest China (Guiyang) + \ (eos-guiyang-1.cmecloud.cn) +17 / Nouthwest China (Xian) + \ (eos-xian-1.cmecloud.cn) +18 / Yunnan China (Kunming) + \ (eos-yunnan.cmecloud.cn) +19 / Yunnan China (Kunming-2) + \ (eos-yunnan-2.cmecloud.cn) +20 / Tianjin China (Tianjin) + \ (eos-tianjin-1.cmecloud.cn) +21 / Jilin China (Changchun) + \ (eos-jilin-1.cmecloud.cn) +22 / Hubei China (Xiangyan) + \ (eos-hubei-1.cmecloud.cn) +23 / Jiangxi China (Nanchang) + \ (eos-jiangxi-1.cmecloud.cn) +24 / Gansu 
China (Lanzhou) + \ (eos-gansu-1.cmecloud.cn) +25 / Shanxi China (Taiyuan) + \ (eos-shanxi-1.cmecloud.cn) +26 / Liaoning China (Shenyang) + \ (eos-liaoning-1.cmecloud.cn) +27 / Hebei China (Shijiazhuang) + \ (eos-hebei-1.cmecloud.cn) +28 / Fujian China (Xiamen) + \ (eos-fujian-1.cmecloud.cn) +29 / Guangxi China (Nanning) + \ (eos-guangxi-1.cmecloud.cn) +30 / Anhui China (Huainan) + \ (eos-anhui-1.cmecloud.cn) +endpoint> 1 +Option location_constraint. +Location constraint - must match endpoint. +Used when creating buckets only. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / East China (Suzhou) + \ (wuxi1) + 2 / East China (Jinan) + \ (jinan1) + 3 / East China (Hangzhou) + \ (ningbo1) + 4 / East China (Shanghai-1) + \ (shanghai1) + 5 / Central China (Zhengzhou) + \ (zhengzhou1) + 6 / Central China (Changsha-1) + \ (hunan1) + 7 / Central China (Changsha-2) + \ (zhuzhou1) + 8 / South China (Guangzhou-2) + \ (guangzhou1) + 9 / South China (Guangzhou-3) + \ (dongguan1) +10 / North China (Beijing-1) + \ (beijing1) +11 / North China (Beijing-2) + \ (beijing2) +12 / North China (Beijing-3) + \ (beijing4) +13 / North China (Huhehaote) + \ (huhehaote1) +14 / Southwest China (Chengdu) + \ (chengdu1) +15 / Southwest China (Chongqing) + \ (chongqing1) +16 / Southwest China (Guiyang) + \ (guiyang1) +17 / Nouthwest China (Xian) + \ (xian1) +18 / Yunnan China (Kunming) + \ (yunnan) +19 / Yunnan China (Kunming-2) + \ (yunnan2) +20 / Tianjin China (Tianjin) + \ (tianjin1) +21 / Jilin China (Changchun) + \ (jilin1) +22 / Hubei China (Xiangyan) + \ (hubei1) +23 / Jiangxi China (Nanchang) + \ (jiangxi1) +24 / Gansu China (Lanzhou) + \ (gansu1) +25 / Shanxi China (Taiyuan) + \ (shanxi1) +26 / Liaoning China (Shenyang) + \ (liaoning1) +27 / Hebei China (Shijiazhuang) + \ (hebei1) +28 / Fujian China (Xiamen) + \ (fujian1) +29 / Guangxi China (Nanning) + \ (guangxi1) +30 / Anhui China (Huainan) + \ (anhui1) +location_constraint> 1 +Option acl. 
+Canned ACL used when creating buckets and storing or copying objects. +This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. +For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl +Note that this ACL is applied when server-side copying objects as S3 +doesn't copy the ACL from the source but rather writes a fresh one. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \ (private) + / Owner gets FULL_CONTROL. + 2 | The AllUsers group gets READ access. + \ (public-read) + / Owner gets FULL_CONTROL. + 3 | The AllUsers group gets READ and WRITE access. + | Granting this on a bucket is generally not recommended. + \ (public-read-write) + / Owner gets FULL_CONTROL. + 4 | The AuthenticatedUsers group gets READ access. + \ (authenticated-read) + / Object owner gets FULL_CONTROL. +acl> private +Option server_side_encryption. +The server-side encryption algorithm used when storing this object in S3. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / None + \ () + 2 / AES256 + \ (AES256) +server_side_encryption> +Option storage_class. +The storage class to use when storing new objects in ChinaMobile. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / Default + \ () + 2 / Standard storage class + \ (STANDARD) + 3 / Archive storage mode + \ (GLACIER) + 4 / Infrequent access storage mode + \ (STANDARD_IA) +storage_class> +Edit advanced config? 
+y) Yes
+n) No (default)
+y/n> n
+--------------------
+[ChinaMobile]
+type = s3
+provider = ChinaMobile
+access_key_id = accesskeyid
+secret_access_key = secretaccesskey
+endpoint = eos-wuxi-1.cmecloud.cn
+location_constraint = wuxi1
+acl = private
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+### ArvanCloud {#arvan-cloud}
+
+[ArvanCloud](https://www.arvancloud.com/en/products/cloud-storage) Object Storage goes beyond the limits of traditional file storage.
+It gives you access to backups and archived files and allows sharing.
+Files such as in-app profile images, files uploaded by users, or scanned documents can be stored securely and easily in the Object Storage service.
+
+ArvanCloud provides an S3 interface which can be configured for use with
+rclone like this.
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+n/s> n
+name> ArvanCloud
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio)
+   \ "s3"
+[snip]
+Storage> s3
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own value
+ 1 / Enter AWS credentials in the next step
+   \ "false"
+ 2 / Get AWS credentials from the environment (env vars or IAM)
+   \ "true"
+env_auth> 1
+AWS Access Key ID - leave blank for anonymous access or runtime credentials.
+access_key_id> YOURACCESSKEY
+AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
+secret_access_key> YOURSECRETACCESSKEY
+Region to connect to.
+Choose a number from below, or type in your own value
+   / The default endpoint - a good choice if you are unsure.
+ 1 | US Region, Northern Virginia, or Pacific Northwest.
+   | Leave location constraint empty.
+ \ "us-east-1" +[snip] +region> +Endpoint for S3 API. +Leave blank if using ArvanCloud to use the default endpoint for the region. +Specify if using an S3 clone such as Ceph. +endpoint> s3.arvanstorage.com +Location constraint - must be set to match the Region. Used when creating buckets only. +Choose a number from below, or type in your own value + 1 / Empty for Iran-Tehran Region. + \ "" +[snip] +location_constraint> +Canned ACL used when creating buckets and/or storing objects in S3. +For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl +Choose a number from below, or type in your own value + 1 / Owner gets FULL_CONTROL. No one else has access rights (default). + \ "private" +[snip] +acl> +The server-side encryption algorithm used when storing this object in S3. +Choose a number from below, or type in your own value + 1 / None + \ "" + 2 / AES256 + \ "AES256" +server_side_encryption> +The storage class to use when storing objects in S3. +Choose a number from below, or type in your own value + 1 / Default + \ "" + 2 / Standard storage class + \ "STANDARD" +storage_class> +Remote config +-------------------- +[ArvanCloud] +env_auth = false +access_key_id = YOURACCESSKEY +secret_access_key = YOURSECRETACCESSKEY +region = ir-thr-at1 +endpoint = s3.arvanstorage.com +location_constraint = +acl = +server_side_encryption = +storage_class = +-------------------- +y) Yes this is OK +e) Edit this remote +d) Delete this remote +y/e/d> y +``` + +This will leave the config file looking like this. 
+ +``` +[ArvanCloud] +type = s3 +provider = ArvanCloud +env_auth = false +access_key_id = YOURACCESSKEY +secret_access_key = YOURSECRETACCESSKEY +region = +endpoint = s3.arvanstorage.com +location_constraint = +acl = +server_side_encryption = +storage_class = +``` + ### Tencent COS {#tencent-cos} [Tencent Cloud Object Storage (COS)](https://intl.cloud.tencent.com/product/cos) is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable, massive, convenient, low-delay and low-cost. @@ -19497,7 +21438,7 @@ Choose a number from below, or type in your own value \ "alias" 3 / Amazon Drive \ "amazon cloud drive" - 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, and Tencent COS + 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Minio, and Tencent COS \ "s3" [snip] Storage> s3 @@ -19710,8 +21651,7 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) -See [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) # Backblaze B2 @@ -19883,6 +21823,11 @@ the file instead of hiding it. Old versions of files, where available, are visible using the `--b2-versions` flag. +It is also possible to view a bucket as it was at a certain point in time, +using the `--b2-version-at` flag. This will show the file versions as they +were at that time, showing files that have been deleted afterwards, and +hiding files that were created since. 
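
For example, to list a bucket's contents as they were at a given moment you might run something like the following (the bucket name and timestamp here are illustrative; the flag takes a time value such as an RFC 3339 style timestamp):

    rclone ls --b2-version-at 2022-05-01T12:00:00Z b2:bucket/path

Note that with `--b2-version-at` set the remote behaves as read-only, as described under the flag's documentation below.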
+ If you wish to remove all the old versions then you can use the `rclone cleanup remote:bucket` command which will delete all the old versions of files, leaving the current ones intact. You can also @@ -20033,7 +21978,7 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx ### Standard options -Here are the standard options specific to b2 (Backblaze B2). +Here are the Standard options specific to b2 (Backblaze B2). #### --b2-account @@ -20070,7 +22015,7 @@ Properties: ### Advanced options -Here are the advanced options specific to b2 (Backblaze B2). +Here are the Advanced options specific to b2 (Backblaze B2). #### --b2-endpoint @@ -20120,6 +22065,20 @@ Properties: - Type: bool - Default: false +#### --b2-version-at + +Show file versions as they were at the specified time. + +Note that when using this no file write operations are permitted, +so you can't upload files or delete them. + +Properties: + +- Config: version_at +- Env Var: RCLONE_B2_VERSION_AT +- Type: Time +- Default: off + #### --b2-upload-cutoff Cutoff for switching to chunked upload. @@ -20271,8 +22230,7 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) -See [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) # Box @@ -20538,7 +22496,7 @@ the `root_folder_id` in the config. ### Standard options -Here are the standard options specific to box (Box). +Here are the Standard options specific to box (Box). #### --box-client-id @@ -20612,7 +22570,7 @@ Properties: ### Advanced options -Here are the advanced options specific to box (Box). +Here are the Advanced options specific to box (Box). 
#### --box-token @@ -20737,8 +22695,7 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) -See [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) # Cache (DEPRECATED) @@ -21044,7 +23001,7 @@ Params: ### Standard options -Here are the standard options specific to cache (Cache a remote). +Here are the Standard options specific to cache (Cache a remote). #### --cache-remote @@ -21160,7 +23117,7 @@ Properties: ### Advanced options -Here are the advanced options specific to cache (Cache a remote). +Here are the Advanced options specific to cache (Cache a remote). #### --cache-plex-token @@ -21409,7 +23366,7 @@ Run them with The help below will explain what arguments each command takes. -See [the "rclone backend" command](https://rclone.org/commands/rclone_backend/) for more +See the [backend](https://rclone.org/commands/rclone_backend/) command for more info on how to pass options and arguments. These can be run on a running backend using the rc command @@ -21733,7 +23690,7 @@ Changing `transactions` is dangerous and requires explicit migration. ### Standard options -Here are the standard options specific to chunker (Transparently chunk/split large files). +Here are the Standard options specific to chunker (Transparently chunk/split large files). #### --chunker-remote @@ -21792,7 +23749,7 @@ Properties: ### Advanced options -Here are the advanced options specific to chunker (Transparently chunk/split large files). +Here are the Advanced options specific to chunker (Transparently chunk/split large files). #### --chunker-name-format @@ -22036,7 +23993,7 @@ as they can't be used in JSON strings. 
### Standard options -Here are the standard options specific to sharefile (Citrix Sharefile). +Here are the Standard options specific to sharefile (Citrix Sharefile). #### --sharefile-root-folder-id @@ -22065,7 +24022,7 @@ Properties: ### Advanced options -Here are the advanced options specific to sharefile (Citrix Sharefile). +Here are the Advanced options specific to sharefile (Citrix Sharefile). #### --sharefile-upload-cutoff @@ -22137,8 +24094,7 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) -See [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) # Crypt @@ -22556,7 +24512,7 @@ check the checksums properly. ### Standard options -Here are the standard options specific to crypt (Encrypt/Decrypt a remote). +Here are the Standard options specific to crypt (Encrypt/Decrypt a remote). #### --crypt-remote @@ -22641,7 +24597,7 @@ Properties: ### Advanced options -Here are the advanced options specific to crypt (Encrypt/Decrypt a remote). +Here are the Advanced options specific to crypt (Encrypt/Decrypt a remote). #### --crypt-server-side-across-configs @@ -22721,6 +24677,12 @@ Properties: - Encode using base32768. Suitable if your remote counts UTF-16 or - Unicode codepoint instead of UTF-8 byte length. (Eg. Onedrive) +### Metadata + +Any metadata supported by the underlying remote is read and written. + +See the [metadata](https://rclone.org/docs/#metadata) docs for more info. + ## Backend commands Here are the commands specific to the crypt backend. @@ -22731,7 +24693,7 @@ Run them with The help below will explain what arguments each command takes. 
-See [the "rclone backend" command](https://rclone.org/commands/rclone_backend/) for more
+See the [backend](https://rclone.org/commands/rclone_backend/) command for more
 info on how to pass options and arguments.
 
 These can be run on a running backend using the rc command
@@ -22987,7 +24949,7 @@ size of the uncompressed file. The file names should not be changed by anything
 
 ### Standard options
 
-Here are the standard options specific to compress (Compress a remote).
+Here are the Standard options specific to compress (Compress a remote).
 
 #### --compress-remote
 
@@ -23016,7 +24978,7 @@ Properties:
 
 ### Advanced options
 
-Here are the advanced options specific to compress (Compress a remote).
+Here are the Advanced options specific to compress (Compress a remote).
 
 #### --compress-level
 
@@ -23053,6 +25015,170 @@ Properties:
 
 - Type: SizeSuffix
 - Default: 20Mi
 
+### Metadata
+
+Any metadata supported by the underlying remote is read and written.
+
+See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
+
+
+
+# Combine
+
+The `combine` backend joins remotes together into a single directory
+tree.
+
+For example you might have a remote for images on one provider:
+
+```
+$ rclone tree s3:imagesbucket
+/
+├── image1.jpg
+└── image2.jpg
+```
+
+And a remote for files on another:
+
+```
+$ rclone tree drive:important/files
+/
+├── file1.txt
+└── file2.txt
+```
+
+The `combine` backend can join these together into a synthetic
+directory structure like this:
+
+```
+$ rclone tree combined:
+/
+├── files
+│   ├── file1.txt
+│   └── file2.txt
+└── images
+    ├── image1.jpg
+    └── image2.jpg
+```
+
+You'd do this by specifying an `upstreams` parameter in the config
+like this:
+
+    upstreams = images=s3:imagesbucket files=drive:important/files
+
+During the initial setup with `rclone config` you will specify the
+upstream remotes as a space separated list. The upstream remotes can
+be either local paths or other remotes.
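
As an illustration of how the space-separated upstreams list is interpreted, quoting behaves like shell word splitting. The following Python sketch is purely illustrative (it is not rclone's implementation, and `parse_upstreams` is a hypothetical name):

```python
import shlex

def parse_upstreams(upstreams: str) -> dict:
    """Split a combine-style upstreams string into {directory: remote} pairs.

    Quoting behaves like shell word splitting, so an entry with
    embedded spaces works when the whole dir=remote:path pair is quoted.
    """
    parsed = {}
    for entry in shlex.split(upstreams):
        # Each word has the form dir=remote:path; split on the first '='.
        directory, _, remote = entry.partition("=")
        parsed[directory] = remote
    return parsed

print(parse_upstreams("images=s3:imagesbucket files=drive:important/files"))
# {'images': 's3:imagesbucket', 'files': 'drive:important/files'}
```

This is why a directory name containing spaces must be quoted together with its remote, as in `"dir=remote:path with space"`.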
+
+## Configuration
+
+Here is an example of how to make a combine called `remote` for the
+example above. First run:
+
+    rclone config
+
+This will guide you through an interactive setup process:
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+...
+XX / Combine several remotes into one
+   \ (combine)
+...
+Storage> combine
+Option upstreams.
+Upstreams for combining
+These should be in the form
+    dir=remote:path dir2=remote2:path
+Where before the = is specified the root directory and after is the remote to
+put there.
+Embedded spaces can be added using quotes
+    "dir=remote:path with space" "dir2=remote2:path with space"
+Enter a fs.SpaceSepList value.
+upstreams> images=s3:imagesbucket files=drive:important/files
+--------------------
+[remote]
+type = combine
+upstreams = images=s3:imagesbucket files=drive:important/files
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+### Configuring for Google Drive Shared Drives
+
+Rclone has a convenience feature for making a combine backend for all
+the shared drives you have access to.
+
+Assuming your main (non-shared-drive) Google Drive remote is called
+`drive:` you would run
+
+    rclone backend -o config drives drive:
+
+This would produce something like this:
+
+    [My Drive]
+    type = alias
+    remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
+
+    [Test Drive]
+    type = alias
+    remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
+
+    [AllDrives]
+    type = combine
+    upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
+
+If you then add that config to your config file (find it with `rclone
+config file`) then you can access all the shared drives in one place
+with the `AllDrives:` remote.
+ +See [the Google Drive docs](https://rclone.org/drive/#drives) for full info. + + +### Standard options + +Here are the Standard options specific to combine (Combine several remotes into one). + +#### --combine-upstreams + +Upstreams for combining + +These should be in the form + + dir=remote:path dir2=remote2:path + +Where before the = is specified the root directory and after is the remote to +put there. + +Embedded spaces can be added using quotes + + "dir=remote:path with space" "dir2=remote2:path with space" + + + +Properties: + +- Config: upstreams +- Env Var: RCLONE_COMBINE_UPSTREAMS +- Type: SpaceSepList +- Default: + +### Metadata + +Any metadata supported by the underlying remote is read and written. + +See the [metadata](https://rclone.org/docs/#metadata) docs for more info. + # Dropbox @@ -23234,7 +25360,7 @@ finishes up the last batch using this mode. ### Standard options -Here are the standard options specific to dropbox (Dropbox). +Here are the Standard options specific to dropbox (Dropbox). #### --dropbox-client-id @@ -23264,7 +25390,7 @@ Properties: ### Advanced options -Here are the advanced options specific to dropbox (Dropbox). +Here are the Advanced options specific to dropbox (Dropbox). #### --dropbox-token @@ -23687,7 +25813,7 @@ The ID for "S3 Storage" would be `120673761`. ### Standard options -Here are the standard options specific to filefabric (Enterprise File Fabric). +Here are the Standard options specific to filefabric (Enterprise File Fabric). #### --filefabric-url @@ -23746,7 +25872,7 @@ Properties: ### Advanced options -Here are the advanced options specific to filefabric (Enterprise File Fabric). +Here are the Advanced options specific to filefabric (Enterprise File Fabric). #### --filefabric-token @@ -23844,7 +25970,7 @@ Type of storage to configure. Enter a string value. Press Enter for the default (""). 
Choose a number from below, or type in your own value [snip] -XX / FTP Connection +XX / FTP \ "ftp" [snip] Storage> ftp @@ -23943,7 +26069,7 @@ Just hit a selection number when prompted. ### Standard options -Here are the standard options specific to ftp (FTP Connection). +Here are the Standard options specific to ftp (FTP). #### --ftp-host @@ -24026,7 +26152,7 @@ Properties: ### Advanced options -Here are the advanced options specific to ftp (FTP Connection). +Here are the Advanced options specific to ftp (FTP). #### --ftp-concurrency @@ -24072,6 +26198,17 @@ Properties: - Type: bool - Default: false +#### --ftp-disable-utf8 + +Disable using UTF-8 even if server advertises support. + +Properties: + +- Config: disable_utf8 +- Env Var: RCLONE_FTP_DISABLE_UTF8 +- Type: bool +- Default: false + #### --ftp-writing-mdtm Use MDTM to set modification time (VsFtpd quirk) @@ -24200,8 +26337,7 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) -See [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) The implementation of : `--dump headers`, `--dump bodies`, `--dump auth` for debugging isn't the same as @@ -24506,7 +26642,7 @@ as they can't be used in JSON strings. ### Standard options -Here are the standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)). +Here are the Standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)). #### --gcs-client-id @@ -24781,7 +26917,7 @@ Properties: ### Advanced options -Here are the advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)). 
+Here are the Advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)). #### --gcs-token @@ -24820,6 +26956,40 @@ Properties: - Type: string - Required: false +#### --gcs-no-check-bucket + +If set, don't attempt to check the bucket exists or create it. + +This can be useful when trying to minimise the number of transactions +rclone does if you know the bucket exists already. + + +Properties: + +- Config: no_check_bucket +- Env Var: RCLONE_GCS_NO_CHECK_BUCKET +- Type: bool +- Default: false + +#### --gcs-decompress + +If set this will decompress gzip encoded objects. + +It is possible to upload objects to GCS with "Content-Encoding: gzip" +set. Normally rclone will download these files files as compressed objects. + +If this flag is set then rclone will decompress these files with +"Content-Encoding: gzip" as they are received. This means that rclone +can't check the size and hash but the file contents will be decompressed. + + +Properties: + +- Config: decompress +- Env Var: RCLONE_GCS_DECOMPRESS +- Type: bool +- Default: false + #### --gcs-encoding The encoding for the backend. @@ -24842,8 +27012,7 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) -See [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) # Google Drive @@ -24900,8 +27069,6 @@ Choose a number from below, or type in your own value 5 | does not allow any access to read or download file content. \ "drive.metadata.readonly" scope> 1 -ID of the root folder - leave blank normally. Fill in to access "Computers" folders. (see docs). 
-root_folder_id> Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login. service_account_file> Remote config @@ -25003,7 +27170,7 @@ directories. ### Root folder ID -You can set the `root_folder_id` for rclone. This is the directory +This option has been moved to the advanced section. You can set the `root_folder_id` for rclone. This is the directory (identified by its `Folder ID`) that rclone considers to be the root of your drive. @@ -25351,23 +27518,28 @@ represent the currently available conversions. | Extension | Mime Type | Description | | --------- |-----------| ------------| +| bmp | image/bmp | Windows Bitmap format | | csv | text/csv | Standard CSV format for Spreadsheets | +| doc | application/msword | Classic Word file | | docx | application/vnd.openxmlformats-officedocument.wordprocessingml.document | Microsoft Office Document | | epub | application/epub+zip | E-book format | | html | text/html | An HTML Document | | jpg | image/jpeg | A JPEG Image File | -| json | application/vnd.google-apps.script+json | JSON Text Format | +| json | application/vnd.google-apps.script+json | JSON Text Format for Google Apps scripts | | odp | application/vnd.oasis.opendocument.presentation | Openoffice Presentation | | ods | application/vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet | | ods | application/x-vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet | | odt | application/vnd.oasis.opendocument.text | Openoffice Document | | pdf | application/pdf | Adobe PDF Format | +| pjpeg | image/pjpeg | Progressive JPEG Image | | png | image/png | PNG Image Format| | pptx | application/vnd.openxmlformats-officedocument.presentationml.presentation | Microsoft Office Powerpoint | | rtf | application/rtf | Rich Text Format | | svg | image/svg+xml | Scalable Vector Graphics Format | | tsv | text/tab-separated-values | Standard TSV format for spreadsheets | | txt | text/plain | Plain Text | +| wmf | 
application/x-msmetafile | Windows Meta File | +| xls | application/vnd.ms-excel | Classic Excel file | | xlsx | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet | Microsoft Office Spreadsheet | | zip | application/zip | A ZIP file of HTML, Images CSS | @@ -25387,7 +27559,7 @@ Google Documents. ### Standard options -Here are the standard options specific to drive (Google Drive). +Here are the Standard options specific to drive (Google Drive). #### --drive-client-id @@ -25442,22 +27614,6 @@ Properties: - Allows read-only access to file metadata but - does not allow any access to read or download file content. -#### --drive-root-folder-id - -ID of the root folder. -Leave blank normally. - -Fill in to access "Computers" folders (see docs), or for rclone to use -a non root folder as its starting point. - - -Properties: - -- Config: root_folder_id -- Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID -- Type: string -- Required: false - #### --drive-service-account-file Service Account Credentials JSON file path. @@ -25487,7 +27643,7 @@ Properties: ### Advanced options -Here are the advanced options specific to drive (Google Drive). +Here are the Advanced options specific to drive (Google Drive). #### --drive-token @@ -25526,6 +27682,22 @@ Properties: - Type: string - Required: false +#### --drive-root-folder-id + +ID of the root folder. +Leave blank normally. + +Fill in to access "Computers" folders (see docs), or for rclone to use +a non root folder as its starting point. + + +Properties: + +- Config: root_folder_id +- Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID +- Type: string +- Required: false + #### --drive-service-account-credentials Service Account Credentials JSON blob. @@ -26006,6 +28178,34 @@ Properties: - Type: bool - Default: false +#### --drive-resource-key + +Resource key for accessing a link-shared file. 
+
+If you need to access files shared with a link like this
+
+    https://drive.google.com/drive/folders/XXX?resourcekey=YYY&usp=sharing
+
+Then you will need to use the first part "XXX" as the "root_folder_id"
+and the second part "YYY" as the "resource_key", otherwise you will get
+404 not found errors when trying to access the directory.
+
+See: https://developers.google.com/drive/api/guides/resource-keys
+
+This resource key requirement only applies to a subset of old files.
+
+Note also that opening the folder once in the web interface (with the
+user you've authenticated rclone with) seems to be enough so that the
+resource key is not needed.
+
+
+Properties:
+
+- Config: resource_key
+- Env Var: RCLONE_DRIVE_RESOURCE_KEY
+- Type: string
+- Required: false
+
#### --drive-encoding

The encoding for the backend.

@@ -26029,7 +28229,7 @@ Run them with

The help below will explain what arguments each command takes.

-See [the "rclone backend" command](https://rclone.org/commands/rclone_backend/) for more
+See the [backend](https://rclone.org/commands/rclone_backend/) command for more
info on how to pass options and arguments.

These can be run on a running backend using the rc command

@@ -26131,7 +28331,7 @@ This will return a JSON list of objects like this

With the -o config parameter it will output the list in a format
suitable for adding to a config file to make aliases for all the
-drives found.
+drives found and a combined drive.

[My Drive]
type = alias

@@ -26141,10 +28341,15 @@ drives found.
type = alias
remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:

-Adding this to the rclone config file will cause those team drives to
-be accessible with the aliases shown. This may require manual editing
-of the names.
+    [AllDrives]
+    type = combine
+    remote = "My Drive=My Drive:" "Test Drive=Test Drive:"

+Adding this to the rclone config file will cause those team drives to
+be accessible with the aliases shown.
Any illegal characters will be
+substituted with "_" and duplicate names will have numbers suffixed.
+It will also add a remote called AllDrives which shows all the shared
+drives combined into one directory tree.

### untrash

@@ -26201,6 +28406,18 @@ attempted if possible.

Use the -i flag to see what would be copied before copying.

+### exportformats
+
+Dump the export formats for debug purposes
+
+    rclone backend exportformats remote: [options] [<arguments>+]
+
+### importformats
+
+Dump the import formats for debug purposes
+
+    rclone backend importformats remote: [options] [<arguments>+]
+

## Limitations

@@ -26217,8 +28434,11 @@ and upload the files if you prefer.

### Limitations of Google Docs

-Google docs will appear as size -1 in `rclone ls` and as size 0 in
-anything which uses the VFS layer, e.g. `rclone mount`, `rclone serve`.
+Google docs will appear as size -1 in `rclone ls`, `rclone ncdu` etc,
+and as size 0 in anything which uses the VFS layer, e.g. `rclone mount`
+and `rclone serve`. When calculating directory totals, e.g. in
+`rclone size` and `rclone ncdu`, they will be counted in as empty
+files.

This is because rclone can't find out the size of the Google docs
without downloading them.

@@ -26297,8 +28517,9 @@ enter "Developer Contact Email" (your own email is OK); then click on "Save" (al

Click again on "Credentials" on the left panel to
go back to the "Credentials" screen.

-(PS: if you are a GSuite user, you could also select "Internal" instead
-of "External" above, but this has not been tested/documented so far).
+(PS: if you are a GSuite user, you could also select "Internal" instead
+of "External" above, but this will restrict API use to Google Workspace
+users in your organisation).

6. Click on the "+ CREATE CREDENTIALS" button at the top of the screen,
then select "OAuth client ID".

@@ -26306,14 +28527,18 @@ then select "OAuth client ID".

7. Choose an application type of "Desktop app" and click "Create". (the default name is fine)

8.
It will show you a client ID and client secret. Make a note of these. + + (If you selected "External" at Step 5 continue to "Publish App" in the Steps 9 and 10. + If you chose "Internal" you don't need to publish and can skip straight to + Step 11.) 9. Go to "Oauth consent screen" and press "Publish App" -10. Provide the noted client ID and client secret to rclone. - -11. Click "OAuth consent screen", then click "PUBLISH APP" button and +10. Click "OAuth consent screen", then click "PUBLISH APP" button and confirm, or add your account under "Test users". +11. Provide the noted client ID and client secret to rclone. + Be aware that, due to the "enhanced security" recently introduced by Google, you are theoretically expected to "submit your app for verification" and then wait a few weeks(!) for their response; in practice, you can go right @@ -26552,7 +28777,7 @@ This is similar to the Sharing tab in the Google Photos web interface. ### Standard options -Here are the standard options specific to google photos (Google Photos). +Here are the Standard options specific to google photos (Google Photos). #### --gphotos-client-id @@ -26596,7 +28821,7 @@ Properties: ### Advanced options -Here are the advanced options specific to google photos (Google Photos). +Here are the Advanced options specific to google photos (Google Photos). #### --gphotos-token @@ -26964,7 +29189,7 @@ or by full re-read/re-write of the files. ### Standard options -Here are the standard options specific to hasher (Better checksums for other remotes). +Here are the Standard options specific to hasher (Better checksums for other remotes). #### --hasher-remote @@ -27001,7 +29226,7 @@ Properties: ### Advanced options -Here are the advanced options specific to hasher (Better checksums for other remotes). +Here are the Advanced options specific to hasher (Better checksums for other remotes). 
#### --hasher-auto-size @@ -27014,6 +29239,12 @@ Properties: - Type: SizeSuffix - Default: 0 +### Metadata + +Any metadata supported by the underlying remote is read and written. + +See the [metadata](https://rclone.org/docs/#metadata) docs for more info. + ## Backend commands Here are the commands specific to the hasher backend. @@ -27024,7 +29255,7 @@ Run them with The help below will explain what arguments each command takes. -See [the "rclone backend" command](https://rclone.org/commands/rclone_backend/) for more +See the [backend](https://rclone.org/commands/rclone_backend/) command for more info on how to pass options and arguments. These can be run on a running backend using the rc command @@ -27277,7 +29508,7 @@ Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid ### Standard options -Here are the standard options specific to hdfs (Hadoop distributed file system). +Here are the Standard options specific to hdfs (Hadoop distributed file system). #### --hdfs-namenode @@ -27308,7 +29539,7 @@ Properties: ### Advanced options -Here are the advanced options specific to hdfs (Hadoop distributed file system). +Here are the Advanced options specific to hdfs (Hadoop distributed file system). #### --hdfs-service-principal-name @@ -27364,6 +29595,444 @@ Properties: - No server-side `Move` or `DirMove`. - Checksums not implemented. +# HiDrive + +Paths are specified as `remote:path` + +Paths may be as deep as required, e.g. `remote:directory/subdirectory`. + +The initial setup for hidrive involves getting a token from HiDrive +which you need to do in your browser. +`rclone config` walks you through it. + +## Configuration + +Here is an example of how to make a remote called `remote`. First run: + + rclone config + +This will guide you through an interactive setup process: + +``` +No remotes found - make a new one +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> remote +Type of storage to configure. 
+Choose a number from below, or type in your own value +[snip] +XX / HiDrive + \ "hidrive" +[snip] +Storage> hidrive +OAuth Client Id - Leave blank normally. +client_id> +OAuth Client Secret - Leave blank normally. +client_secret> +Access permissions that rclone should use when requesting access from HiDrive. +Leave blank normally. +scope_access> +Edit advanced config? +y/n> n +Use auto config? +y/n> y +If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=xxxxxxxxxxxxxxxxxxxxxx +Log in and authorize rclone for access +Waiting for code... +Got code +-------------------- +[remote] +type = hidrive +token = {"access_token":"xxxxxxxxxxxxxxxxxxxx","token_type":"Bearer","refresh_token":"xxxxxxxxxxxxxxxxxxxxxxx","expiry":"xxxxxxxxxxxxxxxxxxxxxxx"} +-------------------- +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +``` + +**You should be aware that OAuth-tokens can be used to access your account +and hence should not be shared with other persons.** +See the [below section](#keeping-your-tokens-safe) for more information. + +See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a +machine with no Internet browser available. + +Note that rclone runs a webserver on your local machine to collect the +token as returned from HiDrive. This only runs from the moment it opens +your browser to the moment you get back the verification code. +The webserver runs on `http://127.0.0.1:53682/`. +If local port `53682` is protected by a firewall you may need to temporarily +unblock the firewall to complete authorization. 
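+For reference, the entry this process creates in your configuration
+file is minimal - just the backend type and the OAuth token. A sketch
+of the resulting entry (all token values below are placeholders):
+
+```
+[remote]
+type = hidrive
+token = {"access_token":"xxxx","token_type":"Bearer","refresh_token":"xxxx","expiry":"xxxx"}
+```
+
+As this token grants access to your account, keep the configuration
+file private - see the section on keeping your tokens safe.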
+
+Once configured you can then use `rclone` like this,
+
+List directories in top level of your HiDrive root folder
+
+    rclone lsd remote:
+
+List all the files in your HiDrive filesystem
+
+    rclone ls remote:
+
+To copy a local directory to a HiDrive directory called backup
+
+    rclone copy /home/source remote:backup
+
+### Keeping your tokens safe
+
+Any OAuth-tokens will be stored by rclone in the remote's configuration file as unencrypted text.
+Anyone can use a valid refresh-token to access your HiDrive filesystem without knowing your password.
+Therefore you should make sure no one else can access your configuration.
+
+It is possible to encrypt rclone's configuration file.
+You can find information on securing your configuration file by viewing the [configuration encryption docs](https://rclone.org/docs/#configuration-encryption).
+
+### Invalid refresh token
+
+As can be verified [here](https://developer.hidrive.com/basics-flows/),
+each `refresh_token` (for Native Applications) is valid for 60 days.
+If used to access HiDrive, its validity will be automatically extended.
+
+This means that if you
+
+  * Don't use the HiDrive remote for 60 days
+
+then rclone will return an error message indicating that the refresh
+token is *invalid* or *expired*.
+
+To fix this you will need to authorize rclone to access your HiDrive account again.
+
+Using
+
+    rclone config reconnect remote:
+
+the process is very similar to the initial setup described above.
+
+### Modified time and hashes
+
+HiDrive allows modification times to be set on objects accurate to 1 second.
+
+HiDrive supports [its own hash type](https://static.hidrive.com/dev/0001)
+which is used to verify the integrity of file contents after successful transfers.
+
+### Restricted filename characters
+
+HiDrive cannot store files or folders that include
+`/` (0x2F) or null-bytes (0x00) in their name.
+Any other characters can be used in the names of files or folders.
+Additionally, files or folders cannot be named either of the following: `.` or `..`
+
+Therefore rclone will automatically replace these characters
+if files or folders are stored or accessed with such names.
+
+You can read about how this filename encoding works in general
+[here](https://rclone.org/overview/#restricted-filenames).
+
+Keep in mind that HiDrive only supports file or folder names
+with a length of 255 characters or less.
+
+### Transfers
+
+HiDrive limits file sizes per single request to a maximum of 2 GiB.
+To allow storage of larger files and allow for better upload performance,
+the hidrive backend will use a chunked transfer for files larger than 96 MiB.
+Rclone will upload multiple parts/chunks of the file at the same time.
+Chunks in the process of being uploaded are buffered in memory,
+so you may want to restrict this behaviour on systems with limited resources.
+
+You can customize this behaviour using the following options:
+
+* `chunk_size`: size of file parts
+* `upload_cutoff`: files of this size or larger will use a chunked transfer
+* `upload_concurrency`: number of file-parts to upload at the same time
+
+See the below section about configuration options for more details.
+
+### Root folder
+
+You can set the root folder for rclone.
+This is the directory that rclone considers to be the root of your HiDrive.
+
+Usually, you will leave this blank, and rclone will use the root of the account.
+
+However, you can set this to restrict rclone to a specific folder hierarchy.
+
+This works by prepending the contents of the `root_prefix` option
+to any paths accessed by rclone.
+For example, the following two ways to access the home directory are equivalent:
+
+    rclone lsd --hidrive-root-prefix="/users/test/" remote:path
+
+    rclone lsd remote:/users/test/path
+
+See the below section about configuration options for more details.
+
+### Directory member count
+
+By default, rclone will know the number of directory members contained in a directory.
+For example, `rclone lsd` uses this information. + +The acquisition of this information will result in additional time costs for HiDrive's API. +When dealing with large directory structures, it may be desirable to circumvent this time cost, +especially when this information is not explicitly needed. +For this, the `disable_fetching_member_count` option can be used. + +See the below section about configuration options for more details. + + +### Standard options + +Here are the Standard options specific to hidrive (HiDrive). + +#### --hidrive-client-id + +OAuth Client Id. + +Leave blank normally. + +Properties: + +- Config: client_id +- Env Var: RCLONE_HIDRIVE_CLIENT_ID +- Type: string +- Required: false + +#### --hidrive-client-secret + +OAuth Client Secret. + +Leave blank normally. + +Properties: + +- Config: client_secret +- Env Var: RCLONE_HIDRIVE_CLIENT_SECRET +- Type: string +- Required: false + +#### --hidrive-scope-access + +Access permissions that rclone should use when requesting access from HiDrive. + +Properties: + +- Config: scope_access +- Env Var: RCLONE_HIDRIVE_SCOPE_ACCESS +- Type: string +- Default: "rw" +- Examples: + - "rw" + - Read and write access to resources. + - "ro" + - Read-only access to resources. + +### Advanced options + +Here are the Advanced options specific to hidrive (HiDrive). + +#### --hidrive-token + +OAuth Access Token as a JSON blob. + +Properties: + +- Config: token +- Env Var: RCLONE_HIDRIVE_TOKEN +- Type: string +- Required: false + +#### --hidrive-auth-url + +Auth server URL. + +Leave blank to use the provider defaults. + +Properties: + +- Config: auth_url +- Env Var: RCLONE_HIDRIVE_AUTH_URL +- Type: string +- Required: false + +#### --hidrive-token-url + +Token server url. + +Leave blank to use the provider defaults. 
+ +Properties: + +- Config: token_url +- Env Var: RCLONE_HIDRIVE_TOKEN_URL +- Type: string +- Required: false + +#### --hidrive-scope-role + +User-level that rclone should use when requesting access from HiDrive. + +Properties: + +- Config: scope_role +- Env Var: RCLONE_HIDRIVE_SCOPE_ROLE +- Type: string +- Default: "user" +- Examples: + - "user" + - User-level access to management permissions. + - This will be sufficient in most cases. + - "admin" + - Extensive access to management permissions. + - "owner" + - Full access to management permissions. + +#### --hidrive-root-prefix + +The root/parent folder for all paths. + +Fill in to use the specified folder as the parent for all paths given to the remote. +This way rclone can use any folder as its starting point. + +Properties: + +- Config: root_prefix +- Env Var: RCLONE_HIDRIVE_ROOT_PREFIX +- Type: string +- Default: "/" +- Examples: + - "/" + - The topmost directory accessible by rclone. + - This will be equivalent with "root" if rclone uses a regular HiDrive user account. + - "root" + - The topmost directory of the HiDrive user account + - "" + - This specifies that there is no root-prefix for your paths. + - When using this you will always need to specify paths to this remote with a valid parent e.g. "remote:/path/to/dir" or "remote:root/path/to/dir". + +#### --hidrive-endpoint + +Endpoint for the service. + +This is the URL that API-calls will be made to. + +Properties: + +- Config: endpoint +- Env Var: RCLONE_HIDRIVE_ENDPOINT +- Type: string +- Default: "https://api.hidrive.strato.com/2.1" + +#### --hidrive-disable-fetching-member-count + +Do not fetch number of objects in directories unless it is absolutely necessary. + +Requests may be faster if the number of objects in subdirectories is not fetched. + +Properties: + +- Config: disable_fetching_member_count +- Env Var: RCLONE_HIDRIVE_DISABLE_FETCHING_MEMBER_COUNT +- Type: bool +- Default: false + +#### --hidrive-chunk-size + +Chunksize for chunked uploads. 
+
+Any files larger than the configured cutoff (or files of unknown size) will be uploaded in chunks of this size.
+
+The upper limit for this is 2147483647 bytes (about 2 GiB).
+That is the maximum number of bytes a single upload-operation will support.
+Setting this above the upper limit or to a negative value will cause uploads to fail.
+
+Setting this to larger values may increase the upload speed at the cost of using more memory.
+It can be set to smaller values to save on memory.
+
+Properties:
+
+- Config: chunk_size
+- Env Var: RCLONE_HIDRIVE_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 48Mi
+
+#### --hidrive-upload-cutoff
+
+Cutoff/Threshold for chunked uploads.
+
+Any files larger than this will be uploaded in chunks of the configured chunksize.
+
+The upper limit for this is 2147483647 bytes (about 2 GiB).
+That is the maximum number of bytes a single upload-operation will support.
+Setting this above the upper limit will cause uploads to fail.
+
+Properties:
+
+- Config: upload_cutoff
+- Env Var: RCLONE_HIDRIVE_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 96Mi
+
+#### --hidrive-upload-concurrency
+
+Concurrency for chunked uploads.
+
+This is the upper limit for how many transfers for the same file are running concurrently.
+Setting this to a value smaller than 1 will cause uploads to deadlock.
+
+If you are uploading small numbers of large files over high-speed links
+and these uploads do not fully utilize your bandwidth, then increasing
+this may help to speed up the transfers.
+
+Properties:
+
+- Config: upload_concurrency
+- Env Var: RCLONE_HIDRIVE_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 4
+
+#### --hidrive-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+ +Properties: + +- Config: encoding +- Env Var: RCLONE_HIDRIVE_ENCODING +- Type: MultiEncoder +- Default: Slash,Dot + + + +## Limitations + +### Symbolic links + +HiDrive is able to store symbolic links (*symlinks*) by design, +for example, when unpacked from a zip archive. + +There exists no direct mechanism to manage native symlinks in remotes. +As such this implementation has chosen to ignore any native symlinks present in the remote. +rclone will not be able to access or show any symlinks stored in the hidrive-remote. +This means symlinks cannot be individually removed, copied, or moved, +except when removing, copying, or moving the parent folder. + +*This does not affect the `.rclonelink`-files +that rclone uses to encode and store symbolic links.* + +### Sparse files + +It is possible to store sparse files in HiDrive. + +Note that copying a sparse file will expand the holes +into null-byte (0x00) regions that will then consume disk space. +Likewise, when downloading a sparse file, +the resulting file will have null-byte regions in the place of file holes. + # HTTP The HTTP remote is a read only remote for reading files of a @@ -27413,7 +30082,7 @@ name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] -XX / http Connection +XX / HTTP \ "http" [snip] Storage> http @@ -27487,11 +30156,11 @@ or: ### Standard options -Here are the standard options specific to http (http Connection). +Here are the Standard options specific to http (HTTP). #### --http-url -URL of http host to connect to. +URL of HTTP host to connect to. E.g. "https://example.com", or "https://user:pass@example.com" to use a username and password. @@ -27504,7 +30173,7 @@ Properties: ### Advanced options -Here are the advanced options specific to http (http Connection). +Here are the Advanced options specific to http (HTTP). 
#### --http-headers

@@ -27581,8 +30250,7 @@ this capability cannot determine free space for an rclone mount or

use policy `mfs` (most free space) as a member of an rclone union
remote.

-See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
-See [rclone about](https://rclone.org/commands/rclone_about/)
+See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)

# Hubic

@@ -27690,7 +30358,7 @@ are the same.

### Standard options

-Here are the standard options specific to hubic (Hubic).
+Here are the Standard options specific to hubic (Hubic).

#### --hubic-client-id

@@ -27720,7 +30388,7 @@ Properties:

### Advanced options

-Here are the advanced options specific to hubic (Hubic).
+Here are the Advanced options specific to hubic (Hubic).

#### --hubic-token

@@ -27818,6 +30486,279 @@ The Swift API doesn't return a correct MD5SUM for segmented files

(Dynamic or Static Large Objects) so rclone won't check or use the
MD5SUM for these.

+# Internet Archive
+
+The Internet Archive backend utilizes Items on [archive.org](https://archive.org/).
+
+Refer to [IAS3 API documentation](https://archive.org/services/docs/api/ias3.html) for the API this backend uses.
+
+Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
+command.) You may put subdirectories in too, e.g. `remote:item/path/to/dir`.
+
+Once you have made a remote you can use it like this:
+
+Unlike S3, listing all items uploaded by you isn't supported.
+
+Make a new item
+
+    rclone mkdir remote:item
+
+List the contents of an item
+
+    rclone ls remote:item
+
+Sync `/home/local/directory` to the remote item, deleting any excess
+files in the item.
+
+    rclone sync -i /home/local/directory remote:item
+
+## Notes
+
+Because of Internet Archive's architecture, it enqueues write operations (and extra post-processing) in a per-item queue. You can check an item's queue at https://catalogd.archive.org/history/item-name-here . Because of that, uploads and deletes will not show up immediately and will take some time to become visible.
+The per-item queue is enqueued to another queue, the Item Deriver Queue. [You can check the status of the Item Deriver Queue here.](https://catalogd.archive.org/catalog.php?whereami=1) This queue has a limit and may block you from uploading, or even deleting. For better behavior, you should avoid uploading a lot of small files.
+
+You can optionally wait for the server's processing to finish by setting a non-zero value for the `wait_archive` key.
+By making it wait, rclone can do normal file comparison.
+Make sure to set a large enough value (e.g. `30m0s` for smaller files) as it can take a long time depending on the server's queue.
+
+## About metadata
+
+This backend supports setting, updating and reading metadata of each file.
+The metadata will appear as file metadata on Internet Archive.
+However, some fields are reserved by both Internet Archive and rclone.
+
+The following are reserved by Internet Archive:
+- `name`
+- `source`
+- `size`
+- `md5`
+- `crc32`
+- `sha1`
+- `format`
+- `old_version`
+- `viruscheck`
+
+Trying to set values for these keys is ignored with a warning.
+The only exception is `mtime`: setting it behaves identically to setting the ModTime.
+
+rclone reserves all the keys starting with `rclone-`. Setting values for these keys will give you warnings, but the values are set as requested.
+
+If there are multiple values for a key, only the first one is returned.
+This is a limitation of rclone, which supports only one value per key.
+It can be triggered when you do a server-side copy.
+
+Reading metadata will also provide custom (neither standard nor reserved) keys.
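+Putting the notes above together, a remote that waits for the per-item
+queue to settle after each write could be sketched in rclone.conf like
+this (the key values are placeholders and `30m0s` is just an example
+timeout, not a recommendation):
+
+```
+[ia]
+type = internetarchive
+access_key_id = XXXX
+secret_access_key = XXXX
+# Wait for archive.org's post-processing tasks to finish
+# so that file comparisons after upload see consistent data.
+wait_archive = 30m0s
+```
+
+With `wait_archive` left at the default `0s`, writes return immediately
+and the changes become visible only once the server's queue catches up.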
+
+## Configuration
+
+Here is an example of making an internetarchive configuration.
+
+First run
+
+    rclone config
+
+This will guide you through an interactive setup process.
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+XX / InternetArchive Items
+   \ (internetarchive)
+Storage> internetarchive
+Option access_key_id.
+IAS3 Access Key.
+Leave blank for anonymous access.
+You can find one here: https://archive.org/account/s3.php
+Enter a value. Press Enter to leave empty.
+access_key_id> XXXX
+Option secret_access_key.
+IAS3 Secret Key (password).
+Leave blank for anonymous access.
+Enter a value. Press Enter to leave empty.
+secret_access_key> XXXX
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> y
+Option endpoint.
+IAS3 Endpoint.
+Leave blank for default value.
+Enter a string value. Press Enter for the default (https://s3.us.archive.org).
+endpoint>
+Option front_endpoint.
+Host of InternetArchive Frontend.
+Leave blank for default value.
+Enter a string value. Press Enter for the default (https://archive.org).
+front_endpoint>
+Option disable_checksum.
+Don't store MD5 checksum with object metadata.
+Normally rclone will calculate the MD5 checksum of the input before
+uploading it so it can ask the server to check the object against checksum.
+This is great for data integrity checking but can cause long delays for
+large files to start uploading.
+Enter a boolean value (true or false). Press Enter for the default (true).
+disable_checksum> true
+Option encoding.
+The encoding for the backend.
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+Enter a encoder.MultiEncoder value.
Press Enter for the default (Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot). +encoding> +Edit advanced config? +y) Yes +n) No (default) +y/n> n +-------------------- +[remote] +type = internetarchive +access_key_id = XXXX +secret_access_key = XXXX +-------------------- +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +``` + + +### Standard options + +Here are the Standard options specific to internetarchive (Internet Archive). + +#### --internetarchive-access-key-id + +IAS3 Access Key. + +Leave blank for anonymous access. +You can find one here: https://archive.org/account/s3.php + +Properties: + +- Config: access_key_id +- Env Var: RCLONE_INTERNETARCHIVE_ACCESS_KEY_ID +- Type: string +- Required: false + +#### --internetarchive-secret-access-key + +IAS3 Secret Key (password). + +Leave blank for anonymous access. + +Properties: + +- Config: secret_access_key +- Env Var: RCLONE_INTERNETARCHIVE_SECRET_ACCESS_KEY +- Type: string +- Required: false + +### Advanced options + +Here are the Advanced options specific to internetarchive (Internet Archive). + +#### --internetarchive-endpoint + +IAS3 Endpoint. + +Leave blank for default value. + +Properties: + +- Config: endpoint +- Env Var: RCLONE_INTERNETARCHIVE_ENDPOINT +- Type: string +- Default: "https://s3.us.archive.org" + +#### --internetarchive-front-endpoint + +Host of InternetArchive Frontend. + +Leave blank for default value. + +Properties: + +- Config: front_endpoint +- Env Var: RCLONE_INTERNETARCHIVE_FRONT_ENDPOINT +- Type: string +- Default: "https://archive.org" + +#### --internetarchive-disable-checksum + +Don't ask the server to test against MD5 checksum calculated by rclone. +Normally rclone will calculate the MD5 checksum of the input before +uploading it so it can ask the server to check the object against checksum. +This is great for data integrity checking but can cause long delays for +large files to start uploading. 
+ +Properties: + +- Config: disable_checksum +- Env Var: RCLONE_INTERNETARCHIVE_DISABLE_CHECKSUM +- Type: bool +- Default: true + +#### --internetarchive-wait-archive + +Timeout for waiting for the server's processing tasks (specifically archive and book_op) to finish. +Only enable this if you need writes to be guaranteed to be reflected after the operation returns. +Set to 0 to disable waiting. No error is thrown in case of timeout. + +Properties: + +- Config: wait_archive +- Env Var: RCLONE_INTERNETARCHIVE_WAIT_ARCHIVE +- Type: Duration +- Default: 0s + +#### --internetarchive-encoding + +The encoding for the backend. + +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + +Properties: + +- Config: encoding +- Env Var: RCLONE_INTERNETARCHIVE_ENCODING +- Type: MultiEncoder +- Default: Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot + +### Metadata + +Metadata fields provided by Internet Archive. +If there are multiple values for a key, only the first one is returned. +This is a limitation of rclone, which supports only one value per key. + +The item owner is able to add custom keys; the metadata feature returns all keys, including custom ones. + +Here are the possible system metadata items for the internetarchive backend.
+ +| Name | Help | Type | Example | Read Only | +|------|------|------|---------|-----------| +| crc32 | CRC32 calculated by Internet Archive | string | 01234567 | N | +| format | Name of format identified by Internet Archive | string | Comma-Separated Values | N | +| md5 | MD5 hash calculated by Internet Archive | string | 01234567012345670123456701234567 | N | +| mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N | +| name | Full file path, without the bucket part | filename | backend/internetarchive/internetarchive.go | N | +| old_version | Whether the file was replaced and moved by keep-old-version flag | boolean | true | N | +| rclone-ia-mtime | Time of last modification, managed by Internet Archive | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N | +| rclone-mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N | +| rclone-update-track | Random value used by Rclone for tracking changes inside Internet Archive | string | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa | N | +| sha1 | SHA1 hash calculated by Internet Archive | string | 0123456701234567012345670123456701234567 | N | +| size | File size in bytes | decimal number | 123456 | N | +| source | The source of the file | string | original | N | +| viruscheck | The last time viruscheck process was run for the file (?) | unixtime | 1654191352 | N | + +See the [metadata](https://rclone.org/docs/#metadata) docs for more info. + + + # Jottacloud Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters @@ -27890,60 +30831,83 @@ s) Set configuration password q) Quit config n/s/q> n name> remote +Option Storage. Type of storage to configure. -Enter a string value. Press Enter for the default (""). -Choose a number from below, or type in your own value +Choose a number from below, or type in your own value. 
[snip] XX / Jottacloud - \ "jottacloud" + \ (jottacloud) [snip] Storage> jottacloud -** See help for jottacloud backend at: https://rclone.org/jottacloud/ ** - -Edit advanced config? (y/n) -y) Yes -n) No -y/n> n -Remote config -Use legacy authentication?. -This is only required for certain whitelabel versions of Jottacloud and not recommended for normal users. +Edit advanced config? y) Yes n) No (default) y/n> n - -Generate a personal login token here: https://www.jottacloud.com/web/secure +Option config_type. +Select authentication type. +Choose a number from below, or type in an existing string value. +Press Enter for the default (standard). + / Standard authentication. + 1 | Use this if you're a normal Jottacloud user. + \ (standard) + / Legacy authentication. + 2 | This is only required for certain whitelabel versions of Jottacloud and not recommended for normal users. + \ (legacy) + / Telia Cloud authentication. + 3 | Use this if you are using Telia Cloud. + \ (telia) + / Tele2 Cloud authentication. + 4 | Use this if you are using Tele2 Cloud. + \ (tele2) +config_type> 1 +Personal login token. +Generate here: https://www.jottacloud.com/web/secure Login Token> - -Do you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client? - +Use a non-standard device/mountpoint? +Choosing no, the default, will let you access the storage used for the archive +section of the official Jottacloud client. If you instead want to access the +sync or the backup section, for example, you must choose yes. y) Yes -n) No +n) No (default) y/n> y -Please select the device to use. Normally this will be Jotta -Choose a number from below, or type in an existing value +Option config_device. +The device to use. In standard setup the built-in Jotta device is used, +which contains predefined mountpoints for archive, sync etc. All other devices +are treated as backup devices by the official Jottacloud client. 
You may create +a new one by entering a unique name. +Choose a number from below, or type in your own string value. +Press Enter for the default (DESKTOP-3H31129). 1 > DESKTOP-3H31129 2 > Jotta -Devices> 2 -Please select the mountpoint to user. Normally this will be Archive -Choose a number from below, or type in an existing value +config_device> 2 +Option config_mountpoint. +The mountpoint to use for the built-in device Jotta. +The standard setup is to use the Archive mountpoint. Most other mountpoints +have very limited support in rclone and should generally be avoided. +Choose a number from below, or type in an existing string value. +Press Enter for the default (Archive). 1 > Archive - 2 > Links + 2 > Shared 3 > Sync - -Mountpoints> 1 +config_mountpoint> 1 -------------------- -[jotta] +[remote] type = jottacloud +configVersion = 1 +client_id = jottacli +client_secret = +tokenURL = https://id.jottacloud.com/auth/realms/jottacloud/protocol/openid-connect/token token = {........} +username = 2940e57271a93d987d6f8a21 device = Jotta mountpoint = Archive -configVersion = 1 -------------------- -y) Yes this is OK +y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y ``` + Once configured you can then use `rclone` like this, List directories in top level of your Jottacloud @@ -27960,19 +30924,27 @@ To copy a local directory to an Jottacloud directory called backup ### Devices and Mountpoints -The official Jottacloud client registers a device for each computer you install it on, -and then creates a mountpoint for each folder you select for Backup. -The web interface uses a special device called Jotta for the Archive and Sync mountpoints. +The official Jottacloud client registers a device for each computer you install +it on, and shows them in the backup section of the user interface. For each +folder you select for backup it will create a mountpoint within this device.
+A built-in device called Jotta is special, and contains mountpoints Archive, +Sync and some others, used for corresponding features in official clients. -With rclone you'll want to use the Jotta/Archive device/mountpoint in most cases, however if you -want to access files uploaded by any of the official clients rclone provides the option to select -other devices and mountpoints during config. Note that uploading files is currently not supported -to other devices than Jotta. +With rclone you'll want to use the standard Jotta/Archive device/mountpoint in +most cases. However, you may for example want to access files from the sync or +backup functionality provided by the official clients, and rclone therefore +provides the option to select other devices and mountpoints during config. -The built-in Jotta device may also contain several other mountpoints, such as: Latest, Links, Shared and Trash. -These are special mountpoints with a different internal representation than the "regular" mountpoints. -Rclone will only to a very limited degree support them. Generally you should avoid these, unless you know what you -are doing. +You are allowed to create new devices and mountpoints. All devices except the +built-in Jotta device are treated as backup devices by official Jottacloud +clients, and the mountpoints on them are individual backup sets. + +With the built-in Jotta device, only existing, built-in mountpoints can be +selected. In addition to the mentioned Archive and Sync, it may contain +several other mountpoints such as: Latest, Links, Shared and Trash. All of +these are special mountpoints with a different internal representation than +the "regular" mountpoints. Rclone supports them only to a very limited +degree. Generally you should avoid these, unless you know what you are doing. ### --fast-list @@ -28050,7 +31022,7 @@ and the current usage. ### Advanced options -Here are the advanced options specific to jottacloud (Jottacloud).
+Here are the Advanced options specific to jottacloud (Jottacloud). #### --jottacloud-md5-memory-limit @@ -28249,7 +31221,7 @@ as they can't be used in XML strings. ### Standard options -Here are the standard options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers). +Here are the Standard options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers). #### --koofr-provider @@ -28336,7 +31308,7 @@ Properties: ### Advanced options -Here are the advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers). +Here are the Advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers). #### --koofr-mountid @@ -28683,7 +31655,7 @@ as they can't be used in JSON strings. ### Standard options -Here are the standard options specific to mailru (Mail.ru Cloud). +Here are the Standard options specific to mailru (Mail.ru Cloud). #### --mailru-user @@ -28736,7 +31708,7 @@ Properties: ### Advanced options -Here are the advanced options specific to mailru (Mail.ru Cloud). +Here are the Advanced options specific to mailru (Mail.ru Cloud). #### --mailru-speedup-file-patterns @@ -28972,6 +31944,44 @@ Use `rclone dedupe` to fix duplicated files. ### Failure to log-in +#### Object not found + +If you are connecting to your Mega remote for the first time, +to test access and synchronisation, you may receive an error such as + +``` +Failed to create file system for "my-mega-remote:": +couldn't login: Object (typically, node or user) not found +``` + +The diagnostic steps often recommended in the [rclone forum](https://forum.rclone.org/search?q=mega) +start with the **MEGAcmd** utility. Note that this refers to +the official C++ command from https://github.com/meganz/MEGAcmd +and not the Go-based command from t3rm1n4l/megacmd, +which is no longer maintained.
+ +Follow the instructions for installing MEGAcmd and try accessing +your remote as they recommend. You can establish whether or not +you can log in using MEGAcmd, and obtain diagnostic information +to help you, and search or work with others in the forum. + +``` +MEGA CMD> login me@example.com +Password: +Fetching nodes ... +Loading transfers from local cache +Login complete as me@example.com +me@example.com:/$ +``` + +Note that some users have found issues with passwords containing special +characters. If you cannot log in with rclone, but MEGAcmd logs in +just fine, then consider changing your password temporarily to +pure alphanumeric characters, in case that helps. + + +#### Repeated commands block access + Mega remotes seem to get blocked (reject logins) under "heavy use". We haven't worked out the exact blocking rules but it seems to be related to fast paced, successive rclone commands. @@ -29019,7 +32029,7 @@ have got the remote blocked for a while. ### Standard options -Here are the standard options specific to mega (Mega). +Here are the Standard options specific to mega (Mega). #### --mega-user @@ -29047,7 +32057,7 @@ Properties: ### Advanced options -Here are the advanced options specific to mega (Mega). +Here are the Advanced options specific to mega (Mega). #### --mega-debug @@ -29164,8 +32174,7 @@ set](https://rclone.org/overview/#restricted-characters). - Akamai NetStorage -------------------------------------------------- +# Akamai NetStorage Paths are specified as `remote:` You may put subdirectories in too, e.g. `remote:/path/to/dir`. @@ -29180,6 +32189,8 @@ See all buckets rclone lsd remote: The initial setup for Netstorage involves getting an account and secret. Use `rclone config` to walk you through the setup process. +## Configuration + Here's an example of how to make a remote called `ns1`. 1. To begin the interactive configuration process, enter this command: @@ -29271,28 +32282,31 @@ y/e/d> y This remote is called `ns1` and can now be used.
-### Example operations +## Example operations + Get started with rclone and NetStorage with these examples. For additional rclone commands, visit https://rclone.org/commands/. -##### See contents of a directory in your project +### See contents of a directory in your project rclone lsd ns1:/974012/testing/ -##### Sync the contents local with remote +### Sync the contents local with remote rclone sync . ns1:/974012/testing/ -##### Upload local content to remote +### Upload local content to remote rclone copy notes.txt ns1:/974012/testing/ -##### Delete content on remote +### Delete content on remote rclone delete ns1:/974012/testing/notes.txt -##### Move or copy content between CP codes. +### Move or copy content between CP codes. + Your credentials must have access to two CP codes on the same remote. You can't perform operations between different remotes. rclone move ns1:/974012/testing/notes.txt ns1:/974450/testing2/ +## Features ### Symlink Support @@ -29313,7 +32327,7 @@ With NetStorage, directories can exist in one of two forms: Rclone will intercept all file uploads and mkdir commands for the NetStorage remote and will explicitly issue the mkdir command for each directory in the uploading path. This will help with the interoperability with the other Akamai services such as SFTP and the Content Management Shell (CMShell). Rclone will not guarantee correctness of operations with implicit directories which might have been created as a result of using an upload API directly. -### ListR Feature +### `--fast-list` / ListR support NetStorage remote supports the ListR feature by using the "list" NetStorage API action to return a lexicographical list of all objects within the specified CP code, recursing into subdirectories as they're encountered. 
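For example, a recursive listing that takes advantage of ListR might look like the following sketch (the CP code and path are the placeholder values from the examples above):

```shell
# Recursive listing; with ListR available, rclone can fetch the whole
# tree in fewer API round trips when --fast-list is given.
rclone lsf -R --fast-list ns1:/974012/testing/

# Disable ListR explicitly if its limitations are a problem:
rclone lsf -R --disable ListR ns1:/974012/testing/
```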
@@ -29325,7 +32339,7 @@ There are pros and cons of using the ListR method, refer to [rclone documentatio **Note**: There is a known limitation that "lsf -R" will display number of files in the directory and directory size as -1 when ListR method is used. The workaround is to pass "--disable listR" flag if these numbers are important in the output. -### Purge Feature +### Purge NetStorage remote supports the purge feature by using the "quick-delete" NetStorage API action. The quick-delete action is disabled by default for security reasons and can be enabled for the account through the Akamai portal. Rclone will first try to use quick-delete action for the purge command and if this functionality is disabled then will fall back to a standard delete method. @@ -29334,7 +32348,7 @@ NetStorage remote supports the purge feature by using the "quick-delete" NetStor ### Standard options -Here are the standard options specific to netstorage (Akamai NetStorage). +Here are the Standard options specific to netstorage (Akamai NetStorage). #### --netstorage-host @@ -29377,7 +32391,7 @@ Properties: ### Advanced options -Here are the advanced options specific to netstorage (Akamai NetStorage). +Here are the Advanced options specific to netstorage (Akamai NetStorage). #### --netstorage-protocol @@ -29408,7 +32422,7 @@ Run them with The help below will explain what arguments each command takes. -See [the "rclone backend" command](https://rclone.org/commands/rclone_backend/) for more +See the [backend](https://rclone.org/commands/rclone_backend/) command for more info on how to pass options and arguments. These can be run on a running backend using the rc command @@ -29591,7 +32605,7 @@ untrusted environment such as a CI build server. ### Standard options -Here are the standard options specific to azureblob (Microsoft Azure Blob Storage). +Here are the Standard options specific to azureblob (Microsoft Azure Blob Storage). 
#### --azureblob-account @@ -29688,7 +32702,7 @@ Properties: ### Advanced options -Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage). +Here are the Advanced options specific to azureblob (Microsoft Azure Blob Storage). #### --azureblob-msi-object-id @@ -29955,15 +32969,15 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) -See [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) ## Azure Storage Emulator Support -You can test rclone with storage emulator locally, to do this make sure azure storage emulator -installed locally and set up a new remote with `rclone config` follow instructions described in -introduction, set `use_emulator` config as `true`, you do not need to provide default account name -or key if using emulator. +You can run rclone with a storage emulator (usually _Azurite_). + +To do this, just set up a new remote with `rclone config`, following the instructions described in the introduction, and set the `use_emulator` config option to `true`. You do not need to provide a default account name or an account key. + +Also, if you want to access a storage emulator instance running on a different machine, you can override the _Endpoint_ parameter in the advanced settings, setting it to `http(s)://<host>:<port>/devstoreaccount1` (e.g. `http://10.254.2.5:10000/devstoreaccount1`). # Microsoft OneDrive @@ -30082,24 +33096,45 @@ To copy a local directory to an OneDrive directory called backup ### Getting your own Client ID and Key -You can use your own Client ID if the default (`client_id` left blank) -one doesn't work for you or you see lots of throttling.
The default -Client ID and Key is shared by all rclone users when performing -requests. +rclone uses a default Client ID when talking to OneDrive, unless a custom `client_id` is specified in the config. +The default Client ID and Key are shared by all rclone users when performing requests. -If you are having problems with them (E.g., seeing a lot of throttling), you can get your own -Client ID and Key by following the steps below: +You may choose to create and use your own Client ID, in case the default one does not work well for you. +For example, you might see throttling. + +#### Creating Client ID for OneDrive Personal + +To create your own Client ID, please follow these steps: 1. Open https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade and then click `New registration`. 2. Enter a name for your app, choose account type `Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)`, select `Web` in `Redirect URI`, then type (do not copy and paste) `http://localhost:53682/` and click Register. Copy and keep the `Application (client) ID` under the app name for later use. 3. Under `manage` select `Certificates & secrets`, click `New client secret`. Enter a description (can be anything) and set `Expires` to 24 months. Copy and keep that secret _Value_ for later use (you _won't_ be able to see this value afterwards). 4. Under `manage` select `API permissions`, click `Add a permission` and select `Microsoft Graph` then select `delegated permissions`. -5. Search and select the following permissions: `Files.Read`, `Files.ReadWrite`, `Files.Read.All`, `Files.ReadWrite.All`, `offline_access`, `User.Read`, and optionally `Sites.Read.All` (see below). Once selected click `Add permissions` at the bottom. +5.
Search and select the following permissions: `Files.Read`, `Files.ReadWrite`, `Files.Read.All`, `Files.ReadWrite.All`, `offline_access`, `User.Read` and `Sites.Read.All` (if custom access scopes are configured, select the permissions accordingly). Once selected click `Add permissions` at the bottom. Now the application is complete. Run `rclone config` to create or edit a OneDrive remote. Supply the app ID and password as Client ID and Secret, respectively. rclone will walk you through the remaining steps. -The `Sites.Read.All` permission is required if you need to [search SharePoint sites when configuring the remote](https://github.com/rclone/rclone/pull/5883). However, if that permission is not assigned, you need to set `disable_site_permission` option to true in the advanced options. +The access_scopes option allows you to configure the permissions requested by rclone. +See [Microsoft Docs](https://docs.microsoft.com/en-us/graph/permissions-reference#files-permissions) for more information about the different scopes. + +The `Sites.Read.All` permission is required if you need to [search SharePoint sites when configuring the remote](https://github.com/rclone/rclone/pull/5883). However, if that permission is not assigned, you need to exclude `Sites.Read.All` from your access scopes or set the `disable_site_permission` option to true in the advanced options. + +#### Creating Client ID for OneDrive Business + +The steps for OneDrive Personal may or may not work for OneDrive Business, depending on the security settings of the organization. +A common error is that the publisher of the App is not verified. + +You may try to [verify your account](https://docs.microsoft.com/en-us/azure/active-directory/develop/publisher-verification-overview), or try to limit the App to your organization only, as shown below. + +1. Make sure to create the App with your business account. +2. Follow the steps above to create an App.
However, we need a different account type here: `Accounts in this organizational directory only (*** - Single tenant)`. Note that you can also change the account type after creating the App. +3. Find the [tenant ID](https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-how-to-find-tenant) of your organization. +4. In the rclone config, set `auth_url` to `https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/authorize`. +5. In the rclone config, set `token_url` to `https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token`. + +Note: If you have a special region, you may need a different host in steps 4 and 5. Here are [some hints](https://github.com/rclone/rclone/blob/bc23bf11db1c78c6ebbf8ea538fbebf7058b4176/backend/onedrive/onedrive.go#L86). + ### Modification time and hashes @@ -30158,7 +33193,7 @@ the OneDrive website. ### Standard options -Here are the standard options specific to onedrive (Microsoft OneDrive). +Here are the Standard options specific to onedrive (Microsoft OneDrive). #### --onedrive-client-id @@ -30208,7 +33243,7 @@ Properties: ### Advanced options -Here are the advanced options specific to onedrive (Microsoft OneDrive). +Here are the Advanced options specific to onedrive (Microsoft OneDrive). #### --onedrive-token @@ -30300,6 +33335,28 @@ Properties: - Type: string - Required: false +#### --onedrive-access-scopes + +Set scopes to be requested by rclone. + +Choose or manually enter a custom space-separated list of all the scopes that rclone should request.
+ + +Properties: + +- Config: access_scopes +- Env Var: RCLONE_ONEDRIVE_ACCESS_SCOPES +- Type: SpaceSepList +- Default: Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access +- Examples: + - "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access" + - Read and write access to all resources + - "Files.Read Files.Read.All Sites.Read.All offline_access" + - Read only access to all resources + - "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All offline_access" + - Read and write access to all resources, without the ability to browse SharePoint sites. + - Same as if disable_site_permission was set to true + #### --onedrive-disable-site-permission Disable the request for Sites.Read.All permission. @@ -30591,7 +33648,7 @@ are converted you will no longer need the ignore options above. It is a [known](https://github.com/OneDrive/onedrive-api-docs/issues/1068) issue that Sharepoint (not OneDrive or OneDrive for Business) may return "item not found" errors when users try to replace or delete uploaded files; this seems to -mainly affect Office files (.docx, .xlsx, etc.). As a workaround, you may use +mainly affect Office files (.docx, .xlsx, etc.) and web files (.html, .aspx, etc.). As a workaround, you may use the `--backup-dir ` command line argument so rclone moves the files to be replaced/deleted into a given backup directory (instead of directly replacing/deleting them). For example, to instruct rclone to move the files into @@ -30611,7 +33668,7 @@ Description: Using application 'rclone' is currently not supported for your orga This means that rclone can't use the OneDrive for Business API with your account. You can't do much about it, maybe write an email to your admins. -However, there are other ways to interact with your OneDrive account. Have a look at the webdav backend: https://rclone.org/webdav/#sharepoint +However, there are other ways to interact with your OneDrive account. 
Have a look at the WebDAV backend: https://rclone.org/webdav/#sharepoint ### invalid\_grant (AADSTS50076) #### @@ -30731,7 +33788,7 @@ as they can't be used in JSON strings. ### Standard options -Here are the standard options specific to opendrive (OpenDrive). +Here are the Standard options specific to opendrive (OpenDrive). #### --opendrive-username @@ -30759,7 +33816,7 @@ Properties: ### Advanced options -Here are the advanced options specific to opendrive (OpenDrive). +Here are the Advanced options specific to opendrive (OpenDrive). #### --opendrive-encoding @@ -30806,8 +33863,7 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) -See [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) # QingStor @@ -30950,7 +34006,7 @@ as they can't be used in JSON strings. ### Standard options -Here are the standard options specific to qingstor (QingCloud Object Storage). +Here are the Standard options specific to qingstor (QingCloud Object Storage). #### --qingstor-env-auth @@ -31034,7 +34090,7 @@ Properties: ### Advanced options -Here are the advanced options specific to qingstor (QingCloud Object Storage). +Here are the Advanced options specific to qingstor (QingCloud Object Storage). #### --qingstor-connection-retries @@ -31124,8 +34180,7 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. 
-See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) -See [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) # Sia @@ -31255,7 +34310,7 @@ rclone copy /home/source mySia:backup ### Standard options -Here are the standard options specific to sia (Sia Decentralized Cloud). +Here are the Standard options specific to sia (Sia Decentralized Cloud). #### --sia-api-url @@ -31288,7 +34343,7 @@ Properties: ### Advanced options -Here are the advanced options specific to sia (Sia Decentralized Cloud). +Here are the Advanced options specific to sia (Sia Decentralized Cloud). #### --sia-user-agent @@ -31571,7 +34626,7 @@ as they can't be used in JSON strings. ### Standard options -Here are the standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)). +Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)). #### --swift-env-auth @@ -31811,7 +34866,7 @@ Properties: ### Advanced options -Here are the advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)). +Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)). #### --swift-leave-parts-on-error @@ -32025,6 +35080,13 @@ Deleted files will be moved to the trash. Your subscription level will determine how long items stay in the trash. `rclone cleanup` can be used to empty the trash. +### Emptying the trash + +Due to an API limitation, the `rclone cleanup` command will only work if you +set your username and password in the advanced options for this backend. 
+Since we generally want to avoid storing user passwords in the rclone config +file, we advise you to only set this up if you need the `rclone cleanup` command to work. + ### Root folder ID You can set the `root_folder_id` for rclone. This is the directory @@ -32050,7 +35112,7 @@ the `root_folder_id` in the config. ### Standard options -Here are the standard options specific to pcloud (Pcloud). +Here are the Standard options specific to pcloud (Pcloud). #### --pcloud-client-id @@ -32080,7 +35142,7 @@ Properties: ### Advanced options -Here are the advanced options specific to pcloud (Pcloud). +Here are the Advanced options specific to pcloud (Pcloud). #### --pcloud-token @@ -32164,6 +35226,34 @@ Properties: - "eapi.pcloud.com" - EU region +#### --pcloud-username + +Your pcloud username. + +This is only required when you want to use the cleanup command. Due to a bug +in the pcloud API the required API does not support OAuth authentication so +we have to rely on user password authentication for it. + +Properties: + +- Config: username +- Env Var: RCLONE_PCLOUD_USERNAME +- Type: string +- Required: false + +#### --pcloud-password + +Your pcloud password. + +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + +Properties: + +- Config: password +- Env Var: RCLONE_PCLOUD_PASSWORD +- Type: string +- Required: false + # premiumize.me @@ -32267,7 +35357,7 @@ as they can't be used in JSON strings. ### Standard options -Here are the standard options specific to premiumizeme (premiumize.me). +Here are the Standard options specific to premiumizeme (premiumize.me). #### --premiumizeme-api-key @@ -32285,7 +35375,7 @@ Properties: ### Advanced options -Here are the advanced options specific to premiumizeme (premiumize.me). +Here are the Advanced options specific to premiumizeme (premiumize.me). #### --premiumizeme-encoding @@ -32421,7 +35511,7 @@ as they can't be used in JSON strings. 
### Advanced options -Here are the advanced options specific to putio (Put.io). +Here are the Advanced options specific to putio (Put.io). #### --putio-encoding @@ -32438,6 +35528,15 @@ Properties: +## Limitations + +put.io has rate limiting. When you hit a limit, rclone automatically +retries after waiting the amount of time requested by the server. + +If you want to avoid ever hitting these limits, you may use the +`--tpslimit` flag with a low number. Note that the imposed limits +may be different for different operations, and may change over time. + # Seafile This is a backend for the [Seafile](https://www.seafile.com/) storage service: @@ -32701,7 +35800,7 @@ Versions between 6.0 and 6.3 haven't been tested and might not work properly. ### Standard options -Here are the standard options specific to seafile (seafile). +Here are the Standard options specific to seafile (seafile). #### --seafile-url @@ -32793,7 +35892,7 @@ Properties: ### Advanced options -Here are the advanced options specific to seafile (seafile). +Here are the Advanced options specific to seafile (seafile). #### --seafile-create-library @@ -32829,7 +35928,7 @@ Protocol](https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol). The SFTP backend can be used with a number of different providers: -- C14 +- Hetzner Storage Box - rsync.net @@ -32844,9 +35943,12 @@ would list the home directory of the user configured in the rclone remote config directory for remote machine (i.e. `/`) Note that some SFTP servers will need the leading / - Synology is a -good example of this. rsync.net, on the other hand, requires users to +good example of this. rsync.net and Hetzner, on the other hand, require users to OMIT the leading /. +Note that by default rclone will try to execute shell commands on +the server, see [shell access considerations](#shell-access-considerations). + ## Configuration Here is an example of making an SFTP configuration. 
First run @@ -32865,7 +35967,7 @@ name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] -XX / SSH/SFTP Connection +XX / SSH/SFTP \ "sftp" [snip] Storage> sftp @@ -33063,6 +36165,116 @@ And then at the end of the session These commands can be used in scripts of course. +### Shell access + +Some functionality of the SFTP backend relies on remote shell access, +and the possibility to execute commands. This includes [checksum](#checksum), +and in some cases also [about](#about-command). The shell commands that +must be executed may be different on different types of shells, and also +quoting/escaping of file path arguments containing special characters may +be different. Rclone therefore needs to know what type of shell it is, +and if shell access is available at all. + +Most servers run on some version of Unix, and then a basic Unix shell can +be assumed, without further distinction. Windows 10, Server 2019, and later +can also run an SSH server, which is a port of OpenSSH (see official +[installation guide](https://docs.microsoft.com/en-us/windows-server/administration/openssh/openssh_install_firstuse)). On a Windows server the shell handling is different: although it can also +be set up to use a Unix type shell, e.g. Cygwin bash, the default is to +use Windows Command Prompt (cmd.exe), and PowerShell is a recommended +alternative. All of these behave differently, which rclone must handle. + +Rclone tries to auto-detect what type of shell is used on the server, the +first time you access the SFTP remote. If a remote shell session is +successfully created, it will look for indications that it is CMD or +PowerShell, falling back to Unix if nothing else is detected. +If unable to even create a remote shell session, then shell command +execution will be disabled entirely. The result is stored in the SFTP +remote configuration, in option `shell_type`, so that the auto-detection +only has to be performed once. 
If you manually set a value for this +option before first run, the auto-detection will be skipped, and if +you set a different value later it will override the existing one. +Value `none` can be set to avoid any attempts at executing shell +commands, e.g. if this is not allowed on the server. + +When the server is [rclone serve sftp](https://rclone.org/commands/rclone_serve_sftp/), +the rclone SFTP remote will detect this as a Unix type shell - even +if it is running on Windows. This server does not actually have a shell, +but it accepts input commands matching the specific ones that the +SFTP backend relies on for Unix shells, e.g. `md5sum` and `df`. Also +it handles the string escape rules used for Unix shells. Treating it +as a Unix type shell from an SFTP remote will therefore always be +correct, and support all features. + +#### Shell access considerations + +The shell type auto-detection logic, described above, means that +by default rclone will try to run a shell command the first time +a new SFTP remote is accessed. If you configure an SFTP remote +without a config file, e.g. an [on the fly](https://rclone.org/docs/#backend-path-to-dir) +remote, rclone will have nowhere to store the result, and it +will re-run the command on every access. To avoid this you should +explicitly set the `shell_type` option to the correct value, +or to `none` if you want to prevent rclone from executing any +remote shell commands. + +It is also important to note that, since the shell type decides +how quoting and escaping of file paths used as command-line arguments +are performed, configuring the wrong shell type may leave you exposed +to command injection exploits. Make sure to confirm the auto-detected +shell type, or explicitly set the shell type you know is correct, +or disable shell access until you know. 
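For example, to pre-configure an SFTP remote so that no shell type detection is attempted at all, `shell_type` can be set directly in the configuration file (a sketch; the remote name, host and user are placeholders):

    [example-sftp]
    type = sftp
    host = example.com
    user = sftpuser
    shell_type = none

The same option can be given for an on the fly remote, e.g. `:sftp,host=example.com,user=sftpuser,shell_type=none:path`, which avoids re-running the detection on every access.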
+ +### Checksum + +SFTP does not natively support checksums (file hash), but rclone +is able to use checksumming if the same login has shell access, +and can execute remote commands. If there is a command that can +calculate compatible checksums on the remote system, Rclone can +then be configured to execute this whenever a checksum is needed, +and read back the results. Currently MD5 and SHA-1 are supported. + +Normally this requires an external utility to be available on +the server. By default rclone will try commands `md5sum`, `md5` +and `rclone md5sum` for MD5 checksums, and the first one found usable +will be picked. Same with `sha1sum`, `sha1` and `rclone sha1sum` +commands for SHA-1 checksums. These utilities normally need to +be in the remote's PATH to be found. + +In some cases the shell itself is capable of calculating checksums. +PowerShell is an example of such a shell. If rclone detects that the +remote shell is PowerShell, which means it most probably is a +Windows OpenSSH server, rclone will use a predefined script block +to produce the checksums when no external checksum commands are found +(see [shell access](#shell-access)). This assumes PowerShell version +4.0 or newer. + +The options `md5sum_command` and `sha1sum_command` can be used to customize +the command to be executed for calculation of checksums. You can for +example set a specific path to where md5sum and sha1sum executables +are located, or use them to specify some other tools that print checksums +in a compatible format. The value can include command-line arguments, +or even shell script blocks as with PowerShell. Rclone has subcommands +[md5sum](https://rclone.org/commands/rclone_md5sum/) and [sha1sum](https://rclone.org/commands/rclone_sha1sum/) +that use a compatible format, which means if you have an rclone executable +on the server it can be used. 
As mentioned above, they will be automatically +picked up if found in PATH, but if not you can set something like +`/path/to/rclone md5sum` as the value of option `md5sum_command` to +make sure a specific executable is used. + +Remote checksumming is recommended and enabled by default. The first time +rclone uses an SFTP remote, if the options `md5sum_command` or `sha1sum_command` +are not set, it will check if any of the default commands for each of them, +as described above, can be used. The result will be saved in the remote +configuration, so next time it will use the same commands. Value `none` +will be set if none of the default commands could be used for a specific +algorithm, and this algorithm will not be supported by the remote. + +Disabling the checksumming may be required if you are connecting to SFTP servers +which are not under your control, and to which the execution of remote shell +commands is prohibited. Set the configuration option `disable_hashcheck` +to `true` to disable checksumming entirely, or set `shell_type` to `none` +to disable all functionality based on remote shell command execution. 
If not, but the same login has access to a Unix shell, +where the `df` command is available (e.g. in the remote's PATH), then +this will be used instead. If the server shell is PowerShell, probably +with a Windows OpenSSH server, rclone will use a built-in shell command +(see [shell access](#shell-access)). If none of the above is applicable, +`about` will fail. + ### Standard options -Here are the standard options specific to sftp (SSH/SFTP Connection). +Here are the Standard options specific to sftp (SSH/SFTP). #### --sftp-host @@ -33203,7 +36431,7 @@ Properties: #### --sftp-use-insecure-cipher -Enable the use of insecure ciphers and key exchange methods. +Enable the use of insecure ciphers and key exchange methods. This enables the use of the following insecure ciphers and key exchange methods: @@ -33243,7 +36471,7 @@ Properties: ### Advanced options -Here are the advanced options specific to sftp (SSH/SFTP Connection). +Here are the Advanced options specific to sftp (SSH/SFTP). #### --sftp-known-hosts-file @@ -33281,16 +36509,16 @@ Properties: #### --sftp-path-override -Override path used by SSH connection. +Override path used by SSH shell commands. This allows checksum calculation when SFTP and SSH paths are different. This issue affects among others Synology NAS boxes. -Shared folders can be found in directories representing volumes +E.g. if shared folders can be found in directories representing volumes: rclone sync /home/local/directory remote:/directory --sftp-path-override /volume2/directory -Home directory can be found in a shared folder called "home" +E.g. if home directory can be found in a shared folder called "home": rclone sync /home/local/directory remote:/home/directory --sftp-path-override /volume1/homes/USER/directory @@ -33312,6 +36540,28 @@ Properties: - Type: bool - Default: true +#### --sftp-shell-type + +The type of SSH shell on remote server, if any. + +Leave blank for autodetect. 
+ +Properties: + +- Config: shell_type +- Env Var: RCLONE_SFTP_SHELL_TYPE +- Type: string +- Required: false +- Examples: + - "none" + - No shell access + - "unix" + - Unix shell + - "powershell" + - PowerShell + - "cmd" + - Windows Command Prompt + #### --sftp-md5sum-command The command used to read md5 hashes. @@ -33452,29 +36702,82 @@ Properties: - Type: Duration - Default: 1m0s +#### --sftp-chunk-size + +Upload and download chunk size. + +This controls the maximum packet size used in the SFTP protocol. The +RFC limits this to 32768 bytes (32k), however a lot of servers +support larger sizes and setting it larger will increase transfer +speed dramatically on high latency links. + +Only use a setting higher than 32k if you always connect to the same +server or after sufficiently broad testing. + +For example using the value of 252k with OpenSSH works well with its +maximum packet size of 256k. + +If you get the error "failed to send packet header: EOF" when copying +a large file, try lowering this number. + + +Properties: + +- Config: chunk_size +- Env Var: RCLONE_SFTP_CHUNK_SIZE +- Type: SizeSuffix +- Default: 32Ki + +#### --sftp-concurrency + +The maximum number of outstanding requests for one file + +This controls the maximum number of outstanding requests for one file. +Increasing it will increase throughput on high latency links at the +cost of using more memory. + + +Properties: + +- Config: concurrency +- Env Var: RCLONE_SFTP_CONCURRENCY +- Type: int +- Default: 64 + +#### --sftp-set-env + +Environment variables to pass to sftp and commands + +Set environment variables in the form: + + VAR=value + +to be passed to the sftp client and to any commands run (eg md5sum). 
+ +Pass multiple variables space separated, eg + + VAR1=value VAR2=value + +and pass variables with spaces in them in quotes, eg + + "VAR3=value with space" "VAR4=value with space" VAR5=nospacehere + + + +Properties: + +- Config: set_env +- Env Var: RCLONE_SFTP_SET_ENV +- Type: SpaceSepList +- Default: + ## Limitations -SFTP supports checksums if the same login has shell access and `md5sum` -or `sha1sum` as well as `echo` are in the remote's PATH. -This remote checksumming (file hashing) is recommended and enabled by default. -Disabling the checksumming may be required if you are connecting to SFTP servers -which are not under your control, and to which the execution of remote commands -is prohibited. Set the configuration option `disable_hashcheck` to `true` to -disable checksumming. - -SFTP also supports `about` if the same login has shell -access and `df` are in the remote's PATH. `about` will -return the total space, free space, and used space on the remote -for the disk of the specified path on the remote or, if not set, -the disk of the root on the remote. -`about` will fail if it does not have shell -access or if `df` is not in the remote's PATH. - -Note that some SFTP servers (e.g. Synology) the paths are different for -SSH and SFTP so the hashes can't be calculated properly. For them -using `disable_hashcheck` is a good idea. +On some SFTP servers (e.g. Synology) the paths are different +for SSH and SFTP so the hashes can't be calculated properly. +For them using `disable_hashcheck` is a good idea. The only SSH agent supported under Windows is PuTTY's Pageant. @@ -33489,23 +36792,22 @@ SFTP isn't supported under plan9 until [this issue](https://github.com/pkg/sftp/issues/156) is fixed. Note that since SFTP isn't HTTP based the following flags don't work -with it: `--dump-headers`, `--dump-bodies`, `--dump-auth` +with it: `--dump-headers`, `--dump-bodies`, `--dump-auth`. Note that `--timeout` and `--contimeout` are both supported. 
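As an illustration, several of the SFTP options described above can be combined on one command line (the values here are starting points to tune against your own server, not recommendations):

    rclone copy -vv --sftp-chunk-size 252k --sftp-concurrency 128 \
        --sftp-set-env "VAR1=value VAR2=value" remote:path /local/path

The 252k chunk size matches the OpenSSH maximum packet size mentioned in the `--sftp-chunk-size` documentation; only raise the chunk size above 32k after testing, as not all servers support larger packets.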
- -## C14 {#c14} - -C14 is supported through the SFTP backend. - -See [C14's documentation](https://www.online.net/en/storage/c14-cold-storage) - ## rsync.net {#rsync-net} rsync.net is supported through the SFTP backend. See [rsync.net's documentation of rclone examples](https://www.rsync.net/products/rclone.html). +## Hetzner Storage Box {#hetzner-storage-box} + +Hetzner Storage Boxes are supported through the SFTP backend on port 23. + +See [Hetzner's documentation for details](https://docs.hetzner.com/robot/storage-box/access/access-ssh-rsync-borg#rclone). + # Storj [Storj](https://storj.io) is an encrypted, secure, and @@ -33718,7 +37020,7 @@ y/e/d> y ### Standard options -Here are the standard options specific to storj (Storj Decentralized Cloud Storage). +Here are the Standard options specific to storj (Storj Decentralized Cloud Storage). #### --storj-provider @@ -33918,8 +37220,7 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) -See [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) ## Known issues @@ -34047,7 +37348,7 @@ deleted straight away. ### Standard options -Here are the standard options specific to sugarsync (Sugarsync). +Here are the Standard options specific to sugarsync (Sugarsync). #### --sugarsync-app-id @@ -34102,7 +37403,7 @@ Properties: ### Advanced options -Here are the advanced options specific to sugarsync (Sugarsync). +Here are the Advanced options specific to sugarsync (Sugarsync). 
#### --sugarsync-refresh-token @@ -34204,8 +37505,7 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) -See [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) # Tardigrade @@ -34310,7 +37610,7 @@ as they can't be used in XML strings. ### Standard options -Here are the standard options specific to uptobox (Uptobox). +Here are the Standard options specific to uptobox (Uptobox). #### --uptobox-access-token @@ -34327,7 +37627,7 @@ Properties: ### Advanced options -Here are the advanced options specific to uptobox (Uptobox). +Here are the Advanced options specific to uptobox (Uptobox). #### --uptobox-encoding @@ -34522,7 +37822,7 @@ The policies definition are inspired by [trapexit/mergerfs](https://github.com/t ### Standard options -Here are the standard options specific to union (Union merges the contents of several upstream fs). +Here are the Standard options specific to union (Union merges the contents of several upstream fs). #### --union-upstreams @@ -34583,6 +37883,30 @@ Properties: - Type: int - Default: 120 +### Advanced options + +Here are the Advanced options specific to union (Union merges the contents of several upstream fs). + +#### --union-min-free-space + +Minimum viable free space for lfs/eplfs policies. + +If a remote has less than this much free space then it won't be +considered for use in lfs or eplfs policies. + +Properties: + +- Config: min_free_space +- Env Var: RCLONE_UNION_MIN_FREE_SPACE +- Type: SizeSuffix +- Default: 1Gi + +### Metadata + +Any metadata supported by the underlying remote is read and written. 
+ +See the [metadata](https://rclone.org/docs/#metadata) docs for more info. + # WebDAV @@ -34613,7 +37937,7 @@ name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] -XX / Webdav +XX / WebDAV \ "webdav" [snip] Storage> webdav @@ -34622,7 +37946,7 @@ Choose a number from below, or type in your own value 1 / Connect to example.com \ "https://example.com" url> https://example.com/remote.php/webdav/ -Name of the Webdav site/service/software you are using +Name of the WebDAV site/service/software you are using Choose a number from below, or type in your own value 1 / Nextcloud \ "nextcloud" @@ -34692,7 +38016,7 @@ with them. ### Standard options -Here are the standard options specific to webdav (Webdav). +Here are the Standard options specific to webdav (WebDAV). #### --webdav-url @@ -34709,7 +38033,7 @@ Properties: #### --webdav-vendor -Name of the Webdav site/service/software you are using. +Name of the WebDAV site/service/software you are using. Properties: @@ -34768,7 +38092,7 @@ Properties: ### Advanced options -Here are the advanced options specific to webdav (Webdav). +Here are the Advanced options specific to webdav (WebDAV). #### --webdav-bearer-token-command @@ -35108,7 +38432,7 @@ as they can't be used in JSON strings. ### Standard options -Here are the standard options specific to yandex (Yandex Disk). +Here are the Standard options specific to yandex (Yandex Disk). #### --yandex-client-id @@ -35138,7 +38462,7 @@ Properties: ### Advanced options -Here are the advanced options specific to yandex (Yandex Disk). +Here are the Advanced options specific to yandex (Yandex Disk). #### --yandex-token @@ -35346,7 +38670,7 @@ from filenames during upload. ### Standard options -Here are the standard options specific to zoho (Zoho). +Here are the Standard options specific to zoho (Zoho). 
#### --zoho-client-id @@ -35395,12 +38719,16 @@ Properties: - Europe - "in" - India + - "jp" + - Japan + - "com.cn" + - China - "com.au" - Australia ### Advanced options -Here are the advanced options specific to zoho (Zoho). +Here are the Advanced options specific to zoho (Zoho). #### --zoho-token @@ -35454,6 +38782,18 @@ Properties: +## Setting up your own client_id + +For Zoho we advise you to set up your own client_id. To do so you have to complete the following steps. + +1. Log in to the [Zoho API Console](https://api-console.zoho.com). + +2. Create a new client of type "Server-based Application". The name and website don't matter, but you must add the redirect URL `http://localhost:53682/`. + +3. Once the client is created, you can go to the settings tab and enable it in other regions. + +The client id and client secret can now be used with rclone. + # Local Filesystem Local paths are specified as normal filesystem paths, e.g. `/path/to/wherever`, so @@ -35778,7 +39118,7 @@ where it isn't supported (e.g. Windows) it will be ignored. ### Advanced options -Here are the advanced options specific to local (Local Disk). +Here are the Advanced options specific to local (Local Disk). #### --local-nounc @@ -35788,8 +39128,8 @@ Properties: - Config: nounc - Env Var: RCLONE_LOCAL_NOUNC -- Type: string -- Required: false +- Type: bool +- Default: false - Examples: - "true" - Disables long file names. @@ -36014,6 +39354,31 @@ Properties: - Type: MultiEncoder - Default: Slash,Dot +### Metadata + +Depending on which OS is in use, the local backend may return only some +of the system metadata. Setting system metadata is supported on all +OSes but setting user metadata is only supported on Linux, FreeBSD, +NetBSD, macOS and Solaris. It is **not** supported on Windows yet +([see pkg/xattr#47](https://github.com/pkg/xattr/issues/47)). + +User metadata is stored as extended attributes (which may not be +supported by all file systems) under the "user.*" prefix. 
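A quick way to see what the local backend stores is to copy a file with metadata enabled and list it back (a sketch; the paths and the metadata key are placeholders, and on Windows only some items will appear):

    rclone copyto -M --metadata-set "purpose=demo" /tmp/source.txt /tmp/dest.txt
    rclone lsjson -M /tmp/dest.txt

The `-M`/`--metadata` and `--metadata-set` flags are part of the metadata framework introduced in v1.59.0; see the metadata docs for how keys are mapped.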
+ +Here are the possible system metadata items for the local backend. + +| Name | Help | Type | Example | Read Only | +|------|------|------|---------|-----------| +| atime | Time of last access | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N | +| btime | Time of file birth (creation) | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N | +| gid | Group ID of owner | decimal number | 500 | N | +| mode | File type and mode | octal, unix style | 0100664 | N | +| mtime | Time of last modification | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N | +| rdev | Device ID (if special file) | hexadecimal | 1abc | N | +| uid | User ID of owner | decimal number | 500 | N | + +See the [metadata](https://rclone.org/docs/#metadata) docs for more info. + ## Backend commands Here are the commands specific to the local backend. @@ -36024,7 +39389,7 @@ Run them with The help below will explain what arguments each command takes. -See [the "rclone backend" command](https://rclone.org/commands/rclone_backend/) for more +See the [backend](https://rclone.org/commands/rclone_backend/) command for more info on how to pass options and arguments. 
These can be run on a running backend using the rc command @@ -36048,6 +39413,207 @@ Options: # Changelog +## v1.59.0 - 2022-07-09 + +[See commits](https://github.com/rclone/rclone/compare/v1.58.0...v1.59.0) + +* New backends + * [Combine](/combine) multiple remotes in one directory tree (Nick Craig-Wood) + * [Hidrive](https://rclone.org/hidrive/) (Ovidiu Victor Tatar) + * [Internet Archive](https://rclone.org/internetarchive/) (Lesmiscore (Naoya Ozaki)) + * New S3 providers + * [ArvanCloud AOS](https://rclone.org/s3/#arvan-cloud) (ehsantdy) + * [Cloudflare R2](https://rclone.org/s3/#cloudflare-r2) (Nick Craig-Wood) + * [Huawei OBS](https://rclone.org/s3/#huawei-obs) (m00594701) + * [IDrive e2](https://rclone.org/s3/#idrive-e2) (vyloy) +* New commands + * [test makefile](https://rclone.org/commands/rclone_test_makefile/): Create a single file for testing (Nick Craig-Wood) +* New Features + * [Metadata framework](https://rclone.org/docs/#metadata) to read and write system and user metadata on backends (Nick Craig-Wood) + * Implemented initially for `local`, `s3` and `internetarchive` backends + * `--metadata`/`-M` flag to control whether metadata is copied + * `--metadata-set` flag to specify metadata for uploads + * Thanks to [Manz Solutions](https://manz-solutions.at/) for sponsoring this work. 
+ * build + * Update to go1.18 and make go1.16 the minimum required version (Nick Craig-Wood) + * Update android go build to 1.18.x and NDK to 23.1.7779620 (Nick Craig-Wood) + * All Windows binaries are now built without CGO (Nick Craig-Wood) + * Add `linux/arm/v6` to docker images (Nick Craig-Wood) + * A huge number of fixes found with [staticcheck](https://staticcheck.io/) (albertony) + * Configurable version suffix independent of version number (albertony) + * check: Implement `--no-traverse` and `--no-unicode-normalization` (Nick Craig-Wood) + * config: Readability improvements (albertony) + * copyurl: Add `--header-filename` to honor the HTTP header filename directive (J-P Treen) + * filter: Allow multiple `--exclude-if-present` flags (albertony) + * fshttp: Add `--disable-http-keep-alives` to disable HTTP Keep Alives (Nick Craig-Wood) + * install.sh + * Set the modes on the files and/or directories on macOS (Michael C Tiernan - MIT-Research Computing Project) + * Pre verify sudo authorization `-v` before calling curl. 
(Michael C Tiernan - MIT-Research Computing Project) + * lib/encoder: Add Semicolon encoding (Nick Craig-Wood) + * lsf: Add metadata support with `M` flag (Nick Craig-Wood) + * lsjson: Add `--metadata`/`-M` flag (Nick Craig-Wood) + * ncdu + * Implement multi selection (CrossR) + * Replace termbox with tcell's termbox wrapper (eNV25) + * Display correct path in delete confirmation dialog (Roberto Ricci) + * operations + * Speed up hash checking by aborting the other hash if first returns nothing (Nick Craig-Wood) + * Use correct src/dst in some log messages (zzr93) + * rcat: Check checksums by default like copy does (Nick Craig-Wood) + * selfupdate: Replace deprecated `x/crypto/openpgp` package with `ProtonMail/go-crypto` (albertony) + * serve ftp: Check `--passive-port` arguments are correct (Nick Craig-Wood) + * size: Warn about inaccurate results when objects with unknown size (albertony) + * sync: Overlap check is now filter-sensitive so `--backup-dir` can be in the root provided it is filtered (Nick) + * test info: Check file name lengths using 1,2,3,4 byte unicode characters (Nick Craig-Wood) + * test makefile(s): `--sparse`, `--zero`, `--pattern`, `--ascii`, `--chargen` flags to control file contents (Nick Craig-Wood) + * Make sure we call the `Shutdown` method on backends (Martin Czygan) +* Bug Fixes + * accounting: Fix unknown length file transfers counting 3 transfers each (buda) + * ncdu: Fix issue where dir size is summed when file sizes are -1 (albertony) + * sync/copy/move + * Fix `--fast-list` `--create-empty-src-dirs` and `--exclude` (Nick Craig-Wood) + * Fix `--max-duration` and `--cutoff-mode soft` (Nick Craig-Wood) + * Fix fs cache unpin (Martin Czygan) + * Set proper exit code for errors that are not low-level retried (e.g. 
size/timestamp changing) (albertony) +* Mount + * Support `windows/arm64` (may still be problems - see [#5828](https://github.com/rclone/rclone/issues/5828)) (Nick Craig-Wood) + * Log IO errors at ERROR level (Nick Craig-Wood) + * Ignore `_netdev` mount argument (Hugal31) +* VFS + * Add `--vfs-fast-fingerprint` for less accurate but faster fingerprints (Nick Craig-Wood) + * Add `--vfs-disk-space-total-size` option to manually set the total disk space (Claudio Maradonna) + * vfscache: Fix fatal error: sync: unlock of unlocked mutex error (Nick Craig-Wood) +* Local + * Fix parsing of `--local-nounc` flag (Nick Craig-Wood) + * Add Metadata support (Nick Craig-Wood) +* Crypt + * Support metadata (Nick Craig-Wood) +* Azure Blob + * Calculate Chunksize/blocksize to stay below maxUploadParts (Leroy van Logchem) + * Use chunksize lib to determine chunksize dynamically (Derek Battams) + * Case insensitive access tier (Rob Pickerill) + * Allow remote emulator (azurite) (Lorenzo Maiorfi) +* B2 + * Add `--b2-version-at` flag to show file versions at time specified (SwazRGB) + * Use chunksize lib to determine chunksize dynamically (Derek Battams) +* Chunker + * Mark as not supporting metadata (Nick Craig-Wood) +* Compress + * Support metadata (Nick Craig-Wood) +* Drive + * Make `backend config -o config` add a combined `AllDrives:` remote (Nick Craig-Wood) + * Make `--drive-shared-with-me` work with shared drives (Nick Craig-Wood) + * Add `--drive-resource-key` for accessing link-shared files (Nick Craig-Wood) + * Add backend commands `exportformats` and `importformats` for debugging (Nick Craig-Wood) + * Fix 404 errors on copy/server side copy objects from public folder (Nick Craig-Wood) + * Update Internal OAuth consent screen docs (Phil Shackleton) + * Moved `root_folder_id` to advanced section (Abhiraj) +* Dropbox + * Migrate from deprecated api (m8rge) + * Add logs to show when poll interval limits are exceeded (Nick Craig-Wood) + * Fix nil pointer exception on dropbox 
impersonate user not found (Nick Craig-Wood) +* Fichier + * Parse api error codes and handle them accordingly (buengese) +* FTP + * Add support for `disable_utf8` option (Jason Zheng) + * Revert to upstream `github.com/jlaffaye/ftp` from our fork (Nick Craig-Wood) +* Google Cloud Storage + * Add `--gcs-no-check-bucket` to minimise transactions and perms (Nick Gooding) + * Add `--gcs-decompress` flag to decompress gzip-encoded files (Nick Craig-Wood) + * by default these will be downloaded compressed (which previously failed) +* Hasher + * Support metadata (Nick Craig-Wood) +* HTTP + * Fix missing response when using custom auth handler (albertony) +* Jottacloud + * Add support for upload to custom device and mountpoint (albertony) + * Always store username in config and use it to avoid initial API request (albertony) + * Fix issue with server-side copy when destination is in trash (albertony) + * Fix listing output of remote with special characters (albertony) +* Mailru + * Fix timeout by using int instead of time.Duration for keeping number of seconds (albertony) +* Mega + * Document using MEGAcmd to help with login failures (Art M. 
Gallagher) +* Onedrive + * Implement `--poll-interval` for onedrive (Hugo Laloge) + * Add access scopes option (Sven Gerber) +* Opendrive + * Resolve lag and truncate bugs (Scott Grimes) +* Pcloud + * Fix about with no free space left (buengese) + * Fix cleanup (buengese) +* S3 + * Use PUT Object instead of presigned URLs to upload single part objects (Nick Craig-Wood) + * Backend restore command to skip non-GLACIER objects (Vincent Murphy) + * Use chunksize lib to determine chunksize dynamically (Derek Battams) + * Retry RequestTimeout errors (Nick Craig-Wood) + * Implement reading and writing of metadata (Nick Craig-Wood) +* SFTP + * Add support for about and hashsum on windows server (albertony) + * Use vendor-specific VFS statistics extension for about if available (albertony) + * Add `--sftp-chunk-size` to control packets sizes for high latency links (Nick Craig-Wood) + * Add `--sftp-concurrency` to improve high latency transfers (Nick Craig-Wood) + * Add `--sftp-set-env` option to set environment variables (Nick Craig-Wood) + * Add Hetzner Storage Boxes to supported sftp backends (Anthrazz) +* Storj + * Fix put which lead to the file being unreadable when using mount (Erik van Velzen) +* Union + * Add `min_free_space` option for `lfs`/`eplfs` policies (Nick Craig-Wood) + * Fix uploading files to union of all bucket based remotes (Nick Craig-Wood) + * Fix get free space for remotes which don't support it (Nick Craig-Wood) + * Fix `eplus` policy to select correct entry for existing files (Nick Craig-Wood) + * Support metadata (Nick Craig-Wood) +* Uptobox + * Fix root path handling (buengese) +* WebDAV + * Add SharePoint in other specific regions support (Noah Hsu) +* Yandex + * Handle api error on server-side move (albertony) +* Zoho + * Add Japan and China regions (buengese) + +## v1.58.1 - 2022-04-29 + +[See commits](https://github.com/rclone/rclone/compare/v1.58.0...v1.58.1) + +* Bug Fixes + * build: Update github.com/billziss-gh to github.com/winfsp (Nick 
Craig-Wood) + * filter: Fix timezone of `--min-age`/`--max-age` from UTC to local as documented (Nick Craig-Wood) + * rc/js: Correct RC method names (Sơn Trần-Nguyễn) + * docs + * Fix some links to command pages (albertony) + * Add `--multi-thread-streams` note to `--transfers`. (Zsolt Ero) +* Mount + * Fix `--devname` and fusermount: unknown option 'fsname' when mounting via rc (Nick Craig-Wood) +* VFS + * Remove wording which suggests VFS is only for mounting (Nick Craig-Wood) +* Dropbox + * Fix retries of multipart uploads with incorrect_offset error (Nick Craig-Wood) +* Google Cloud Storage + * Use the s3 pacer to speed up transactions (Nick Craig-Wood) + * pacer: Default the Google pacer to a burst of 100 to fix gcs pacing (Nick Craig-Wood) +* Jottacloud + * Fix scope in token request (albertony) +* Netstorage + * Fix unescaped HTML in documentation (Nick Craig-Wood) + * Make levels of headings consistent (Nick Craig-Wood) + * Add support contacts to netstorage doc (Nil Alexandrov) +* Onedrive + * Note that sharepoint also changes web files (.html, .aspx) (GH) +* Putio + * Handle rate limit errors (Berkan Teber) + * Fix multithread download and other ranged requests (rafma0) +* S3 + * Add ChinaMobile EOS to provider list (GuoXingbin) + * Sync providers in config description with providers (Nick Craig-Wood) +* SFTP + * Fix OpenSSH 8.8+ RSA keys incompatibility (KARBOWSKI Piotr) + * Note that Scaleway C14 is deprecating SFTP in favor of S3 (Adrien Rey-Jarthon) +* Storj + * Fix bucket creation on Move (Nick Craig-Wood) +* WebDAV + * Don't override Referer if user sets it (Nick Craig-Wood) + ## v1.58.0 - 2022-03-18 [See commits](https://github.com/rclone/rclone/compare/v1.57.0...v1.58.0) @@ -39951,7 +43517,7 @@ the node running rclone would need to have lots of bandwidth. The syncs would be incremental (on a file by file basis). -Eg +e.g. rclone sync -i drive:Folder s3:bucket @@ -40038,7 +43604,7 @@ e.g.
export no_proxy=localhost,127.0.0.0/8,my.host.name export NO_PROXY=$no_proxy -Note that the ftp backend does not support `ftp_proxy` yet. +Note that the FTP backend does not support `ftp_proxy` yet. ### Rclone gives x509: failed to load system roots and no roots provided error ### @@ -40747,6 +44313,57 @@ put them back in again.` >}} * Vincent Murphy * ctrl-q <34975747+ctrl-q@users.noreply.github.com> * Nil Alexandrov + * GuoXingbin <101376330+guoxingbin@users.noreply.github.com> + * Berkan Teber + * Tobias Klauser + * KARBOWSKI Piotr + * GH + * rafma0 + * Adrien Rey-Jarthon + * Nick Gooding <73336146+nickgooding@users.noreply.github.com> + * Leroy van Logchem + * Zsolt Ero + * Lesmiscore + * ehsantdy + * SwazRGB <65694696+swazrgb@users.noreply.github.com> + * Mateusz Puczyński + * Michael C Tiernan - MIT-Research Computing Project + * Kaspian <34658474+KaspianDev@users.noreply.github.com> + * Werner + * Hugal31 + * Christian Galo <36752715+cgalo5758@users.noreply.github.com> + * Erik van Velzen + * Derek Battams + * SimonLiu + * Hugo Laloge + * Mr-Kanister <68117355+Mr-Kanister@users.noreply.github.com> + * Rob Pickerill + * Andrey + * Eric Wolf <19wolf@gmail.com> + * Nick + * Jason Zheng + * Matthew Vernon + * Noah Hsu + * m00594701 + * Art M. 
Gallagher + * Sven Gerber <49589423+svengerber@users.noreply.github.com> + * CrossR + * Maciej Radzikowski + * Scott Grimes + * Phil Shackleton <71221528+philshacks@users.noreply.github.com> + * eNV25 + * Caleb + * J-P Treen + * Martin Czygan <53705+miku@users.noreply.github.com> + * buda + * mirekphd <36706320+mirekphd@users.noreply.github.com> + * vyloy + * Anthrazz <25553648+Anthrazz@users.noreply.github.com> + * zzr93 <34027824+zzr93@users.noreply.github.com> + * Paul Norman + * Lorenzo Maiorfi + * Claudio Maradonna + * Ovidiu Victor Tatar # Contact the rclone project # diff --git a/MANUAL.txt b/MANUAL.txt index d25a312cb..d8828a065 100644 --- a/MANUAL.txt +++ b/MANUAL.txt @@ -1,6 +1,6 @@ rclone(1) User Manual Nick Craig-Wood -Mar 18, 2022 +Jul 09, 2022 Rclone syncs your files to cloud storage @@ -82,7 +82,7 @@ Features - Move files to cloud storage deleting the local after verification - Check hashes and for missing/extra files - Mount your cloud storage as a network disk -- Serve local or remote files over HTTP/WebDav/FTP/SFTP/dlna +- Serve local or remote files over HTTP/WebDav/FTP/SFTP/DLNA - Experimental Web based GUI Supported providers @@ -98,8 +98,11 @@ S3, that work out of the box.) - Backblaze B2 - Box - Ceph +- China Mobile Ecloud Elastic Object Storage (EOS) +- Arvan Cloud Object Storage (AOS) - Citrix ShareFile - C14 +- Cloudflare R2 - DigitalOcean Spaces - Digi Storage - Dreamhost @@ -110,10 +113,14 @@ S3, that work out of the box.) - Google Drive - Google Photos - HDFS +- Hetzner Storage Box +- HiDrive - HTTP - Hubic +- Internet Archive - Jottacloud - IBM COS S3 +- IDrive e2 - Koofr - Mail.ru Cloud - Memset Memstore @@ -151,6 +158,19 @@ S3, that work out of the box.) 
- Zoho WorkDrive - The local filesystem +Virtual providers + +These backends adapt or modify other storage providers: + +- Alias: Rename existing remotes +- Cache: Cache remotes (DEPRECATED) +- Chunker: Split large files +- Combine: Combine multiple remotes into a directory tree +- Compress: Compress files +- Crypt: Encrypt files +- Hasher: Hash files +- Union: Join multiple remotes to work together + Links - Home page @@ -181,11 +201,11 @@ Script installation To install rclone on Linux/macOS/BSD systems, run: - curl https://rclone.org/install.sh | sudo bash + sudo -v ; curl https://rclone.org/install.sh | sudo bash For beta installation, run: - curl https://rclone.org/install.sh | sudo bash -s beta + sudo -v ; curl https://rclone.org/install.sh | sudo bash -s beta Note that this script checks the version of rclone installed first and won't re-download if not needed. @@ -257,7 +277,7 @@ When downloading a binary with a web browser, the browser will set the macOS gatekeeper quarantine attribute. Starting from Catalina, when attempting to run rclone, a pop-up will appear saying: - “rclone” cannot be opened because the developer cannot be verified. + "rclone" cannot be opened because the developer cannot be verified. macOS cannot verify that this app is free from malware. The simplest fix is to run @@ -346,33 +366,75 @@ Here are some commands tested on an Ubuntu 18.04.3 host: Install from source -Make sure you have at least Go go1.15 installed. Download go if -necessary. The latest release is recommended. Then +Make sure you have git and Go installed. Go version 1.16 or newer is +required, latest release is recommended. You can get it from your +package manager, or download it from golang.org/dl. 
Then you can run the +following: git clone https://github.com/rclone/rclone.git cd rclone go build - # If on macOS and mount is wanted, instead run: make GOTAGS=cmount - ./rclone version -This will leave you a checked out version of rclone you can modify and -send pull requests with. If you use make instead of go build then the -rclone build will have the correct version information in it. +This will check out the rclone source in subfolder rclone, which you can +later modify and send pull requests with. Then it will build the rclone +executable in the same folder. As an initial check you can now run +./rclone version (.\rclone version on Windows). -You can also build the latest stable rclone with: +Note that on macOS and Windows the mount command will not be available +unless you specify additional build tag cmount. + + go build -tags cmount + +This assumes you have a GCC compatible C compiler (GCC or Clang) in your +PATH, as it uses cgo. But on Windows, the cgofuse library that the +cmount implementation is based on, also supports building without cgo, +i.e. by setting environment variable CGO_ENABLED to value 0 (static +linking). This is how the official Windows release of rclone is being +built, starting with version 1.59. It is still possible to build with +cgo on Windows as well, by using the MinGW port of GCC, e.g. by +installing it in a MSYS2 distribution (make sure you install it in the +classic mingw64 subsystem, the ucrt64 version is not compatible). + +Additionally, on Windows, you must install the third party utility +WinFsp, with the "Developer" feature selected. If building with cgo, you +must also set environment variable CPATH pointing to the fuse include +directory within the WinFsp installation (normally +C:\Program Files (x86)\WinFsp\inc\fuse). 
+ +You may also add arguments -ldflags -s (with or without -tags cmount), +to omit symbol table and debug information, making the executable file +smaller, and -trimpath to remove references to local file system paths. +This is how the official rclone releases are built. + + go build -trimpath -ldflags -s -tags cmount + +Instead of executing the go build command directly, you can run it via +the Makefile, which also sets version information and copies the +resulting rclone executable into your GOPATH bin folder +($(go env GOPATH)/bin, which corresponds to ~/go/bin/rclone by default). + + make + +To include the mount command on macOS and Windows with a Makefile build: + + make GOTAGS=cmount + +As an alternative you can download the source, build and install rclone +in one operation, as a regular Go package. The source will be stored +in the Go module cache, and the resulting executable will be in your +GOPATH bin folder ($(go env GOPATH)/bin, which corresponds to +~/go/bin/rclone by default). + +With Go version 1.17 or newer: + + go install github.com/rclone/rclone@latest + +With Go versions older than 1.17 (do not use the -u flag, it causes Go +to try to update the dependencies that rclone uses and sometimes these +don't work with the current version): go get github.com/rclone/rclone -or the latest version (equivalent to the beta) with - - go get github.com/rclone/rclone@master - -These will build the binary in $(go env GOPATH)/bin (~/go/bin/rclone by -default) after downloading the source to the go module cache. Note - do -not use the -u flag here. This causes go to try to update the -dependencies that rclone uses and sometimes these don't work with the -current version of rclone. - Installation with Ansible This can be done with Stefan Weichinger's ansible role. @@ -497,7 +559,7 @@ configured to run at startup.
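Putting the from-source steps above together, a typical build that includes the mount command can be sketched as follows (a sketch only, assuming the default GOPATH bin folder ~/go/bin and the C compiler or WinFsp prerequisites described earlier):

    git clone https://github.com/rclone/rclone.git
    cd rclone
    make GOTAGS=cmount
    ~/go/bin/rclone version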
Mount command built-in service integration -For mount commands, Rclone has a built-in Windows service integration +For mount commands, rclone has a built-in Windows service integration via the third-party WinFsp library it uses. Registering as a regular Windows service is easy, as you just have to execute the built-in PowerShell command New-Service (requires administrative privileges). @@ -586,6 +648,7 @@ See the following for detailed instructions for - Chunker - transparently splits large files for other remotes - Citrix ShareFile - Compress +- Combine - Crypt - to encrypt other remotes - DigitalOcean Spaces - Digi Storage @@ -597,8 +660,10 @@ See the following for detailed instructions for - Google Photos - Hasher - to handle checksums for other remotes - HDFS +- HiDrive - HTTP - Hubic +- Internet Archive - Jottacloud - Koofr - Mail.ru Cloud @@ -696,11 +761,16 @@ Synopsis Copy the source to the destination. Does not transfer files that are identical on source and destination, testing by size and modification -time or MD5SUM. Doesn't delete files from the destination. +time or MD5SUM. Doesn't delete files from the destination. If you want +to also delete files from destination, to make it match source, use the +sync command instead. Note that it is always the contents of the directory that is synced, not -the directory so when source:path is a directory, it's the contents of -source:path that are copied, not the directory name and contents. +the directory itself. So when source:path is a directory, it's the +contents of source:path that are copied, not the directory name and +contents. + +To copy single files, use the copyto command instead. If dest:path doesn't exist, it is created and the source:path contents go there. @@ -767,7 +837,8 @@ Sync the source to the destination, changing the destination only. Doesn't transfer files that are identical on source and destination, testing by size and modification time or MD5SUM.
Destination is updated to match source, including deleting files if necessary (except duplicate -objects, see below). +objects, see below). If you don't want to delete files from destination, +use the copy command instead. Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag. @@ -779,9 +850,9 @@ errors at any point. Duplicate objects (files with the same name, on those providers that support it) are also not yet handled. It is always the contents of the directory that is synced, not the -directory so when source:path is a directory, it's the contents of -source:path that are copied, not the directory name and contents. See -extended explanation in the copy command above if unsure. +directory itself. So when source:path is a directory, it's the contents +of source:path that are copied, not the directory name and contents. See +extended explanation in the copy command if unsure. If dest:path doesn't exist, it is created and the source:path contents go there. @@ -815,6 +886,8 @@ Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap and the remote does not support a server-side directory move operation. +To move single files, use the moveto command instead. + If no filters are in use and if possible this will server-side move source:path into dest:path. After this source:path will no longer exist. @@ -971,6 +1044,9 @@ Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files that don't match. It doesn't alter the source or destination. +For the crypt remote there is a dedicated command, cryptcheck, that is +able to check the checksums of the crypted files. + If you supply the --size-only flag, it will only compare the sizes not the hashes as well. Use this for a quick check.
@@ -1106,7 +1182,7 @@ Or -1 2017-01-03 14:40:54 -1 2500files -1 2017-07-08 14:39:28 -1 4000files -If you just want the directory names use "rclone lsf --dirs-only". +If you just want the directory names use rclone lsf --dirs-only. Any of the filtering options can be applied to this command. @@ -1211,6 +1287,10 @@ supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling MD5 for any remote. +For other algorithms, see the hashsum command. Running +rclone md5sum remote:path is equivalent to running +rclone hashsum MD5 remote:path. + This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a @@ -1246,6 +1326,10 @@ supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling SHA-1 for any remote. +For other algorithms, see the hashsum command. Running +rclone sha1sum remote:path is equivalent to running +rclone hashsum SHA1 remote:path. + This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a @@ -1274,6 +1358,23 @@ rclone size Prints the total size and number of objects in remote:path. +Synopsis + +Counts objects in the path and calculates the total size. Prints the +result to standard output. + +By default the output is in human-readable format, but shows values in +both human-readable format as well as the raw numbers (global option +--human-readable is not considered). Use option --json to format output +as JSON instead. + +Recurses by default, use --max-depth 1 to stop the recursion. + +Some backends do not always provide file sizes, see for example Google +Photos and Google Drive.
Rclone will then show a notice in the log +indicating how many such files were encountered, and count them in as +empty files in the output of the size command. + rclone size remote:path [flags] Options @@ -1781,7 +1882,7 @@ SEE ALSO rclone completion -generate the autocompletion script for the specified shell +Generate the autocompletion script for the specified shell Synopsis @@ -1798,15 +1899,15 @@ See the global flags page for global options not listed here. SEE ALSO - rclone - Show help for rclone commands, flags and backends. -- rclone completion bash - generate the autocompletion script for bash -- rclone completion fish - generate the autocompletion script for fish -- rclone completion powershell - generate the autocompletion script +- rclone completion bash - Generate the autocompletion script for bash +- rclone completion fish - Generate the autocompletion script for fish +- rclone completion powershell - Generate the autocompletion script for powershell -- rclone completion zsh - generate the autocompletion script for zsh +- rclone completion zsh - Generate the autocompletion script for zsh rclone completion bash -generate the autocompletion script for bash +Generate the autocompletion script for bash Synopsis @@ -1815,12 +1916,19 @@ Generate the autocompletion script for the bash shell. This script depends on the 'bash-completion' package. If it is not installed already, you can install it via your OS's package manager. 
-To load completions in your current shell session: $ source <(rclone -completion bash) +To load completions in your current shell session: -To load completions for every new session, execute once: Linux: $ rclone -completion bash > /etc/bash_completion.d/rclone MacOS: $ rclone -completion bash > /usr/local/etc/bash_completion.d/rclone + source <(rclone completion bash) + +To load completions for every new session, execute once: + +Linux: + + rclone completion bash > /etc/bash_completion.d/rclone + +macOS: + + rclone completion bash > /usr/local/etc/bash_completion.d/rclone You will need to start a new shell for this setup to take effect. @@ -1835,22 +1943,24 @@ See the global flags page for global options not listed here. SEE ALSO -- rclone completion - generate the autocompletion script for the +- rclone completion - Generate the autocompletion script for the specified shell rclone completion fish -generate the autocompletion script for fish +Generate the autocompletion script for fish Synopsis Generate the autocompletion script for the fish shell. -To load completions in your current shell session: $ rclone completion -fish | source +To load completions in your current shell session: -To load completions for every new session, execute once: $ rclone -completion fish > ~/.config/fish/completions/rclone.fish + rclone completion fish | source + +To load completions for every new session, execute once: + + rclone completion fish > ~/.config/fish/completions/rclone.fish You will need to start a new shell for this setup to take effect. @@ -1865,19 +1975,20 @@ See the global flags page for global options not listed here. SEE ALSO -- rclone completion - generate the autocompletion script for the +- rclone completion - Generate the autocompletion script for the specified shell rclone completion powershell -generate the autocompletion script for powershell +Generate the autocompletion script for powershell Synopsis Generate the autocompletion script for powershell. 
-To load completions in your current shell session: PS C:> rclone -completion powershell | Out-String | Invoke-Expression +To load completions in your current shell session: + + rclone completion powershell | Out-String | Invoke-Expression To load completions for every new session, add the output of the above command to your powershell profile. @@ -1893,12 +2004,12 @@ See the global flags page for global options not listed here. SEE ALSO -- rclone completion - generate the autocompletion script for the +- rclone completion - Generate the autocompletion script for the specified shell rclone completion zsh -generate the autocompletion script for zsh +Generate the autocompletion script for zsh Synopsis @@ -1907,11 +2018,17 @@ Generate the autocompletion script for the zsh shell. If shell completion is not already enabled in your environment you will need to enable it. You can execute the following once: -$ echo "autoload -U compinit; compinit" >> ~/.zshrc + echo "autoload -U compinit; compinit" >> ~/.zshrc -To load completions for every new session, execute once: # Linux: $ -rclone completion zsh > "${fpath[1]}/_rclone" # macOS: $ rclone -completion zsh > /usr/local/share/zsh/site-functions/_rclone +To load completions for every new session, execute once: + +Linux: + + rclone completion zsh > "${fpath[1]}/_rclone" + +macOS: + + rclone completion zsh > /usr/local/share/zsh/site-functions/_rclone You will need to start a new shell for this setup to take effect. @@ -1926,7 +2043,7 @@ See the global flags page for global options not listed here. SEE ALSO -- rclone completion - generate the autocompletion script for the +- rclone completion - Generate the autocompletion script for the specified shell rclone config create @@ -2468,9 +2585,12 @@ Synopsis Download a URL's content and copy it to the destination without saving it in temporary storage. 
-Setting --auto-filename will cause the file name to be retrieved from -the URL (after any redirections) and used in the destination path. With ---print-filename in addition, the resulting file name will be printed. +Setting --auto-filename will attempt to automatically determine the +filename from the URL (after any redirections) and use it in the +destination path. With --header-filename in addition, if a specific +filename is set in HTTP headers, it will be used instead of the name +from the URL. With --print-filename in addition, the resulting file name +will be printed. Setting --no-clobber will prevent overwriting file on the destination if there is one with the same name. @@ -2482,11 +2602,12 @@ to be written to standard output. Options - -a, --auto-filename Get the file name from the URL and use it for destination file path - -h, --help help for copyurl - --no-clobber Prevent overwriting file with same name - -p, --print-filename Print the resulting name from --auto-filename - --stdout Write the output to stdout rather than a file + -a, --auto-filename Get the file name from the URL and use it for destination file path + --header-filename Get the file name from the Content-Disposition header + -h, --help help for copyurl + --no-clobber Prevent overwriting file with same name + -p, --print-filename Print the resulting name from --auto-filename + --stdout Write the output to stdout rather than a file See the global flags page for global options not listed here. @@ -2586,7 +2707,7 @@ use it like this rclone cryptdecode --reverse encryptedremote: filename1 filename2 Another way to accomplish this is by using the rclone backend encode (or -decode)command. See the documentation on the crypt overlay for more +decode) command. See the documentation on the crypt overlay for more info. rclone cryptdecode encryptedremote: encryptedfilename [flags]
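The copyurl flags described above can be combined; here is a brief sketch, assuming a configured remote named remote: and a hypothetical URL:

    rclone copyurl --auto-filename --print-filename https://example.com/file remote:dir
    rclone copyurl --stdout https://example.com/file

The first form derives the destination file name from the URL (or, with --header-filename, from the Content-Disposition header) and prints the resulting name; the second writes the downloaded content to standard output instead of a file.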
With the download flag, the file will be downloaded from the remote and hashed locally enabling any hash for any remote. +For the MD5 and SHA1 algorithms there are also dedicated commands, +md5sum and sha1sum. + This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a @@ -2803,6 +2927,7 @@ Run without a hash to see the list of all supported hashes, e.g. * crc32 * sha256 * dropbox + * hidrive * mailru * quickxor @@ -2879,7 +3004,7 @@ Synopsis rclone listremotes lists all the available remotes from the config file. -When uses with the -l flag it lists the types too. +When used with the --long flag it lists the types too. rclone listremotes [flags] @@ -2926,6 +3051,7 @@ just the path, but you can use these parameters to control the output: m - MimeType of object if known e - encrypted name T - tier of storage if known, e.g. "Hot" or "Cool" + M - Metadata of object in JSON blob format, e.g. {"key":"value"} So if you wanted the path, size and modification time, you would use --format "pst", or maybe --format "tsp" to put the path last. @@ -2940,10 +3066,10 @@ Eg 2016-06-25 18:55:40;37600;fubuwic If you specify "h" in the format you will get the MD5 hash by default, -use the "--hash" flag to change which hash you want. Note that this can -be returned as an empty string if it isn't available on the object (and -for directories), "ERROR" if there was an error reading it from the -object and "UNSUPPORTED" if that object does not support that hash type. +use the --hash flag to change which hash you want. Note that this can be +returned as an empty string if it isn't available on the object (and for +directories), "ERROR" if there was an error reading it from the object +and "UNSUPPORTED" if that object does not support that hash type.
For example, to emulate the md5sum command you can use @@ -3046,15 +3172,25 @@ List directories and objects in the path in JSON format. The output is an array of Items, where each Item looks like this -{ "Hashes" : { "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", -"MD5" : "b1946ac92492d2347c6235b4d2611184", "DropboxHash" : -"ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" }, -"ID": "y2djkhiujf83u33", "OrigID": "UYOJVTUW00Q1RzTDA", "IsBucket" : -false, "IsDir" : false, "MimeType" : "application/octet-stream", -"ModTime" : "2017-05-31T16:15:57.034468261+01:00", "Name" : "file.txt", -"Encrypted" : "v0qpsdq8anpci8n929v3uu9338", "EncryptedPath" : -"kja9098349023498/v0qpsdq8anpci8n929v3uu9338", "Path" : -"full/path/goes/here/file.txt", "Size" : 6, "Tier" : "hot", } + { + "Hashes" : { + "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", + "MD5" : "b1946ac92492d2347c6235b4d2611184", + "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" + }, + "ID": "y2djkhiujf83u33", + "OrigID": "UYOJVTUW00Q1RzTDA", + "IsBucket" : false, + "IsDir" : false, + "MimeType" : "application/octet-stream", + "ModTime" : "2017-05-31T16:15:57.034468261+01:00", + "Name" : "file.txt", + "Encrypted" : "v0qpsdq8anpci8n929v3uu9338", + "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338", + "Path" : "full/path/goes/here/file.txt", + "Size" : 6, + "Tier" : "hot", + } If --hash is not specified the Hashes property won't be emitted. The types of hash can be specified with the --hash-type parameter (which may @@ -3076,6 +3212,9 @@ returned If --files-only is not specified directories in addition to the files will be returned. +If --metadata is set then an additional Metadata key will be returned. +This will have metadata in rclone standard format as a JSON object. + If --stat is set then a single JSON blob will be returned about the item pointed to. This will return an error if the item isn't found.
However on bucket based backends (like s3, gcs, b2, azureblob etc) if the item @@ -3130,7 +3269,7 @@ bucket-based remotes). Options --dirs-only Show only directories in the listing - -M, --encrypted Show the encrypted names + --encrypted Show the encrypted names --files-only Show only files in the listing --hash Include hashes in the output (may take longer) --hash-type stringArray Show only this hash type (may be repeated) @@ -3553,7 +3692,7 @@ VFS Directory Cache Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. -Changes made through the mount will appear immediately or invalidate the +Changes made through the VFS will appear immediately or invalidate the cache. --dir-cache-time duration Time to cache directory entries for (default 5m0s) @@ -3706,6 +3845,37 @@ FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected. +Fingerprinting + +Various parts of the VFS use fingerprinting to see if a local file copy +has changed relative to a remote file. Fingerprints are made from: + +- size +- modification time +- hash + +where available on an object. + +On some backends some of these attributes are slow to read (they take an +extra API call per object, or extra work per object). + +For example hash is slow with the local and sftp backends as they have +to read the entire file and hash it, and modtime is slow with the s3, +swift, ftp and qingstor backends because they need to do an extra API +call to fetch it. + +If you use the --vfs-fast-fingerprint flag then rclone will not include +the slow operations in the fingerprint. This makes the fingerprinting +less accurate but much faster and will improve the opening time of +cached files. + +If you are running a vfs cache over local, s3 or swift backends then +using this flag is recommended.
+ +Note that if you change the value of this flag, the fingerprints of the +files in the cache may be invalidated and the files will need to be +downloaded again. + VFS Chunked Reading When rclone reads files from a remote it reads them in chunks. This @@ -3746,7 +3916,7 @@ of the modification time takes a transaction. --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. - --read-only Mount read-only. + --read-only Only allow read-only access. Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write @@ -3758,8 +3928,8 @@ cache file. When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of -parallel uploads of modified files from cache (the related global flag ---checkers have no effect on mount). +parallel uploads of modified files from the cache (the related global +flag --checkers has no effect on the VFS). --transfers int Number of file transfers to run in parallel (default 4) @@ -3777,23 +3947,22 @@ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. -The --vfs-case-insensitive mount flag controls how rclone handles these +The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the -mounted file system as-is. If the flag is "true" (or appears without a -value on command line), rclone may perform a "fixup" as explained below. +remote as-is. If the flag is "true" (or appears without a value on the +command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. 
If an argument -refers to an existing file with exactly the same name, then the case of -the existing file on the disk will be used. However, if a file name with -exactly the same name is not found but a name differing only by case -exists, rclone will transparently fixup the name. This fixup happens -only when an existing file is requested. Case sensitivity of file names -created anew by rclone is controlled by an underlying mounted file -system. +different than what is stored on the remote. If an argument refers to an +existing file with exactly the same name, then the case of the existing +file on the disk will be used. However, if a file name with exactly the +same name is not found but a name differing only by case exists, rclone +will transparently fixup the name. This fixup happens only when an +existing file is requested. Case sensitivity of file names created anew +by rclone is controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the -target) may differ from case sensitivity of a file system mounted by +target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. @@ -3802,6 +3971,14 @@ depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +VFS Disk Options + +This flag allows you to manually set the statistics about the filing +system. It can be useful when those statistics cannot be read correctly +automatically. + + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + Alternate report of used bytes Some backends, most notably S3, do not report the amount of bytes used. 
@@ -3846,7 +4023,7 @@ Options --noapplexattr Ignore all "com.apple.*" extended attributes (supported on OSX only) -o, --option stringArray Option for libfuse/WinFsp (repeat if required) --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) - --read-only Mount read-only + --read-only Only allow read-only access --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s) @@ -3854,6 +4031,8 @@ Options --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) @@ -3934,7 +4113,8 @@ builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along. -Here are the keys - press '?' to toggle the help on and off +You can interact with the user interface using key presses, press '?' to +toggle the help on and off. The supported keys are: ↑,↓ or k,j to Move →,l to enter @@ -3945,17 +4125,39 @@ Here are the keys - press '?' 
to toggle the help on and off

 u toggle human-readable format
 n,s,C,A sort by name,size,count,average size
 d delete file/directory
+ v select file/directory
+ V enter visual select mode
+ D delete selected files/directories
 y copy current path to clipboard
 Y display current path
- ^L refresh screen
+ ^L refresh screen (fix screen corruption)
 ? to toggle help on and off
- q/ESC/c-C to quit
+ q/ESC/^c to quit
+
+Listed files/directories may be prefixed by a one-character flag, some
+of them combined with a description in brackets at end of line. These
+flags have the following meaning:
+
+ e means this is an empty directory, i.e. contains no files (but
+ may contain empty subdirectories)
+ ~ means this is a directory where some of the files (possibly in
+ subdirectories) have unknown size, and therefore the directory
+ size may be underestimated (and average size inaccurate, as it
+ is average of the files with known sizes).
+ . means an error occurred while reading a subdirectory, and
+ therefore the directory size may be underestimated (and average
+ size inaccurate)
+ ! means an error occurred while reading this directory

This an homage to the ncdu tool but for rclone remotes. It is missing
lots of features at the moment but is useful as it stands.

-Note that it might take some time to delete big files/folders. The UI
-won't respond in the meantime since the deletion is done synchronously.
+Note that it might take some time to delete big files/directories. The
+UI won't respond in the meantime since the deletion is done
+synchronously.
+
+For a non-interactive listing of the remote, see the tree command. To
+just get the total size of the remote you can also use the size command.

rclone ncdu remote:path [flags]

@@ -3989,7 +4191,7 @@ This command can also accept a password through STDIN instead of an
argument by passing a hyphen as an argument. This will use the first
line of STDIN as the password not including the trailing newline.
-echo "secretpassword" | rclone obscure - + echo "secretpassword" | rclone obscure - If there is no data on STDIN to read, rclone obscure will default to obfuscating the hyphen itself. @@ -4034,9 +4236,9 @@ instead of key=value arguments. This is the only way of passing in more complicated values. The -o/--opt option can be used to set a key "opt" with key, value -options in the form "-o key=value" or "-o key". It can be repeated as -many times as required. This is useful for rc commands which take the -"opt" parameter which by convention is a dictionary of strings. +options in the form -o key=value or -o key. It can be repeated as many +times as required. This is useful for rc commands which take the "opt" +parameter which by convention is a dictionary of strings. -o key=value -o key2 @@ -4055,13 +4257,13 @@ Will place this in the "arg" value ["value", "value2"] -Use --loopback to connect to the rclone instance running "rclone rc". -This is very useful for testing commands without having to run an rclone -rc server, e.g.: +Use --loopback to connect to the rclone instance running rclone rc. This +is very useful for testing commands without having to run an rclone rc +server, e.g.: rclone rc --loopback operations/about fs=/ -Use "rclone rc" to see a list of all possible commands. +Use rclone rc to see a list of all possible commands. rclone rc commands parameter [flags] @@ -4106,12 +4308,12 @@ before that. The data must fit into RAM. The cutoff needs to be small enough to adhere the limits of your remote, please see there. Generally speaking, setting this cutoff too high will decrease your performance. -Use the |--size| flag to preallocate the file in advance at the remote -end and actually stream it, even if remote backend doesn't support +Use the --size flag to preallocate the file in advance at the remote end +and actually stream it, even if remote backend doesn't support streaming. -|--size| should be the exact size of the input stream in bytes. 
If the -size of the stream is different in length to the |--size| passed in then +--size should be the exact size of the input stream in bytes. If the +size of the stream is different in length to the --size passed in then the transfer will likely fail. Note that the upload can also not be retried because the data is not @@ -4271,8 +4473,8 @@ Serve a remote over a protocol. Synopsis -rclone serve is used to serve a remote over a given protocol. This -command requires the use of a subcommand to specify the protocol, e.g. +Serve a remote over a given protocol. Requires the use of a subcommand +to specify the protocol, e.g. rclone serve http remote: @@ -4296,7 +4498,7 @@ SEE ALSO - rclone serve http - Serve the remote over HTTP. - rclone serve restic - Serve the remote for restic's REST API. - rclone serve sftp - Serve the remote over SFTP. -- rclone serve webdav - Serve remote:path over webdav. +- rclone serve webdav - Serve remote:path over WebDAV. rclone serve dlna @@ -4304,11 +4506,11 @@ Serve remote:path over DLNA Synopsis -rclone serve dlna is a DLNA media server for media stored in an rclone -remote. Many devices, such as the Xbox and PlayStation, can -automatically discover this server in the LAN and play audio/video from -it. VLC is also supported. Service discovery uses UDP multicast packets -(SSDP) and will thus only work on LANs. +Run a DLNA media server for media stored in an rclone remote. Many +devices, such as the Xbox and PlayStation, can automatically discover +this server in the LAN and play audio/video from it. VLC is also +supported. Service discovery uses UDP multicast packets (SSDP) and will +thus only work on LANs. Rclone will list all files present in the remote, without filtering based on media formats or file extensions. Additionally, there is no @@ -4344,7 +4546,7 @@ VFS Directory Cache Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. 
-Changes made through the mount will appear immediately or invalidate the
+Changes made through the VFS will appear immediately or invalidate the
cache.

 --dir-cache-time duration Time to cache directory entries for (default 5m0s)

@@ -4497,6 +4699,37 @@ FAT/exFAT do not.
Rclone will perform very badly if the cache directory
is on a filesystem which doesn't support sparse files and it will log an
ERROR message if one is detected.

+Fingerprinting
+
+Various parts of the VFS use fingerprinting to see if a local file copy
+has changed relative to a remote file. Fingerprints are made from:
+
+- size
+- modification time
+- hash
+
+where available on an object.
+
+On some backends some of these attributes are slow to read (they take an
+extra API call per object, or extra work per object).
+
+For example hash is slow with the local and sftp backends as they have
+to read the entire file and hash it, and modtime is slow with the s3,
+swift, ftp and qingstor backends because they need to do an extra API
+call to fetch it.
+
+If you use the --vfs-fast-fingerprint flag then rclone will not include
+the slow operations in the fingerprint. This makes the fingerprinting
+less accurate but much faster and will improve the opening time of
+cached files.
+
+If you are running a vfs cache over local, s3 or swift backends then
+using this flag is recommended.
+
+Note that if you change the value of this flag, the fingerprints of the
+files in the cache may be invalidated and the files will need to be
+downloaded again.
+
 VFS Chunked Reading

When rclone reads files from a remote it reads them in chunks. This

@@ -4537,7 +4770,7 @@ of the modification time takes a transaction.

 --no-checksum Don't compare checksums on up/download.
 --no-modtime Don't read/write the modification time (can speed things up).
 --no-seek Don't allow seeking in files.
- --read-only Mount read-only.
+ --read-only Only allow read-only access.

Sometimes rclone is delivered reads or writes out of order.
Rather than seeking rclone will wait a short time for the in sequence read or write @@ -4549,8 +4782,8 @@ cache file. When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of -parallel uploads of modified files from cache (the related global flag ---checkers have no effect on mount). +parallel uploads of modified files from the cache (the related global +flag --checkers has no effect on the VFS). --transfers int Number of file transfers to run in parallel (default 4) @@ -4568,23 +4801,22 @@ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. -The --vfs-case-insensitive mount flag controls how rclone handles these +The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the -mounted file system as-is. If the flag is "true" (or appears without a -value on command line), rclone may perform a "fixup" as explained below. +remote as-is. If the flag is "true" (or appears without a value on the +command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. If an argument -refers to an existing file with exactly the same name, then the case of -the existing file on the disk will be used. However, if a file name with -exactly the same name is not found but a name differing only by case -exists, rclone will transparently fixup the name. This fixup happens -only when an existing file is requested. Case sensitivity of file names -created anew by rclone is controlled by an underlying mounted file -system. +different than what is stored on the remote. If an argument refers to an +existing file with exactly the same name, then the case of the existing +file on the disk will be used. 
However, if a file name with exactly the +same name is not found but a name differing only by case exists, rclone +will transparently fixup the name. This fixup happens only when an +existing file is requested. Case sensitivity of file names created anew +by rclone is controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the -target) may differ from case sensitivity of a file system mounted by +target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. @@ -4593,6 +4825,14 @@ depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +VFS Disk Options + +This flag allows you to manually set the statistics about the filing +system. It can be useful when those statistics cannot be read correctly +automatically. + + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + Alternate report of used bytes Some backends, most notably S3, do not report the amount of bytes used. 
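The fingerprinting scheme described earlier can be illustrated with a short sketch (hypothetical names, not rclone's API): a full fingerprint combines size, modification time and hash where available, while the --vfs-fast-fingerprint variant drops whichever attributes are slow on the backend in use.

```python
def make_fingerprint(size, modtime, file_hash, slow_attrs=(), fast=False):
    """Illustrative VFS fingerprint sketch (not rclone's actual code)."""
    parts = {"size": size, "modtime": modtime, "hash": file_hash}
    if fast:
        # --vfs-fast-fingerprint: leave out the slow attributes, e.g.
        # "hash" on local/sftp, "modtime" on s3/swift/ftp/qingstor.
        for attr in slow_attrs:
            parts.pop(attr, None)
    # A stable tuple, comparable against the fingerprint stored in the cache.
    return tuple(sorted(parts.items()))
```

This also shows why changing the flag invalidates cached fingerprints: the fast and full tuples differ, so cached files no longer match and are downloaded again.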
@@ -4623,7 +4863,7 @@ Options --no-modtime Don't read/write the modification time (can speed things up) --no-seek Don't allow seeking in files --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) - --read-only Mount read-only + --read-only Only allow read-only access --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s) @@ -4631,6 +4871,8 @@ Options --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) @@ -4707,7 +4949,7 @@ VFS Directory Cache Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. -Changes made through the mount will appear immediately or invalidate the +Changes made through the VFS will appear immediately or invalidate the cache. --dir-cache-time duration Time to cache directory entries for (default 5m0s) @@ -4860,6 +5102,37 @@ FAT/exFAT do not. 
Rclone will perform very badly if the cache directory
is on a filesystem which doesn't support sparse files and it will log an
ERROR message if one is detected.

+Fingerprinting
+
+Various parts of the VFS use fingerprinting to see if a local file copy
+has changed relative to a remote file. Fingerprints are made from:
+
+- size
+- modification time
+- hash
+
+where available on an object.
+
+On some backends some of these attributes are slow to read (they take an
+extra API call per object, or extra work per object).
+
+For example hash is slow with the local and sftp backends as they have
+to read the entire file and hash it, and modtime is slow with the s3,
+swift, ftp and qingstor backends because they need to do an extra API
+call to fetch it.
+
+If you use the --vfs-fast-fingerprint flag then rclone will not include
+the slow operations in the fingerprint. This makes the fingerprinting
+less accurate but much faster and will improve the opening time of
+cached files.
+
+If you are running a vfs cache over local, s3 or swift backends then
+using this flag is recommended.
+
+Note that if you change the value of this flag, the fingerprints of the
+files in the cache may be invalidated and the files will need to be
+downloaded again.
+
 VFS Chunked Reading

When rclone reads files from a remote it reads them in chunks. This

@@ -4900,7 +5173,7 @@ of the modification time takes a transaction.

 --no-checksum Don't compare checksums on up/download.
 --no-modtime Don't read/write the modification time (can speed things up).
 --no-seek Don't allow seeking in files.
- --read-only Mount read-only.
+ --read-only Only allow read-only access.

Sometimes rclone is delivered reads or writes out of order. Rather than
seeking rclone will wait a short time for the in sequence read or write

@@ -4912,8 +5185,8 @@ cache file.
When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of -parallel uploads of modified files from cache (the related global flag ---checkers have no effect on mount). +parallel uploads of modified files from the cache (the related global +flag --checkers has no effect on the VFS). --transfers int Number of file transfers to run in parallel (default 4) @@ -4931,23 +5204,22 @@ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. -The --vfs-case-insensitive mount flag controls how rclone handles these +The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the -mounted file system as-is. If the flag is "true" (or appears without a -value on command line), rclone may perform a "fixup" as explained below. +remote as-is. If the flag is "true" (or appears without a value on the +command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. If an argument -refers to an existing file with exactly the same name, then the case of -the existing file on the disk will be used. However, if a file name with -exactly the same name is not found but a name differing only by case -exists, rclone will transparently fixup the name. This fixup happens -only when an existing file is requested. Case sensitivity of file names -created anew by rclone is controlled by an underlying mounted file -system. +different than what is stored on the remote. If an argument refers to an +existing file with exactly the same name, then the case of the existing +file on the disk will be used. 
However, if a file name with exactly the +same name is not found but a name differing only by case exists, rclone +will transparently fixup the name. This fixup happens only when an +existing file is requested. Case sensitivity of file names created anew +by rclone is controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the -target) may differ from case sensitivity of a file system mounted by +target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. @@ -4956,6 +5228,14 @@ depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +VFS Disk Options + +This flag allows you to manually set the statistics about the filing +system. It can be useful when those statistics cannot be read correctly +automatically. + + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + Alternate report of used bytes Some backends, most notably S3, do not report the amount of bytes used. 
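The chunked reading behaviour summarised by the --vfs-read-chunk-size and --vfs-read-chunk-size-limit options above can be sketched numerically (a hypothetical helper, not rclone's code): the chunk size starts at the configured value and doubles after each chunk read, until the limit caps it ('off' meaning unlimited doubling, modelled here as limit=None).

```python
def chunk_sizes(initial, count, limit=None):
    """Sketch of the VFS chunked-reading size schedule (illustrative only)."""
    size, sizes = initial, []
    for _ in range(count):
        sizes.append(size)
        size *= 2                      # double after each chunk read...
        if limit is not None and size > limit:
            size = limit               # ...until the limit is reached
    return sizes
```

With the defaults (128Mi starting size, limit 'off') the schedule would be 128Mi, 256Mi, 512Mi, 1Gi, and so on.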
@@ -5003,7 +5283,7 @@ Options

 --noapplexattr Ignore all "com.apple.*" extended attributes (supported on OSX only)
 -o, --option stringArray Option for libfuse/WinFsp (repeat if required)
 --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
- --read-only Mount read-only
+ --read-only Only allow read-only access
 --socket-addr string Address or absolute path (default: /run/docker/plugins/rclone.sock)
 --socket-gid int GID for unix socket (default: current process GID) (default 1000)
 --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)

@@ -5013,6 +5293,8 @@ Options

 --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
 --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
 --vfs-case-insensitive If a file name not found, find a case insensitive match
+ --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
+ --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
 --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
 --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
 --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)

@@ -5035,9 +5317,9 @@ Serve remote:path over FTP.

Synopsis

-rclone serve ftp implements a basic ftp server to serve the remote over
-FTP protocol. This can be viewed with a ftp client or you can make a
-remote of type ftp to read and write it.
+Run a basic FTP server to serve a remote over FTP protocol. This can be
+viewed with an FTP client or you can make a remote of type FTP to read
+and write it.
Server options

@@ -5074,7 +5356,7 @@ VFS Directory Cache

Using the --dir-cache-time flag, you can control how long a directory
should be considered up to date and not refreshed from the backend.

-Changes made through the mount will appear immediately or invalidate the
+Changes made through the VFS will appear immediately or invalidate the
cache.

 --dir-cache-time duration Time to cache directory entries for (default 5m0s)

@@ -5227,6 +5509,37 @@ FAT/exFAT do not.
Rclone will perform very badly if the cache directory
is on a filesystem which doesn't support sparse files and it will log an
ERROR message if one is detected.

+Fingerprinting
+
+Various parts of the VFS use fingerprinting to see if a local file copy
+has changed relative to a remote file. Fingerprints are made from:
+
+- size
+- modification time
+- hash
+
+where available on an object.
+
+On some backends some of these attributes are slow to read (they take an
+extra API call per object, or extra work per object).
+
+For example hash is slow with the local and sftp backends as they have
+to read the entire file and hash it, and modtime is slow with the s3,
+swift, ftp and qingstor backends because they need to do an extra API
+call to fetch it.
+
+If you use the --vfs-fast-fingerprint flag then rclone will not include
+the slow operations in the fingerprint. This makes the fingerprinting
+less accurate but much faster and will improve the opening time of
+cached files.
+
+If you are running a vfs cache over local, s3 or swift backends then
+using this flag is recommended.
+
+Note that if you change the value of this flag, the fingerprints of the
+files in the cache may be invalidated and the files will need to be
+downloaded again.
+
 VFS Chunked Reading

When rclone reads files from a remote it reads them in chunks. This

@@ -5267,7 +5580,7 @@ of the modification time takes a transaction.

 --no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. - --read-only Mount read-only. + --read-only Only allow read-only access. Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write @@ -5279,8 +5592,8 @@ cache file. When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of -parallel uploads of modified files from cache (the related global flag ---checkers have no effect on mount). +parallel uploads of modified files from the cache (the related global +flag --checkers has no effect on the VFS). --transfers int Number of file transfers to run in parallel (default 4) @@ -5298,23 +5611,22 @@ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. -The --vfs-case-insensitive mount flag controls how rclone handles these +The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the -mounted file system as-is. If the flag is "true" (or appears without a -value on command line), rclone may perform a "fixup" as explained below. +remote as-is. If the flag is "true" (or appears without a value on the +command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. If an argument -refers to an existing file with exactly the same name, then the case of -the existing file on the disk will be used. However, if a file name with -exactly the same name is not found but a name differing only by case -exists, rclone will transparently fixup the name. This fixup happens -only when an existing file is requested. 
Case sensitivity of file names -created anew by rclone is controlled by an underlying mounted file -system. +different than what is stored on the remote. If an argument refers to an +existing file with exactly the same name, then the case of the existing +file on the disk will be used. However, if a file name with exactly the +same name is not found but a name differing only by case exists, rclone +will transparently fixup the name. This fixup happens only when an +existing file is requested. Case sensitivity of file names created anew +by rclone is controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the -target) may differ from case sensitivity of a file system mounted by +target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. @@ -5323,6 +5635,14 @@ depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +VFS Disk Options + +This flag allows you to manually set the statistics about the filing +system. It can be useful when those statistics cannot be read correctly +automatically. + + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + Alternate report of used bytes Some backends, most notably S3, do not report the amount of bytes used. 
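Values such as the 256G example above are SizeSuffix quantities. As a rough sketch (a hypothetical parser, assuming rclone's binary, 1024-based suffix convention, where a bare number defaults to KiB and a trailing B means plain bytes), they map to bytes like this:

```python
# Hypothetical SizeSuffix parser for illustration only (not rclone's own code).
# Assumes K/M/G/T/P are binary (1024-based) multiples.
_MULTIPLIERS = {"K": 1024, "M": 1024**2, "G": 1024**3,
                "T": 1024**4, "P": 1024**5}

def parse_size(text):
    suffix = text[-1].upper()
    if suffix == "B":
        return int(text[:-1])                      # plain bytes
    if suffix in _MULTIPLIERS:
        return int(text[:-1]) * _MULTIPLIERS[suffix]
    return int(text) * 1024                        # bare numbers default to KiB
```

Under these assumptions, --vfs-disk-space-total-size 256G would report a total of 256 × 2^30 bytes.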
@@ -5428,7 +5748,7 @@ Options --passive-port string Passive port range to use (default "30000-32000") --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) --public-ip string Public IP address to advertise for passive connections - --read-only Mount read-only + --read-only Only allow read-only access --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --user string User name for authentication (default "anonymous") @@ -5437,6 +5757,8 @@ Options --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) @@ -5457,9 +5779,8 @@ Serve the remote over HTTP. Synopsis -rclone serve http implements a basic web server to serve the remote over -HTTP. This can be viewed in a web browser or you can make a remote of -type http read from it. +Run a basic web server to serve a remote over HTTP. This can be viewed +in a web browser or you can make a remote of type http read from it. You can use the filter flags (e.g. --include, --exclude) to control what is served. @@ -5490,8 +5811,9 @@ accept in the HTTP header. 
rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading -and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl -"/rclone" and --baseurl "/rclone/" are all treated identically. +and trailing "/" on --baseurl, so --baseurl "rclone", +--baseurl "/rclone" and --baseurl "/rclone/" are all treated +identically. SSL/TLS @@ -5507,8 +5829,8 @@ authority certificate. Template ---template allows a user to specify a custom markup template for http -and webdav serve functions. The server exports the following markup to +--template allows a user to specify a custom markup template for HTTP +and WebDAV serve functions. The server exports the following markup to be used within the template to server pages: ----------------------------------------------------------------------- @@ -5598,7 +5920,7 @@ VFS Directory Cache Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. -Changes made through the mount will appear immediately or invalidate the +Changes made through the VFS will appear immediately or invalidate the cache. --dir-cache-time duration Time to cache directory entries for (default 5m0s) @@ -5751,6 +6073,37 @@ FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected. +Fingerprinting + +Various parts of the VFS use fingerprinting to see if a local file copy +has changed relative to a remote file. Fingerprints are made from: + +- size +- modification time +- hash + +where available on an object. + +On some backends some of these attributes are slow to read (they take an +extra API call per object, or extra work per object). 
+
+For example hash is slow with the local and sftp backends as they have
+to read the entire file and hash it, and modtime is slow with the s3,
+swift, ftp and qingstor backends because they need to do an extra API
+call to fetch it.
+
+If you use the --vfs-fast-fingerprint flag then rclone will not include
+the slow operations in the fingerprint. This makes the fingerprinting
+less accurate but much faster and will improve the opening time of
+cached files.
+
+If you are running a vfs cache over local, s3 or swift backends then
+using this flag is recommended.
+
+Note that if you change the value of this flag, the fingerprints of the
+files in the cache may be invalidated and the files will need to be
+downloaded again.
+
 VFS Chunked Reading

When rclone reads files from a remote it reads them in chunks. This

@@ -5791,7 +6144,7 @@ of the modification time takes a transaction.

 --no-checksum Don't compare checksums on up/download.
 --no-modtime Don't read/write the modification time (can speed things up).
 --no-seek Don't allow seeking in files.
- --read-only Mount read-only.
+ --read-only Only allow read-only access.

Sometimes rclone is delivered reads or writes out of order. Rather than
seeking rclone will wait a short time for the in sequence read or write

@@ -5803,8 +6156,8 @@ cache file.

When using VFS write caching (--vfs-cache-mode with value writes or
full), the global flag --transfers can be set to adjust the number of
-parallel uploads of modified files from cache (the related global flag
---checkers have no effect on mount).
+parallel uploads of modified files from the cache (the related global
+flag --checkers has no effect on the VFS).

 --transfers int Number of file transfers to run in parallel (default 4)

@@ -5822,23 +6175,22 @@ only by case. Usually file systems on macOS are case-insensitive. It is
possible to make macOS file systems case-sensitive but that is not the
default.
-The --vfs-case-insensitive mount flag controls how rclone handles these +The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the -mounted file system as-is. If the flag is "true" (or appears without a -value on command line), rclone may perform a "fixup" as explained below. +remote as-is. If the flag is "true" (or appears without a value on the +command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. If an argument -refers to an existing file with exactly the same name, then the case of -the existing file on the disk will be used. However, if a file name with -exactly the same name is not found but a name differing only by case -exists, rclone will transparently fixup the name. This fixup happens -only when an existing file is requested. Case sensitivity of file names -created anew by rclone is controlled by an underlying mounted file -system. +different than what is stored on the remote. If an argument refers to an +existing file with exactly the same name, then the case of the existing +file on the disk will be used. However, if a file name with exactly the +same name is not found but a name differing only by case exists, rclone +will transparently fixup the name. This fixup happens only when an +existing file is requested. Case sensitivity of file names created anew +by rclone is controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the -target) may differ from case sensitivity of a file system mounted by +target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. @@ -5847,6 +6199,14 @@ depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. 
If the flag is provided without a value, then it is "true". +VFS Disk Options + +This flag allows you to manually set the statistics about the filing +system. It can be useful when those statistics cannot be read correctly +automatically. + + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + Alternate report of used bytes Some backends, most notably S3, do not report the amount of bytes used. @@ -5882,7 +6242,7 @@ Options --no-seek Don't allow seeking in files --pass string Password for authentication --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) - --read-only Mount read-only + --read-only Only allow read-only access --realm string Realm for authentication --salt string Password hashing salt (default "dlPL2MqE") --server-read-timeout duration Timeout for server reading data (default 1h0m0s) @@ -5896,6 +6256,8 @@ Options --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) @@ -5916,9 +6278,9 @@ Serve the remote for restic's REST API. Synopsis -rclone serve restic implements restic's REST backend API over HTTP. 
This
-allows restic to use rclone as a data storage mechanism for cloud
-providers that restic does not support directly.
+Run a basic web server to serve a remote over restic's REST backend API
+over HTTP. This allows restic to use rclone as a data storage mechanism
+for cloud providers that restic does not support directly.

Restic is a command-line program for doing backups.

@@ -5944,7 +6306,7 @@

Where you can replace "backup" in the above by whatever path in the
remote you wish to use.

By default this will serve on "localhost:8080" you can change this with
-use of the "--addr" flag.
+use of the --addr flag.

You might wish to start this server on boot.

@@ -5992,7 +6354,7 @@ must end with /. Eg

Private repositories

-The "--private-repos" flag can be used to limit users to repositories
+The --private-repos flag can be used to limit users to repositories
starting with a path of //.

Server options

@@ -6016,11 +6378,12 @@ accept in the HTTP header.

rclone will serve from the root. If you used --baseurl "/rclone" then
rclone would serve from a URL starting with "/rclone/". This is useful
if you wish to proxy rclone serve. Rclone automatically inserts leading
-and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl
-"/rclone" and --baseurl "/rclone/" are all treated identically.
+and trailing "/" on --baseurl, so --baseurl "rclone",
+--baseurl "/rclone" and --baseurl "/rclone/" are all treated
+identically.

---template allows a user to specify a custom markup template for http
-and webdav serve functions. The server exports the following markup to
+--template allows a user to specify a custom markup template for HTTP
+and WebDAV serve functions. The server exports the following markup to
be used within the template to server pages:

-----------------------------------------------------------------------

@@ -6092,8 +6455,8 @@

Use --realm to set the authentication realm.

SSL/TLS

-By default this will serve over http. If you want you can serve over
-https.
You will need to supply the --cert and --key flags. If you wish
+By default this will serve over HTTP. If you want you can serve over
+HTTPS. You will need to supply the --cert and --key flags. If you wish
to do client side certificate validation then you will need to supply
--client-ca also.

@@ -6137,9 +6500,8 @@

Serve the remote over SFTP.

Synopsis

-rclone serve sftp implements an SFTP server to serve the remote over
-SFTP. This can be used with an SFTP client or you can make a remote of
-type sftp to use with it.
+Run an SFTP server to serve a remote over SFTP. This can be used with an
+SFTP client or you can make a remote of type sftp to use with it.

You can use the filter flags (e.g. --include, --exclude) to control what
is served.

@@ -6161,13 +6523,13 @@ command when paired with the rclone sftp backend.

If you don't supply a host --key then rclone will generate rsa, ecdsa
and ed25519 variants, and cache them for later use in rclone's cache
-directory (see "rclone help flags cache-dir") in the "serve-sftp"
+directory (see rclone help flags cache-dir) in the "serve-sftp"
directory.

By default the server binds to localhost:2022 - if you want it to be
-reachable externally then supply "--addr :2022" for example.
+reachable externally then supply --addr :2022 for example.

-Note that the default of "--vfs-cache-mode off" is fine for the rclone
+Note that the default of --vfs-cache-mode off is fine for the rclone
sftp backend, but it may not be with other SFTP clients.

If --stdio is specified, rclone will serve SFTP over stdio, which can be

@@ -6175,7 +6537,7 @@ used with sshd via ~/.ssh/authorized_keys, for example:

    restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ...

-On the client you need to set "--transfers 1" when using --stdio.
+On the client you need to set --transfers 1 when using --stdio.
Otherwise multiple instances of the rclone server are started by OpenSSH
which can lead to "corrupted on transfer" errors.
This is the case because the client chooses indiscriminately which server to send

@@ -6205,7 +6567,7 @@ VFS Directory Cache

Using the --dir-cache-time flag, you can control how long a directory
should be considered up to date and not refreshed from the backend.
-Changes made through the mount will appear immediately or invalidate the
+Changes made through the VFS will appear immediately or invalidate the
cache.

    --dir-cache-time duration   Time to cache directory entries for (default 5m0s)

@@ -6358,6 +6720,37 @@ FAT/exFAT do not. Rclone will perform very badly if the cache directory
is on a filesystem which doesn't support sparse files and it will log an
ERROR message if one is detected.

+Fingerprinting
+
+Various parts of the VFS use fingerprinting to see if a local file copy
+has changed relative to a remote file. Fingerprints are made from:
+
+- size
+- modification time
+- hash
+
+where available on an object.
+
+On some backends some of these attributes are slow to read (they take an
+extra API call per object, or extra work per object).
+
+For example hash is slow with the local and sftp backends as they have
+to read the entire file and hash it, and modtime is slow with the s3,
+swift, ftp and qingstor backends because they need to do an extra API
+call to fetch it.
+
+If you use the --vfs-fast-fingerprint flag then rclone will not include
+the slow operations in the fingerprint. This makes the fingerprinting
+less accurate but much faster and will improve the opening time of
+cached files.
+
+If you are running a vfs cache over local, s3 or swift backends then
+using this flag is recommended.
+
+Note that if you change the value of this flag, the fingerprints of the
+files in the cache may be invalidated and the files will need to be
+downloaded again.
+
VFS Chunked Reading

When rclone reads files from a remote it reads them in chunks. This

@@ -6398,7 +6791,7 @@ of the modification time takes a transaction.

    --no-checksum     Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. - --read-only Mount read-only. + --read-only Only allow read-only access. Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write @@ -6410,8 +6803,8 @@ cache file. When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of -parallel uploads of modified files from cache (the related global flag ---checkers have no effect on mount). +parallel uploads of modified files from the cache (the related global +flag --checkers has no effect on the VFS). --transfers int Number of file transfers to run in parallel (default 4) @@ -6429,23 +6822,22 @@ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. -The --vfs-case-insensitive mount flag controls how rclone handles these +The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the -mounted file system as-is. If the flag is "true" (or appears without a -value on command line), rclone may perform a "fixup" as explained below. +remote as-is. If the flag is "true" (or appears without a value on the +command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. If an argument -refers to an existing file with exactly the same name, then the case of -the existing file on the disk will be used. However, if a file name with -exactly the same name is not found but a name differing only by case -exists, rclone will transparently fixup the name. This fixup happens -only when an existing file is requested. 
Case sensitivity of file names -created anew by rclone is controlled by an underlying mounted file -system. +different than what is stored on the remote. If an argument refers to an +existing file with exactly the same name, then the case of the existing +file on the disk will be used. However, if a file name with exactly the +same name is not found but a name differing only by case exists, rclone +will transparently fixup the name. This fixup happens only when an +existing file is requested. Case sensitivity of file names created anew +by rclone is controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the -target) may differ from case sensitivity of a file system mounted by +target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. @@ -6454,6 +6846,14 @@ depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +VFS Disk Options + +This flag allows you to manually set the statistics about the filing +system. It can be useful when those statistics cannot be read correctly +automatically. + + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + Alternate report of used bytes Some backends, most notably S3, do not report the amount of bytes used. 
@@ -6558,7 +6958,7 @@ Options --no-seek Don't allow seeking in files --pass string Password for authentication --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) - --read-only Mount read-only + --read-only Only allow read-only access --stdio Run an sftp server on run stdin/stdout --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) @@ -6568,6 +6968,8 @@ Options --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) @@ -6584,16 +6986,15 @@ SEE ALSO rclone serve webdav -Serve remote:path over webdav. +Serve remote:path over WebDAV. Synopsis -rclone serve webdav implements a basic webdav server to serve the remote -over HTTP via the webdav protocol. This can be viewed with a webdav -client, through a web browser, or you can make a remote of type webdav -to read and write it. +Run a basic WebDAV server to serve a remote over HTTP via the WebDAV +protocol. This can be viewed with a WebDAV client, through a web +browser, or you can make a remote of type WebDAV to read and write it. 
-Webdav options +WebDAV options --etag-hash @@ -6602,9 +7003,7 @@ on the ModTime and Size of the object. If this flag is set to "auto" then rclone will choose the first supported hash on the backend or you can use a named hash such as "MD5" -or "SHA-1". - -Use "rclone hashsum" to see the full list. +or "SHA-1". Use the hashsum command to see the full list. Server options @@ -6627,11 +7026,12 @@ accept in the HTTP header. rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading -and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl -"/rclone" and --baseurl "/rclone/" are all treated identically. +and trailing "/" on --baseurl, so --baseurl "rclone", +--baseurl "/rclone" and --baseurl "/rclone/" are all treated +identically. ---template allows a user to specify a custom markup template for http -and webdav serve functions. The server exports the following markup to +--template allows a user to specify a custom markup template for HTTP +and WebDAV serve functions. The server exports the following markup to be used within the template to server pages: ----------------------------------------------------------------------- @@ -6703,8 +7103,8 @@ Use --realm to set the authentication realm. SSL/TLS -By default this will serve over http. If you want you can serve over -https. You will need to supply the --cert and --key flags. If you wish +By default this will serve over HTTP. If you want you can serve over +HTTPS. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also. @@ -6731,7 +7131,7 @@ VFS Directory Cache Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. 
-Changes made through the mount will appear immediately or invalidate the
+Changes made through the VFS will appear immediately or invalidate the
cache.

    --dir-cache-time duration   Time to cache directory entries for (default 5m0s)

@@ -6884,6 +7284,37 @@ FAT/exFAT do not. Rclone will perform very badly if the cache directory
is on a filesystem which doesn't support sparse files and it will log an
ERROR message if one is detected.

+Fingerprinting
+
+Various parts of the VFS use fingerprinting to see if a local file copy
+has changed relative to a remote file. Fingerprints are made from:
+
+- size
+- modification time
+- hash
+
+where available on an object.
+
+On some backends some of these attributes are slow to read (they take an
+extra API call per object, or extra work per object).
+
+For example hash is slow with the local and sftp backends as they have
+to read the entire file and hash it, and modtime is slow with the s3,
+swift, ftp and qingstor backends because they need to do an extra API
+call to fetch it.
+
+If you use the --vfs-fast-fingerprint flag then rclone will not include
+the slow operations in the fingerprint. This makes the fingerprinting
+less accurate but much faster and will improve the opening time of
+cached files.
+
+If you are running a vfs cache over local, s3 or swift backends then
+using this flag is recommended.
+
+Note that if you change the value of this flag, the fingerprints of the
+files in the cache may be invalidated and the files will need to be
+downloaded again.
+
VFS Chunked Reading

When rclone reads files from a remote it reads them in chunks. This

@@ -6924,7 +7355,7 @@ of the modification time takes a transaction.

    --no-checksum     Don't compare checksums on up/download.
    --no-modtime      Don't read/write the modification time (can speed things up).
    --no-seek         Don't allow seeking in files.
-    --read-only       Mount read-only.
+    --read-only       Only allow read-only access.

Sometimes rclone is delivered reads or writes out of order.
Rather than seeking rclone will wait a short time for the in sequence read or write @@ -6936,8 +7367,8 @@ cache file. When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of -parallel uploads of modified files from cache (the related global flag ---checkers have no effect on mount). +parallel uploads of modified files from the cache (the related global +flag --checkers has no effect on the VFS). --transfers int Number of file transfers to run in parallel (default 4) @@ -6955,23 +7386,22 @@ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. -The --vfs-case-insensitive mount flag controls how rclone handles these +The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the -mounted file system as-is. If the flag is "true" (or appears without a -value on command line), rclone may perform a "fixup" as explained below. +remote as-is. If the flag is "true" (or appears without a value on the +command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. If an argument -refers to an existing file with exactly the same name, then the case of -the existing file on the disk will be used. However, if a file name with -exactly the same name is not found but a name differing only by case -exists, rclone will transparently fixup the name. This fixup happens -only when an existing file is requested. Case sensitivity of file names -created anew by rclone is controlled by an underlying mounted file -system. +different than what is stored on the remote. If an argument refers to an +existing file with exactly the same name, then the case of the existing +file on the disk will be used. 
However, if a file name with exactly the +same name is not found but a name differing only by case exists, rclone +will transparently fixup the name. This fixup happens only when an +existing file is requested. Case sensitivity of file names created anew +by rclone is controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the -target) may differ from case sensitivity of a file system mounted by +target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. @@ -6980,6 +7410,14 @@ depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +VFS Disk Options + +This flag allows you to manually set the statistics about the filing +system. It can be useful when those statistics cannot be read correctly +automatically. + + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + Alternate report of used bytes Some backends, most notably S3, do not report the amount of bytes used. 
@@ -7089,7 +7527,7 @@ Options --no-seek Don't allow seeking in files --pass string Password for authentication --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) - --read-only Mount read-only + --read-only Only allow read-only access --realm string Realm for authentication (default "rclone") --server-read-timeout duration Timeout for server reading data (default 1h0m0s) --server-write-timeout duration Timeout for server writing data (default 1h0m0s) @@ -7102,6 +7540,8 @@ Options --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) @@ -7188,6 +7628,8 @@ SEE ALSO - rclone test histogram - Makes a histogram of file name characters. - rclone test info - Discovers file name or other limitations for paths. +- rclone test makefile - Make files with random contents of the size + given - rclone test makefiles - Make a random file hierarchy in a directory - rclone test memory - Load all the objects at remote:path into memory and report memory stats. 
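The test commands listed above generate files with random contents; they take a seed so the generation is reproducible. As a rough illustration of what a seeded generator gives you (a hypothetical Python sketch with made-up names, not rclone's actual algorithm):

```python
import random

def make_file_bytes(size: int, seed: int, ascii_only: bool = False) -> bytes:
    # Hypothetical sketch: deterministic "random" contents derived from a
    # seed, illustrating why re-running generation with the same seed
    # reproduces identical files. Not rclone's implementation.
    rng = random.Random(seed)
    if ascii_only:
        # restrict to printable ASCII, in the spirit of an --ascii option
        return bytes(rng.randint(0x20, 0x7E) for _ in range(size))
    return bytes(rng.randint(0, 255) for _ in range(size))

a = make_file_bytes(1024, seed=1)
b = make_file_bytes(1024, seed=1)
assert a == b                          # same seed, same contents
assert a != make_file_bytes(1024, seed=2)  # different seed, different contents
```

The same-seed/same-output property is what makes seeded test hierarchies comparable across runs.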
@@ -7265,6 +7707,28 @@ SEE ALSO

- rclone test - Run a test command

+rclone test makefile
+
+Make files with random contents of the size given
+
+    rclone test makefile []+ [flags]
+
+Options
+
+      --ascii      Fill files with random ASCII printable bytes only
+      --chargen    Fill files with an ASCII chargen pattern
+  -h, --help       help for makefile
+      --pattern    Fill files with a periodic pattern
+      --seed int   Seed for the random number generator (0 for random) (default 1)
+      --sparse     Make the files sparse (appear to be filled with ASCII 0x00)
+      --zero       Fill files with ASCII 0x00
+
+See the global flags page for global options not listed here.
+
+SEE ALSO
+
+- rclone test - Run a test command
+
rclone test makefiles

Make a random file hierarchy in a directory

@@ -7273,6 +7737,8 @@ Make a random file hierarchy in a directory

Options

+      --ascii                      Fill files with random ASCII printable bytes only
+      --chargen                    Fill files with an ASCII chargen pattern
      --files int                  Number of files to create (default 1000)
      --files-per-directory int    Average number of files per directory (default 10)
  -h, --help                       help for makefiles

@@ -7280,7 +7746,10 @@ Options

      --max-name-length int        Maximum size of file names (default 12)
      --min-file-size SizeSuffix   Minimum size of file to create
      --min-name-length int        Minimum size of file names (default 4)
+      --pattern                    Fill files with a periodic pattern
      --seed int                   Seed for the random number generator (0 for random) (default 1)
+      --sparse                     Make the files sparse (appear to be filled with ASCII 0x00)
+      --zero                       Fill files with ASCII 0x00

See the global flags page for global options not listed here.

@@ -7369,11 +7838,14 @@ For example

    1 directories, 5 files

You can use any of the filtering options with the tree command (e.g.
---include and --exclude). You can also use --fast-list.
+--include and --exclude). You can also use --fast-list.

The tree command has many options for controlling the listing which are
-compatible with the tree command.
Note that not all of them have short
-options as they conflict with rclone's short options.
+compatible with the tree command, for example you can include file sizes
+with --size. Note that not all of them have short options as they
+conflict with rclone's short options.
+
+For a more interactive navigation of the remote see the ncdu command.

    rclone tree remote:path [flags]

@@ -7685,6 +8157,147 @@ This can be used when scripting to make aged backups efficiently, e.g.

    rclone sync -i remote:current-backup remote:previous-backup
    rclone sync -i /path/to/files remote:current-backup

+Metadata support
+
+Metadata is data about a file which isn't the contents of the file.
+Normally rclone only preserves the modification time and the content
+(MIME) type where possible.
+
+Rclone supports preserving all the available metadata on files (not
+directories) when using the --metadata or -M flag.
+
+Exactly what metadata is supported and what that support means depends
+on the backend. Backends that support metadata have a metadata section
+in their docs and are listed in the features table (e.g. local, s3).
+
+Rclone only supports a one-time sync of metadata. This means that
+metadata will be synced from the source object to the destination object
+only when the source object has changed and needs to be re-uploaded. If
+the metadata subsequently changes on the source object without changing
+the object itself then it won't be synced to the destination object.
+This is in line with the way rclone syncs Content-Type without the
+--metadata flag.
+
+Using --metadata when syncing from local to local will preserve file
+attributes such as file mode, owner, extended attributes (not Windows).
+
+Note that arbitrary metadata may be added to objects using the
+--metadata-set key=value flag when the object is first uploaded. This
+flag can be repeated as many times as necessary.
+
+Types of metadata
+
+Metadata is divided into two types: system metadata and user metadata.
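As an illustration of the two types, here is a hypothetical metadata dictionary for a single object. The mtime and content-type keys are standard system keys described later in this section; the last key is arbitrary user metadata, and all values here are made up for the example:

```python
# Hypothetical per-object metadata: a flat dictionary with string keys
# and string values. Keys a backend recognises are system metadata;
# anything else travels as user metadata. Values are illustrative only.
metadata = {
    "mtime": "2006-01-02T15:04:05.999999999Z07:00",  # system key: RFC 3339 time
    "content-type": "text/plain",                     # system key
    "my-project-tag": "backup-2022",                  # user metadata (made up)
}

# Keys and values are always strings, and key names are lower case.
assert all(isinstance(k, str) and isinstance(v, str) for k, v in metadata.items())
assert all(k == k.lower() for k in metadata)
```

The same dictionary shape carries both kinds of metadata; which keys count as "system" depends on the backend.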
+
+Metadata which the backend uses itself is called system metadata. For
+example on the local backend the system metadata uid will store the user
+ID of the file when used on a unix based platform.
+
+Arbitrary metadata is called user metadata and this can be set however
+is desired.
+
+When objects are copied from backend to backend, rclone will attempt to
+interpret system metadata if it is supplied. Metadata may change from
+being user metadata to system metadata as objects are copied between
+different backends. For example copying an object from s3 sets the
+content-type metadata. In a backend which understands this (like
+azureblob) this will become the Content-Type of the object. In a backend
+which doesn't understand this (like the local backend) this will become
+user metadata. However should the local object be copied back to s3, the
+Content-Type will be set correctly.
+
+Metadata framework
+
+Rclone implements a metadata framework which can read metadata from an
+object and write it to the object when (and only when) it is being
+uploaded.
+
+This metadata is stored as a dictionary with string keys and string
+values.
+
+There are some limits on the names of the keys (these may be clarified
+further in the future):
+
+- must be lower case
+- may be a-z 0-9 containing . - or _
+- length is backend dependent
+
+Each backend can provide system metadata that it understands. Some
+backends can also store arbitrary user metadata.
+
+Where possible the key names are standardized, so, for example, it is
+possible to copy object metadata from s3 to azureblob and the metadata
+will be translated appropriately.
+
+Some backends have limits on the size of the metadata and rclone will
+give errors on upload if they are exceeded.
+
+Metadata preservation
+
+The goal of the implementation is to
+
+1. Preserve metadata if at all possible
+2.
Interpret metadata if at all possible
+
+The consequence of 1 is that you can copy an S3 object to a local disk
+then back to S3 losslessly. Likewise you can copy a local file with file
+attributes and xattrs from local disk to s3 and back again losslessly.
+
+The consequence of 2 is that you can copy an S3 object with metadata to
+Azureblob (say) and have the metadata appear on the Azureblob object
+also.
+
+Standard system metadata
+
+Here is a table of standard system metadata which, if appropriate, a
+backend may implement.
+
+    -------------------------------------------------------------------------------------------
+    key                    description                       example
+    ---------------------  --------------------------------  -------------------------------------
+    mode                   File type and mode: octal,        0100664
+                           unix style
+
+    uid                    User ID of owner: decimal         500
+                           number
+
+    gid                    Group ID of owner: decimal        500
+                           number
+
+    rdev                   Device ID (if special file)       0
+                           => hexadecimal
+
+    atime                  Time of last access: RFC 3339     2006-01-02T15:04:05.999999999Z07:00
+
+    mtime                  Time of last modification:        2006-01-02T15:04:05.999999999Z07:00
+                           RFC 3339
+
+    btime                  Time of file creation (birth):    2006-01-02T15:04:05.999999999Z07:00
+                           RFC 3339
+
+    cache-control          Cache-Control header              no-cache
+
+    content-disposition    Content-Disposition header        inline
+
+    content-encoding       Content-Encoding header           gzip
+
+    content-language       Content-Language header           en-US
+
+    content-type           Content-Type header               text/plain
+    -------------------------------------------------------------------------------------------
+
+The metadata keys mtime and content-type will take precedence if
+supplied in the metadata over reading the Content-Type or modification
+time of the source object.
+
+Hashes are not included in system metadata as there is a well defined
+way of reading those already.
+
Options

Rclone has a number of options to control its behaviour.

@@ -7896,12 +8509,22 @@ held in memory before the transfers start.
--checkers=N

-The number of checkers to run in parallel. Checkers do the equality
-checking of files during a sync. For some storage systems (e.g. S3,
-Swift, Dropbox) this can take a significant amount of time so they are
-run in parallel.
+Originally this controlled just the number of file checkers to run in
+parallel, e.g. by rclone copy. It is now a fairly universal parallelism
+control used by rclone in several places.

-The default is to run 8 checkers in parallel.
+Note: checkers do the equality checking of files during a sync. For some
+storage systems (e.g. S3, Swift, Dropbox) this can take a significant
+amount of time so they are run in parallel.
+
+The default is to run 8 checkers in parallel. However, in the case of
+slow-reacting backends you may need to lower (rather than increase) this
+default by setting --checkers to 4 or fewer threads. This is especially
+advised if you are experiencing backend server crashes during the file
+checking phase (e.g. on subsequent or top-up backups where little or no
+file copying is done and checking takes up most of the time). Increase
+this setting only with utmost care, while monitoring your server health
+and file checking throughput.

-c, --checksum

@@ -8047,8 +8670,9 @@ See --compare-dest and --backup-dir.

--dedupe-mode MODE

Mode to run dedupe command in. One of interactive, skip, first, newest,
-oldest, rename. The default is interactive. See the dedupe command for
-more information as to what these options mean.
+oldest, rename. The default is interactive.
+See the dedupe command for more information as to what these options
+mean.

--disable FEATURE,FEATURE,...

@@ -8444,6 +9068,17 @@ When the limit is reached all transfers will stop immediately.

Rclone will exit with exit code 8 if the transfer limit is reached.

+--metadata / -M
+
+Setting this flag enables rclone to copy the metadata from the source to
+the destination. For local backends this is ownership, permissions,
+xattr etc. See the metadata section for more info.
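As noted in the metadata section, arbitrary metadata can be attached with repeated --metadata-set key=value flags. A sketch of how such repeated key=value arguments could be folded into one metadata dictionary (a hypothetical helper for illustration, not rclone's implementation):

```python
def fold_metadata_args(args):
    # Fold repeated "key=value" strings (as supplied by repeated
    # --metadata-set flags) into a single dictionary. Later duplicates
    # overwrite earlier ones; keys are normalised to lower case, as the
    # metadata framework requires lower-case key names.
    # Hypothetical helper for illustration only.
    meta = {}
    for arg in args:
        key, _, value = arg.partition("=")
        meta[key.lower()] = value
    return meta

meta = fold_metadata_args(["content-type=text/plain", "owner=alice"])
assert meta == {"content-type": "text/plain", "owner": "alice"}
```

Each flag contributes one key; repeating a key simply replaces the earlier value.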
+
+--metadata-set key=value
+
+Add metadata key = value when uploading. This can be repeated as many
+times as required. See the metadata section for more info.
+
 --cutoff-mode=hard|soft|cautious

 This modifies the behavior of --max-transfer. Defaults to
@@ -9047,6 +9682,9 @@ timeouts or bigger if you have lots of bandwidth and a fast remote.

 The default is to run 4 file transfers in parallel.

+Look at --multi-thread-streams if you would like to control single file
+transfers.
+
 -u, --update

 This forces rclone to skip any files which exist on the destination and
@@ -9119,6 +9757,9 @@ With -vv rclone will become very verbose telling you about every file
it considers and transfers. Please send bug reports with a log with this
setting.

+When setting verbosity as an environment variable, use RCLONE_VERBOSE=1
+or RCLONE_VERBOSE=2 for -v and -vv respectively.
+
 -V, --version

 Prints the version number

@@ -9360,6 +10001,7 @@ For the filtering options

 - --filter-from
 - --exclude
 - --exclude-from
+- --exclude-if-present
 - --include
 - --include-from
 - --files-from

@@ -9466,6 +10108,9 @@ the environment variable setting.

 Or to always use the trash in drive --drive-use-trash, set
 RCLONE_DRIVE_USE_TRASH=true.

+Verbosity is slightly different: the environment variable equivalent of
+--verbose or -v is RCLONE_VERBOSE=1, or for -vv, RCLONE_VERBOSE=2.
+
 The same parser is used for the options and the environment variables so
 they take exactly the same form.

@@ -9647,6 +10292,29 @@ Now transfer it to the remote box (scp, cut paste, ftp, sftp, etc.) and
place it in the correct place (use rclone config file on the remote box
to find out where).

+Configuring using SSH Tunnel
+
+Linux and macOS users can use an SSH tunnel to redirect port 53682 on
+the headless box to the local machine with the following command:
+
+    ssh -L localhost:53682:localhost:53682 username@remote_server
+
+Then on the headless box run rclone config and answer Y to the
+Use auto config? question.
+
+    ...
+ Remote config + Use auto config? + * Say Y if not sure + * Say N if you are working on a remote or headless machine + y) Yes (default) + n) No + y/n> y + +Then copy and paste the auth url +http://127.0.0.1:53682/auth?state=xxxxxxxxxxxx to the browser on your +local machine, complete the auth and it is done. + Filtering, includes and excludes Filter flags determine which files rclone sync, move, ls, lsl, md5sum, @@ -9774,7 +10442,11 @@ regular expression syntax. The regular expressions used are as defined in the Go regular expression reference. Regular expressions should be enclosed in {{ }}. They will match only the last path segment if the glob doesn't start with / or the -whole path name if it does. +whole path name if it does. Note that rclone does not attempt to parse +the supplied regular expression, meaning that using any regular +expression filter will prevent rclone from using directory filter rules, +as it will instead check every path against the supplied regular +expression(s). Here is how the {{regexp}} is transformed into an full regular expression to match the entire path: @@ -9906,10 +10578,14 @@ remote by avoiding listing unnecessary directories. Whether optimisation is desirable depends on the specific filter rules and source remote content. +If any regular expression filters are in use, then no directory +recursion optimisation is possible, as rclone must check every path +against the supplied regular expression(s). + Directory recursion optimisation occurs if either: - A source remote does not support the rclone ListR primitive. local, - sftp, Microsoft OneDrive and WebDav do not support ListR. Google + sftp, Microsoft OneDrive and WebDAV do not support ListR. Google Drive and most bucket type storage do. 
Full list - On other remotes (those that support ListR), if the rclone command @@ -10380,7 +11056,8 @@ Exclude directory based on a file The --exclude-if-present flag controls whether a directory is within the scope of an rclone command based on the presence of a named file within -it. +it. The flag can be repeated to check for multiple file names, presence +of any of them will exclude the directory. This flag has a priority over other filter flags. @@ -10394,8 +11071,6 @@ E.g. for the following directory structure: The command rclone ls --exclude-if-present .ignore dir1 does not list dir3, file3 or .ignore. ---exclude-if-present can only be used once in an rclone command. - Common pitfalls The most frequent filter support issues on the rclone forum are: @@ -10518,7 +11193,7 @@ Remote controlling rclone with its API If rclone is run with the --rc flag then it starts an HTTP server which can be used to remote control rclone using its API. -You can either use the rclone rc command to access the API or use HTTP +You can either use the rc command to access the API or use HTTP directly. If you just want to run a remote control then see the rcd command. @@ -10666,6 +11341,16 @@ use these credentials in the request. Default Off. +--rc-baseurl + +Prefix for URLs. + +Default is root + +--rc-template + +User-specified template. + Accessing the remote control via the rclone rc command Rclone itself implements the remote control protocol in its rclone rc @@ -11023,7 +11708,7 @@ This takes the following parameters: - state - state to restart with - used with continue - result - result to restart with - used with continue -See the config create command command for more information on the above. +See the config create command for more information on the above. Authentication is required for this call. @@ -11033,7 +11718,7 @@ Parameters: - name - name of remote to delete -See the config delete command command for more information on the above. 
+See the config delete command for more information on the above. Authentication is required for this call. @@ -11043,7 +11728,7 @@ Returns a JSON object: - key: value Where keys are remote names and values are the config parameters. -See the config dump command command for more information on the above. +See the config dump command for more information on the above. Authentication is required for this call. @@ -11053,7 +11738,7 @@ Parameters: - name - name of remote to get -See the config dump command command for more information on the above. +See the config dump command for more information on the above. Authentication is required for this call. @@ -11061,7 +11746,7 @@ config/listremotes: Lists the remotes in the config file. Returns - remotes - array of remote names -See the listremotes command command for more information on the above. +See the listremotes command for more information on the above. Authentication is required for this call. @@ -11072,8 +11757,7 @@ This takes the following parameters: - name - name of remote - parameters - a map of { "key": "value" } pairs -See the config password command command for more information on the -above. +See the config password command for more information on the above. Authentication is required for this call. @@ -11081,8 +11765,7 @@ config/providers: Shows how providers are configured in the config file. Returns a JSON object: - providers - array of objects -See the config providers command command for more information on the -above. +See the config providers command for more information on the above. Authentication is required for this call. @@ -11102,7 +11785,7 @@ This takes the following parameters: - state - state to restart with - used with continue - result - result to restart with - used with continue -See the config update command command for more information on the above. +See the config update command for more information on the above. Authentication is required for this call. 
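The config/create and config/update calls above take their settings as JSON. As a sketch, a plausible request body for config/create might look like the following; the remote name mys3, its parameters, and the opt key are illustrative assumptions, not values from the manual:

```python
# Hedged sketch: a JSON body for the config/create rc call, as it could
# be passed to "rclone rc" or POSTed to the rc HTTP server.
# All names and values below are examples only.
import json

body = {
    "name": "mys3",                                   # name of remote
    "type": "s3",                                     # backend type
    "parameters": {"provider": "AWS", "env_auth": True},
    "opt": {"nonInteractive": True},                  # assumed option name
}
encoded = json.dumps(body, sort_keys=True)
print(encoded)
```

Check the config create command documentation for the authoritative parameter names before relying on this shape.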
@@ -11552,7 +12235,7 @@ This takes the following parameters: The result is as returned from rclone about --json -See the about command command for more information on the above. +See the about command for more information on the above. Authentication is required for this call. @@ -11562,7 +12245,7 @@ This takes the following parameters: - fs - a remote name string e.g. "drive:" -See the cleanup command command for more information on the above. +See the cleanup command for more information on the above. Authentication is required for this call. @@ -11586,8 +12269,9 @@ This takes the following parameters: - remote - a path within that remote e.g. "dir" - url - string, URL to read from - autoFilename - boolean, set to true to retrieve destination file - name from url See the copyurl command command for more information - on the above. + name from url + +See the copyurl command for more information on the above. Authentication is required for this call. @@ -11597,7 +12281,7 @@ This takes the following parameters: - fs - a remote name string e.g. "drive:" -See the delete command command for more information on the above. +See the delete command for more information on the above. Authentication is required for this call. @@ -11608,7 +12292,7 @@ This takes the following parameters: - fs - a remote name string e.g. "drive:" - remote - a path within that remote e.g. "dir" -See the deletefile command command for more information on the above. +See the deletefile command for more information on the above. Authentication is required for this call. 
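Calls such as operations/deletefile above can also be made as plain HTTP POSTs. The sketch below only constructs the request without sending it; 127.0.0.1:5572 is rclone's default rc address, and the fs/remote values are placeholders:

```python
# Hedged sketch: forming (not sending) an operations/deletefile request
# against a local "rclone rcd" instance. The fs and remote values are
# placeholders for a configured remote.
import json
import urllib.request

payload = json.dumps({"fs": "drive:", "remote": "dir/file.txt"}).encode()
req = urllib.request.Request(
    "http://127.0.0.1:5572/operations/deletefile",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would perform the call while rcd is running.
print(req.get_method(), req.full_url)
```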
@@ -11621,46 +12305,103 @@ This takes the following parameters: This returns info about the remote passed in; { - // optional features and whether they are available or not - "Features": { - "About": true, - "BucketBased": false, - "CanHaveEmptyDirectories": true, - "CaseInsensitive": false, - "ChangeNotify": false, - "CleanUp": false, - "Copy": false, - "DirCacheFlush": false, - "DirMove": true, - "DuplicateFiles": false, - "GetTier": false, - "ListR": false, - "MergeDirs": false, - "Move": true, - "OpenWriterAt": true, - "PublicLink": false, - "Purge": true, - "PutStream": true, - "PutUnchecked": false, - "ReadMimeType": false, - "ServerSideAcrossConfigs": false, - "SetTier": false, - "SetWrapper": false, - "UnWrap": false, - "WrapFs": false, - "WriteMimeType": false - }, - // Names of hashes available - "Hashes": [ - "MD5", - "SHA-1", - "DropboxHash", - "QuickXorHash" - ], - "Name": "local", // Name as created - "Precision": 1, // Precision of timestamps in ns - "Root": "/", // Path as created - "String": "Local file system at /" // how the remote will appear in logs + // optional features and whether they are available or not + "Features": { + "About": true, + "BucketBased": false, + "BucketBasedRootOK": false, + "CanHaveEmptyDirectories": true, + "CaseInsensitive": false, + "ChangeNotify": false, + "CleanUp": false, + "Command": true, + "Copy": false, + "DirCacheFlush": false, + "DirMove": true, + "Disconnect": false, + "DuplicateFiles": false, + "GetTier": false, + "IsLocal": true, + "ListR": false, + "MergeDirs": false, + "MetadataInfo": true, + "Move": true, + "OpenWriterAt": true, + "PublicLink": false, + "Purge": true, + "PutStream": true, + "PutUnchecked": false, + "ReadMetadata": true, + "ReadMimeType": false, + "ServerSideAcrossConfigs": false, + "SetTier": false, + "SetWrapper": false, + "Shutdown": false, + "SlowHash": true, + "SlowModTime": false, + "UnWrap": false, + "UserInfo": false, + "UserMetadata": true, + "WrapFs": false, + "WriteMetadata": 
true, + "WriteMimeType": false + }, + // Names of hashes available + "Hashes": [ + "md5", + "sha1", + "whirlpool", + "crc32", + "sha256", + "dropbox", + "mailru", + "quickxor" + ], + "Name": "local", // Name as created + "Precision": 1, // Precision of timestamps in ns + "Root": "/", // Path as created + "String": "Local file system at /", // how the remote will appear in logs + // Information about the system metadata for this backend + "MetadataInfo": { + "System": { + "atime": { + "Help": "Time of last access", + "Type": "RFC 3339", + "Example": "2006-01-02T15:04:05.999999999Z07:00" + }, + "btime": { + "Help": "Time of file birth (creation)", + "Type": "RFC 3339", + "Example": "2006-01-02T15:04:05.999999999Z07:00" + }, + "gid": { + "Help": "Group ID of owner", + "Type": "decimal number", + "Example": "500" + }, + "mode": { + "Help": "File type and mode", + "Type": "octal, unix style", + "Example": "0100664" + }, + "mtime": { + "Help": "Time of last modification", + "Type": "RFC 3339", + "Example": "2006-01-02T15:04:05.999999999Z07:00" + }, + "rdev": { + "Help": "Device ID (if special file)", + "Type": "hexadecimal", + "Example": "1abc" + }, + "uid": { + "Help": "User ID of owner", + "Type": "decimal number", + "Example": "500" + } + }, + "Help": "Textual help string\n" + } } This command does not have a command line equivalent so use this @@ -11683,6 +12424,7 @@ This takes the following parameters: - noMimeType - If set don't show mime types - dirsOnly - If set only show directories - filesOnly - If set only show files + - metadata - If set return metadata of objects also - hashTypes - array of strings of hash types to show if showHash set @@ -11702,7 +12444,7 @@ This takes the following parameters: - fs - a remote name string e.g. "drive:" - remote - a path within that remote e.g. "dir" -See the mkdir command command for more information on the above. +See the mkdir command for more information on the above. Authentication is required for this call. 
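The operations/fsinfo response shown earlier can be inspected programmatically, for example to decide whether a backend supports metadata on both the read and write side. The JSON below is a trimmed stand-in for a real response, not output captured from rclone:

```python
# Hedged sketch: picking capability and metadata information out of an
# operations/fsinfo response. "raw" is an abbreviated example document.
import json

raw = '''{
  "Name": "local",
  "Precision": 1,
  "Features": {"ReadMetadata": true, "WriteMetadata": true, "SlowHash": true},
  "Hashes": ["md5", "sha1"],
  "MetadataInfo": {"System": {"mtime": {"Help": "Time of last modification",
                                        "Type": "RFC 3339"}}}
}'''
info = json.loads(raw)

# -M metadata sync needs both reading and writing support.
can_sync_metadata = (info["Features"].get("ReadMetadata")
                     and info["Features"].get("WriteMetadata"))
print(can_sync_metadata)
print(sorted(info["MetadataInfo"]["System"]))  # system metadata keys
```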
@@ -11732,7 +12474,7 @@ Returns: - url - URL of the resource -See the link command command for more information on the above. +See the link command for more information on the above. Authentication is required for this call. @@ -11743,7 +12485,7 @@ This takes the following parameters: - fs - a remote name string e.g. "drive:" - remote - a path within that remote e.g. "dir" -See the purge command command for more information on the above. +See the purge command for more information on the above. Authentication is required for this call. @@ -11754,7 +12496,7 @@ This takes the following parameters: - fs - a remote name string e.g. "drive:" - remote - a path within that remote e.g. "dir" -See the rmdir command command for more information on the above. +See the rmdir command for more information on the above. Authentication is required for this call. @@ -11764,8 +12506,9 @@ This takes the following parameters: - fs - a remote name string e.g. "drive:" - remote - a path within that remote e.g. "dir" -- leaveRoot - boolean, set to true not to delete the root See the - rmdirs command command for more information on the above. +- leaveRoot - boolean, set to true not to delete the root + +See the rmdirs command for more information on the above. Authentication is required for this call. @@ -11780,7 +12523,7 @@ Returns: - count - number of files - bytes - number of bytes in those files -See the size command command for more information on the above. +See the size command for more information on the above. Authentication is required for this call. @@ -11811,8 +12554,9 @@ This takes the following parameters: - fs - a remote name string e.g. "drive:" - remote - a path within that remote e.g. "dir" -- each part in body represents a file to be uploaded See the - uploadfile command command for more information on the above. +- each part in body represents a file to be uploaded + +See the uploadfile command for more information on the above. 
Authentication is required for this call. @@ -12035,7 +12779,7 @@ This takes the following parameters: - createEmptySrcDirs - create empty src directories on destination if set -See the copy command command for more information on the above. +See the copy command for more information on the above. Authentication is required for this call. @@ -12049,7 +12793,7 @@ This takes the following parameters: set - deleteEmptySrcDirs - delete empty src directories if set -See the move command command for more information on the above. +See the move command for more information on the above. Authentication is required for this call. @@ -12062,7 +12806,7 @@ This takes the following parameters: - createEmptySrcDirs - create empty src directories on destination if set -See the sync command command for more information on the above. +See the sync command for more information on the above. Authentication is required for this call. @@ -12380,47 +13124,49 @@ Features Here is an overview of the major features of each cloud storage system. 
- Name Hash ModTime Case Insensitive Duplicate Files MIME Type - ------------------------------ ------------- --------- ------------------ ----------------- ----------- - 1Fichier Whirlpool No No Yes R - Akamai Netstorage MD5, SHA256 Yes No No R - Amazon Drive MD5 No Yes No R - Amazon S3 (or S3 compatible) MD5 Yes No No R/W - Backblaze B2 SHA1 Yes No No R/W - Box SHA1 Yes Yes No - - Citrix ShareFile MD5 Yes Yes No - - Dropbox DBHASH ¹ Yes Yes No - - Enterprise File Fabric - Yes Yes No R/W - FTP - No No No - - Google Cloud Storage MD5 Yes No No R/W - Google Drive MD5 Yes No Yes R/W - Google Photos - No No Yes R - HDFS - Yes No No - - HTTP - No No No R - Hubic MD5 Yes No No R/W - Jottacloud MD5 Yes Yes No R - Koofr MD5 No Yes No - - Mail.ru Cloud Mailru ⁶ Yes Yes No - - Mega - No No Yes - - Memory MD5 Yes No No - - Microsoft Azure Blob Storage MD5 Yes No No R/W - Microsoft OneDrive SHA1 ⁵ Yes Yes No R - OpenDrive MD5 Yes Yes Partial ⁸ - - OpenStack Swift MD5 Yes No No R/W - pCloud MD5, SHA1 ⁷ Yes No No W - premiumize.me - No Yes No R - put.io CRC-32 Yes No Yes R - QingStor MD5 No No No R/W - Seafile - No No No - - SFTP MD5, SHA1 ² Yes Depends No - - Sia - No No No - - SugarSync - No No No - - Storj - Yes No No - - Uptobox - No No Yes - - WebDAV MD5, SHA1 ³ Yes ⁴ Depends No - - Yandex Disk MD5 Yes No No R - Zoho WorkDrive - No No No - - The local filesystem All Yes Depends No - + Name Hash ModTime Case Insensitive Duplicate Files MIME Type Metadata + ------------------------------ ------------------ --------- ------------------ ----------------- ----------- ---------- + 1Fichier Whirlpool - No Yes R - + Akamai Netstorage MD5, SHA256 R/W No No R - + Amazon Drive MD5 - Yes No R - + Amazon S3 (or S3 compatible) MD5 R/W No No R/W RWU + Backblaze B2 SHA1 R/W No No R/W - + Box SHA1 R/W Yes No - - + Citrix ShareFile MD5 R/W Yes No - - + Dropbox DBHASH ¹ R Yes No - - + Enterprise File Fabric - R/W Yes No R/W - + FTP - R/W ¹⁰ No No - - + Google Cloud Storage MD5 R/W No No R/W 
- + Google Drive MD5 R/W No Yes R/W - + Google Photos - - No Yes R - + HDFS - R/W No No - - + HiDrive HiDrive ¹² R/W No No - - + HTTP - R No No R - + Hubic MD5 R/W No No R/W - + Internet Archive MD5, SHA1, CRC32 R/W ¹¹ No No - RWU + Jottacloud MD5 R/W Yes No R - + Koofr MD5 - Yes No - - + Mail.ru Cloud Mailru ⁶ R/W Yes No - - + Mega - - No Yes - - + Memory MD5 R/W No No - - + Microsoft Azure Blob Storage MD5 R/W No No R/W - + Microsoft OneDrive SHA1 ⁵ R/W Yes No R - + OpenDrive MD5 R/W Yes Partial ⁸ - - + OpenStack Swift MD5 R/W No No R/W - + pCloud MD5, SHA1 ⁷ R No No W - + premiumize.me - - Yes No R - + put.io CRC-32 R/W No Yes R - + QingStor MD5 - ⁹ No No R/W - + Seafile - - No No - - + SFTP MD5, SHA1 ² R/W Depends No - - + Sia - - No No - - + SugarSync - - No No - - + Storj - R No No - - + Uptobox - - No Yes - - + WebDAV MD5, SHA1 ³ R ⁴ Depends No - - + Yandex Disk MD5 R/W No No R - + Zoho WorkDrive - - No No - - + The local filesystem All R/W Depends No - RWU Notes @@ -12447,6 +13193,17 @@ platform has been determined to allow duplicate files, and it is possible to create them with rclone. It may be that this is a mistake or an unsupported feature. +⁹ QingStor does not support SetModTime for objects bigger than 5 GiB. + +¹⁰ FTP supports modtimes for the major FTP servers, and also others if +they advertised required protocol extensions. See this for more details. + +¹¹ Internet Archive requires option wait_archive to be set to a non-zero +value for full modtime support. + +¹² HiDrive supports its own custom hash. It combines SHA1 sums for each +4 KiB block hierarchically to a single top-level sum. + Hash The cloud storage system supports various hash types of the objects. The @@ -12459,13 +13216,34 @@ systems they must support a common hash type. ModTime -The cloud storage system supports setting modification times on objects. -If it does then this enables a using the modification times as part of -the sync. 
If not then only the size will be checked by default, though
-the MD5SUM can be checked with the --checksum flag.
+Almost all cloud storage systems store some sort of timestamp on
+objects, but for several of them it is not something that is
+appropriate to use for syncing. E.g. some backends will only write a
+timestamp that represents the time of the upload. To be relevant for
+syncing it should be able to store the modification time of the source
+object. If this is not the case, rclone will only check the file size
+by default, though it can be configured to check the file hash (with
+the --checksum flag). Ideally it should also be possible to change the
+timestamp of an existing file without having to re-upload it.

-All cloud storage systems support some kind of date on the object and
-these will be set when transferring from the cloud storage system.
+A - in the ModTime column means the modification time read on objects
+is not the modification time of the file when uploaded. It is most
+likely the time the file was uploaded, or possibly something else (like
+the time the picture was taken in Google Photos).
+
+An R (for read-only) in the ModTime column means the backend keeps
+modification times on objects, and updates them when uploading objects,
+but it does not support changing only the modification time (SetModTime
+operation) without re-uploading, possibly not even without deleting the
+existing object first. Some operations in rclone, such as the copy and
+sync commands, will automatically check for SetModTime support and
+re-upload if necessary to keep the modification times in sync. Other
+commands will not work without SetModTime support, e.g. the touch
+command on an existing file will fail, and modification-time-only
+changes to files in a mount will be silently ignored.
+
+R/W (for read/write) in the ModTime column means the backend also
+supports modtime-only operations.
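The distinction above can be summarised as a small decision table. This is a sketch of the behaviour described in the text, not code from rclone itself:

```python
# Hedged sketch of the ModTime support levels described above:
# "R/W" allows fixing a timestamp in place (SetModTime), "R" only by
# re-uploading, and "-" falls back to size (or hash) comparison.
def modtime_sync_action(support: str) -> str:
    if support == "R/W":
        return "set modtime in place"
    if support == "R":
        return "re-upload to update modtime"
    return "compare by size (or hash with --checksum)"

for level in ("R/W", "R", "-"):
    print(level, "->", modtime_sync_action(level))
```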
Case Insensitive @@ -12665,35 +13443,77 @@ of all possible values by passing an invalid value to this flag, e.g. --local-encoding "help". The command rclone help flags encoding will show you the defaults for the backends. - Encoding Characters - --------------- ------------------------------------------------------------- - Asterisk * - BackQuote ` - BackSlash \ - Colon : - CrLf CR 0x0D, LF 0x0A - Ctl All control characters 0x00-0x1F - Del DEL 0x7F - Dollar $ - Dot . or .. as entire string - DoubleQuote " - Hash # - InvalidUtf8 An invalid UTF-8 character (e.g. latin1) - LeftCrLfHtVt CR 0x0D, LF 0x0A,HT 0x09, VT 0x0B on the left of a string - LeftPeriod . on the left of a string - LeftSpace SPACE on the left of a string - LeftTilde ~ on the left of a string - LtGt <, > - None No characters are encoded - Percent % - Pipe | - Question ? - RightCrLfHtVt CR 0x0D, LF 0x0A, HT 0x09, VT 0x0B on the right of a string - RightPeriod . on the right of a string - RightSpace SPACE on the right of a string - SingleQuote ' - Slash / - SquareBracket [, ] + ---------------------------------------------------------------------------------- + Encoding Characters Encoded as + ---------------------- ------------------------ ---------------------------------- + Asterisk * * + + BackQuote ` ` + + BackSlash \ \ + + Colon : : + + CrLf CR 0x0D, LF 0x0A ␍, ␊ + + Ctl All control characters ␀␁␂␃␄␅␆␇␈␉␊␋␌␍␎␏␐␑␒␓␔␕␖␗␘␙␚␛␜␝␞␟ + 0x00-0x1F + + Del DEL 0x7F ␡ + + Dollar $ $ + + Dot . or .. as entire string ., .. + + DoubleQuote " " + + Hash # # + + InvalidUtf8 An invalid UTF-8 � + character (e.g. latin1) + + LeftCrLfHtVt CR 0x0D, LF 0x0A, HT ␍, ␊, ␉, ␋ + 0x09, VT 0x0B on the + left of a string + + LeftPeriod . on the left of a . + string + + LeftSpace SPACE on the left of a ␠ + string + + LeftTilde ~ on the left of a ~ + string + + LtGt <, > <, > + + None No characters are + encoded + + Percent % % + + Pipe | | + + Question ? ? 
+
+  RightCrLfHtVt          CR 0x0D, LF 0x0A, HT     ␍, ␊, ␉, ␋
+                         0x09, VT 0x0B on the
+                         right of a string
+
+  RightPeriod            . on the right of a      .
+                         string
+
+  RightSpace             SPACE on the right of a  ␠
+                         string
+
+  Semicolon              ;                        ;
+
+  SingleQuote            '                        '
+
+  Slash                  /                        /
+
+  SquareBracket          [, ]                     [, ]
+  ----------------------------------------------------------------------------------

 Encoding example: FTP

@@ -12764,6 +13584,22 @@ which supports writing (W) then rclone will preserve the MIME types.
 Otherwise they will be guessed from the extension, or the remote itself
 may assign the MIME type.

+Metadata
+
+Backends may or may not support reading or writing metadata. They may
+support reading and writing system metadata (metadata intrinsic to that
+backend) and/or user metadata (general purpose metadata).
+
+The levels of metadata support are:
+
+  Key   Explanation
+  ----- -----------------------------------------------------------------
+  R     Read only System Metadata
+  RW    Read and write System Metadata
+  RWU   Read and write System Metadata and read and write User Metadata
+
+See the metadata docs for more info.
+
 Optional Features

 All rclone remotes support a base command set. Other features depend
@@ -12772,8 +13608,9 @@ upon backend-specific capabilities.

   Name                           Purge   Copy   Move   DirMove   CleanUp   ListR   StreamUpload   LinkSharing   About   EmptyDir
  ------------------------------ ------- ------ ------ --------- --------- ------- -------------- ------------- ------- ----------
   1Fichier                       No      Yes    Yes    No        No        No      No             Yes           No      Yes
+  Akamai Netstorage              Yes     No     No     No        No        Yes     Yes            No            No      Yes
   Amazon Drive                   Yes     No     Yes    Yes       No        No      No             No            No      Yes
-  Amazon S3                      No      Yes    No     No        Yes       Yes     Yes            Yes           No      No
+  Amazon S3 (or S3 compatible)   No      Yes    No     No        Yes       Yes     Yes            Yes           No      No
   Backblaze B2                   No      Yes    No     No        Yes       Yes     Yes            Yes           No      No
   Box                            Yes     Yes    Yes    Yes       Yes ‡‡    No      Yes            Yes           Yes     Yes
   Citrix ShareFile               Yes     Yes    Yes    Yes       No        No      Yes            No            No      Yes
@@ -12784,9 +13621,12 @@
Google Drive Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Google Photos No No No No No No No No No No HDFS Yes No Yes Yes No No Yes No Yes Yes + HiDrive Yes Yes Yes Yes No No Yes No No Yes HTTP No No No No No No No No No Yes Hubic Yes † Yes No No No Yes Yes No Yes No + Internet Archive No Yes No No Yes Yes No Yes Yes No Jottacloud Yes Yes Yes Yes Yes Yes No Yes Yes Yes + Koofr Yes Yes Yes Yes No No Yes Yes Yes Yes Mail.ru Cloud Yes Yes Yes Yes Yes No No Yes Yes Yes Mega Yes No Yes Yes Yes No No Yes Yes Yes Memory No Yes No No No Yes Yes No No No @@ -12800,6 +13640,7 @@ upon backend-specific capabilities. QingStor No Yes No No Yes Yes No No No No Seafile Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes SFTP No No Yes Yes No No Yes No Yes Yes + Sia No No No No No No Yes No No Yes SugarSync Yes Yes Yes Yes No No Yes Yes No Yes Storj Yes † No Yes No No Yes Yes No No No Uptobox No Yes Yes Yes No No No No No No @@ -12924,6 +13765,7 @@ These flags are available for every command. --delete-during When synchronizing, delete files during transfer --delete-excluded Delete files on dest excluded from sync --disable string Disable a comma separated list of features (use --disable help to see a list) + --disable-http-keep-alives Disable HTTP keep-alives and use each connection once. --disable-http2 Disable HTTP/2 in the global transport -n, --dry-run Do a trial run with no permanent changes --dscp string Set DSCP value to connections, value or name, e.g. CS1, LE, DF, AF21 @@ -12933,7 +13775,7 @@ These flags are available for every command. 
--error-on-no-transfer Sets exit code 9 if no files are transferred, useful in scripts --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file (use - to read from stdin) - --exclude-if-present string Exclude directories if filename is present + --exclude-if-present stringArray Exclude directories if filename is present --expect-continue-timeout duration Timeout when using expect / 100-continue in HTTP (default 1s) --fast-list Use recursive list if available; uses more memory but fewer transactions --files-from stringArray Read list of source-file names from file (use - to read from stdin) @@ -12972,6 +13814,8 @@ These flags are available for every command. --max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000) --max-transfer SizeSuffix Maximum size of data to transfer (default off) --memprofile string Write memory profile to file + -M, --metadata If set, preserve metadata when copying objects + --metadata-set stringArray Add metadata key=value when uploading --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) --modify-window duration Max time diff to be considered the same (default 1ns) @@ -13043,7 +13887,7 @@ These flags are available for every command. --use-json-log Use json log format --use-mmap Use mmap allocator (see docs) --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string (default "rclone/v1.58.0") + --user-agent string Set the user-agent to a specified string (default "rclone/v1.59.0") -v, --verbose count Print lots more stuff (repeat for more) Backend Flags @@ -13096,6 +13940,7 @@ and may be set in the config file. 
--b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) + --b2-version-at Time Show file versions as they were at the specified time (default off) --b2-versions Include old versions in directory listings --box-access-token string Box App Primary Access Token --box-auth-url string Auth server URL @@ -13135,6 +13980,7 @@ and may be set in the config file. --chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks --chunker-hash-type string Choose how chunker handles hash sums (default "md5") --chunker-remote string Remote to chunk/unchunk + --combine-upstreams SpaceSepList Upstreams for combining --compress-level int GZIP compression level (-2 to 9) (default -1) --compress-mode string Compression mode (default "gzip") --compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi) @@ -13167,6 +14013,7 @@ and may be set in the config file. --drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000) --drive-pacer-burst int Number of API calls to allow without sleeping (default 100) --drive-pacer-min-sleep Duration Minimum time to sleep between API calls (default 100ms) + --drive-resource-key string Resource key for accessing a link-shared file --drive-root-folder-id string ID of the root folder --drive-scope string Scope that rclone should use when requesting access from drive --drive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different drive configs @@ -13222,6 +14069,7 @@ and may be set in the config file. 
--ftp-disable-epsv Disable using EPSV even if server advertises support --ftp-disable-mlsd Disable using MLSD even if server advertises support --ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS) + --ftp-disable-utf8 Disable using UTF-8 even if server advertises support --ftp-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot) --ftp-explicit-tls Use Explicit FTPS (FTP over TLS) --ftp-host string FTP host to connect to @@ -13240,8 +14088,10 @@ and may be set in the config file. --gcs-bucket-policy-only Access checks should use bucket-level IAM policies --gcs-client-id string OAuth Client Id --gcs-client-secret string OAuth Client Secret + --gcs-decompress If set this will decompress gzip encoded objects --gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot) --gcs-location string Location for the newly created buckets + --gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it --gcs-object-acl string Access Control List for new objects --gcs-project-number string Project number --gcs-service-account-file string Service Account Credentials JSON file path @@ -13267,10 +14117,24 @@ and may be set in the config file. 
--hdfs-namenode string Hadoop name node and port --hdfs-service-principal-name string Kerberos service principal name for the namenode --hdfs-username string Hadoop user name + --hidrive-auth-url string Auth server URL + --hidrive-chunk-size SizeSuffix Chunksize for chunked uploads (default 48Mi) + --hidrive-client-id string OAuth Client Id + --hidrive-client-secret string OAuth Client Secret + --hidrive-disable-fetching-member-count Do not fetch number of objects in directories unless it is absolutely necessary + --hidrive-encoding MultiEncoder The encoding for the backend (default Slash,Dot) + --hidrive-endpoint string Endpoint for the service (default "https://api.hidrive.strato.com/2.1") + --hidrive-root-prefix string The root/parent folder for all paths (default "/") + --hidrive-scope-access string Access permissions that rclone should use when requesting access from HiDrive (default "rw") + --hidrive-scope-role string User-level that rclone should use when requesting access from HiDrive (default "user") + --hidrive-token string OAuth Access Token as a JSON blob + --hidrive-token-url string Token server url + --hidrive-upload-concurrency int Concurrency for chunked uploads (default 4) + --hidrive-upload-cutoff SizeSuffix Cutoff/Threshold for chunked uploads (default 96Mi) --http-headers CommaSepList Set HTTP headers for all transactions --http-no-head Don't use HEAD requests --http-no-slash Set this if the site doesn't end directories with / - --http-url string URL of http host to connect to + --http-url string URL of HTTP host to connect to --hubic-auth-url string Auth server URL --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi) --hubic-client-id string OAuth Client Id @@ -13279,6 +14143,13 @@ and may be set in the config file. 
--hubic-no-chunk Don't chunk files during streaming upload --hubic-token string OAuth Access Token as a JSON blob --hubic-token-url string Token server url + --internetarchive-access-key-id string IAS3 Access Key + --internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true) + --internetarchive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot) + --internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org") + --internetarchive-front-endpoint string Host of InternetArchive Frontend (default "https://archive.org") + --internetarchive-secret-access-key string IAS3 Secret Key (password) + --internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s) --jottacloud-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi) @@ -13300,7 +14171,7 @@ and may be set in the config file. --local-no-preallocate Disable preallocation of disk space for transferred files --local-no-set-modtime Disable setting modtime --local-no-sparse Disable sparse files for multi-thread downloads - --local-nounc string Disable UNC (long path names) conversion on Windows + --local-nounc Disable UNC (long path names) conversion on Windows --local-unicode-normalization Apply unicode NFC normalization to paths and filenames --local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated) --mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true) @@ -13321,11 +14192,11 @@ and may be set in the config file. 
--netstorage-protocol string Select between HTTP or HTTPS protocol (default "https") --netstorage-secret string Set the NetStorage account secret/G2O key for authentication (obscured) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only) + --onedrive-access-scopes SpaceSepList Set scopes to be requested by rclone (default Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access) --onedrive-auth-url string Auth server URL --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi) --onedrive-client-id string OAuth Client Id --onedrive-client-secret string OAuth Client Secret - --onedrive-disable-site-permission Disable the request for Sites.Read.All permission --onedrive-drive-id string The ID of the drive to use --onedrive-drive-type string The type of the drive (personal | business | documentLibrary) --onedrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot) @@ -13349,9 +14220,11 @@ and may be set in the config file. 
--pcloud-client-secret string OAuth Client Secret --pcloud-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --pcloud-hostname string Hostname to connect to (default "api.pcloud.com") + --pcloud-password string Your pcloud password (obscured) --pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point (default "d0") --pcloud-token string OAuth Access Token as a JSON blob --pcloud-token-url string Token server url + --pcloud-username string Your pcloud username --premiumizeme-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot) --putio-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --qingstor-access-key-id string QingStor Access Key ID @@ -13404,6 +14277,7 @@ and may be set in the config file. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint --s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset) + --s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads --s3-v2-auth If true use v2 authentication --seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled) --seafile-create-library Should rclone create a library if it doesn't exist @@ -13414,6 +14288,8 @@ and may be set in the config file. 
--seafile-url string URL of seafile host to connect to --seafile-user string User name (usually email address) --sftp-ask-password Allow asking for SFTP password when needed + --sftp-chunk-size SizeSuffix Upload and download chunk size (default 32Ki) + --sftp-concurrency int The maximum number of outstanding requests for one file (default 64) --sftp-disable-concurrent-reads If set don't use concurrent reads --sftp-disable-concurrent-writes If set don't use concurrent writes --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available @@ -13426,12 +14302,14 @@ and may be set in the config file. --sftp-known-hosts-file string Optional path to known_hosts file --sftp-md5sum-command string The command used to read md5 hashes --sftp-pass string SSH password, leave blank to use ssh-agent (obscured) - --sftp-path-override string Override path used by SSH connection + --sftp-path-override string Override path used by SSH shell commands --sftp-port int SSH port number (default 22) --sftp-pubkey-file string Optional path to public key file --sftp-server-command string Specifies the path or command to run a sftp server on the remote host + --sftp-set-env SpaceSepList Environment variables to pass to sftp and commands --sftp-set-modtime Set the modified time on the remote if set (default true) --sftp-sha1sum-command string The command used to read sha1 hashes + --sftp-shell-type string The type of SSH shell on remote server, if any --sftp-skip-links Set to skip any symlinks and any other non regular files --sftp-subsystem string Specifies the SSH2 subsystem on the remote host (default "sftp") --sftp-use-fstat If set use fstat instead of stat @@ -13488,6 +14366,7 @@ and may be set in the config file. 
--union-action-policy string Policy to choose upstream on ACTION category (default "epall") --union-cache-time int Cache time of usage and free space (in seconds) (default 120) --union-create-policy string Policy to choose upstream on CREATE category (default "epmfs") + --union-min-free-space SizeSuffix Minimum viable free space for lfs/eplfs policies (default 1Gi) --union-search-policy string Policy to choose upstream on SEARCH category (default "ff") --union-upstreams string List of space separated upstreams --uptobox-access-token string Your access token @@ -13499,7 +14378,7 @@ and may be set in the config file. --webdav-pass string Password (obscured) --webdav-url string URL of http host to connect to --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using + --webdav-vendor string Name of the WebDAV site/service/software you are using --yandex-auth-url string Auth server URL --yandex-client-id string OAuth Client Id --yandex-client-secret string OAuth Client Secret @@ -14116,7 +14995,7 @@ Command line syntax Arbitrary rclone flags may be specified on the bisync command line, for example -rclone bsync ./testdir/path1/ gdrive:testdir/path2/ --drive-skip-gdocs -v -v --timeout 10s +rclone bisync ./testdir/path1/ gdrive:testdir/path2/ --drive-skip-gdocs -v -v --timeout 10s Note that interactions of various rclone flags with bisync process flow has not been fully tested yet. @@ -14406,7 +15285,7 @@ Supported backends Bisync is considered BETA and has been tested with the following backends: - Local filesystem - Google Drive - Dropbox - OneDrive - S3 - -SFTP +SFTP - Yandex Disk It has not been fully tested with other services yet. If it works, or sorta works, please let us know and we'll update the list. Run the test @@ -14741,7 +15620,7 @@ Google Doc files Google docs exist as virtual files on Google Drive and cannot be transferred to other filesystems natively. 
While it is possible to export a Google doc to a normal file (with .xlsx extension, for -example), it's not possible to import a normal file back into a Google +example), it is not possible to import a normal file back into a Google document. Bisync's handling of Google Doc files is to flag them in the run log @@ -15232,7 +16111,7 @@ strings. Standard options -Here are the standard options specific to fichier (1Fichier). +Here are the Standard options specific to fichier (1Fichier). --fichier-api-key @@ -15247,7 +16126,7 @@ Properties: Advanced options -Here are the advanced options specific to fichier (1Fichier). +Here are the Advanced options specific to fichier (1Fichier). --fichier-shared-folder @@ -15308,7 +16187,7 @@ rclone about is not supported by the 1Fichier backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. -See List of backends that do not support rclone about See rclone about +See List of backends that do not support rclone about and rclone about Alias @@ -15395,7 +16274,7 @@ Copy another local directory to the alias directory called source Standard options -Here are the standard options specific to alias (Alias for an existing +Here are the Standard options specific to alias (Alias for an existing remote). --alias-remote @@ -15562,7 +16441,7 @@ amazon.co.uk email and password should work here just fine. Standard options -Here are the standard options specific to amazon cloud drive (Amazon +Here are the Standard options specific to amazon cloud drive (Amazon Drive). --acd-client-id @@ -15593,7 +16472,7 @@ Properties: Advanced options -Here are the advanced options specific to amazon cloud drive (Amazon +Here are the Advanced options specific to amazon cloud drive (Amazon Drive). 
--acd-token @@ -15737,7 +16616,7 @@ without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. -See List of backends that do not support rclone about See rclone about +See List of backends that do not support rclone about and rclone about Amazon S3 Storage Providers @@ -15746,9 +16625,14 @@ The S3 backend can be used with a number of different providers: - AWS S3 - Alibaba Cloud (Aliyun) Object Storage System (OSS) - Ceph +- China Mobile Ecloud Elastic Object Storage (EOS) +- Cloudflare R2 +- Arvan Cloud Object Storage (AOS) - DigitalOcean Spaces - Dreamhost +- Huawei OBS - IBM COS S3 +- IDrive e2 - Minio - RackCorp Object Storage - Scaleway @@ -15803,7 +16687,7 @@ This will guide you through an interactive setup process. Type of storage to configure. Choose a number from below, or type in your own value [snip] - XX / Amazon S3 Compliant Storage Providers including AWS, Ceph, Dreamhost, IBM COS, Minio, and Tencent COS + XX / Amazon S3 Compliant Storage Providers including AWS, Ceph, ChinaMobile, ArvanCloud, Dreamhost, IBM COS, Minio, and Tencent COS \ "s3" [snip] Storage> s3 @@ -16303,10 +17187,11 @@ be uploaded as multipart. Standard options -Here are the standard options specific to s3 (Amazon S3 Compliant -Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, -Dreamhost, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent -COS). +Here are the Standard options specific to s3 (Amazon S3 Compliant +Storage Providers including AWS, Alibaba, Ceph, China Mobile, +Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, +IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, +StackPath, Storj, Tencent COS and Wasabi). 
--s3-provider

@@ -16325,12 +17210,22 @@ Properties:

- Alibaba Cloud Object Storage System (OSS) formerly Aliyun
- "Ceph"
- Ceph Object Storage
+ - "ChinaMobile"
+ - China Mobile Ecloud Elastic Object Storage (EOS)
+ - "Cloudflare"
+ - Cloudflare R2 Storage
+ - "ArvanCloud"
+ - Arvan Cloud Object Storage (AOS)
- "DigitalOcean"
- Digital Ocean Spaces
- "Dreamhost"
- Dreamhost DreamObjects
+ - "HuaweiOBS"
+ - Huawei Object Storage Service
- "IBMCOS"
- IBM COS S3
+ - "IDrive"
+ - IDrive e2
- "LyveCloud"
- Seagate Lyve Cloud
- "Minio"
@@ -16556,6 +17451,68 @@ Properties:

- Amsterdam, The Netherlands
- "fr-par"
- Paris, France
+ - "pl-waw"
+ - Warsaw, Poland
+
+--s3-region
+
+Region to connect to - the location where your bucket will be created
+and your data stored. Needs to be the same as your endpoint.
+
+Properties:
+
+- Config: region
+- Env Var: RCLONE_S3_REGION
+- Provider: HuaweiOBS
+- Type: string
+- Required: false
+- Examples:
+ - "af-south-1"
+ - AF-Johannesburg
+ - "ap-southeast-2"
+ - AP-Bangkok
+ - "ap-southeast-3"
+ - AP-Singapore
+ - "cn-east-3"
+ - CN East-Shanghai1
+ - "cn-east-2"
+ - CN East-Shanghai2
+ - "cn-north-1"
+ - CN North-Beijing1
+ - "cn-north-4"
+ - CN North-Beijing4
+ - "cn-south-1"
+ - CN South-Guangzhou
+ - "ap-southeast-1"
+ - CN-Hong Kong
+ - "sa-argentina-1"
+ - LA-Buenos Aires1
+ - "sa-peru-1"
+ - LA-Lima1
+ - "na-mexico-1"
+ - LA-Mexico City1
+ - "sa-chile-1"
+ - LA-Santiago2
+ - "sa-brazil-1"
+ - LA-Sao Paulo1
+ - "ru-northwest-2"
+ - RU-Moscow2
+
+--s3-region
+
+Region to connect to.
+
+Properties:
+
+- Config: region
+- Env Var: RCLONE_S3_REGION
+- Provider: Cloudflare
+- Type: string
+- Required: false
+- Examples:
+ - "auto"
+ - R2 buckets are automatically distributed across Cloudflare's
+ data centers for low latency.
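For Huawei OBS the region and endpoint options have to agree, and every endpoint in the --s3-endpoint examples later in this section follows the single pattern obs.<region>.myhuaweicloud.com. That makes a quick consistency check on a config easy to sketch (the helper names below are illustrative only, not part of rclone):

```python
def huawei_obs_endpoint(region: str) -> str:
    # Pattern taken from the Huawei OBS endpoint examples, e.g. region
    # "af-south-1" maps to endpoint "obs.af-south-1.myhuaweicloud.com".
    return f"obs.{region}.myhuaweicloud.com"

def region_matches_endpoint(region: str, endpoint: str) -> bool:
    # The HuaweiOBS region option needs to match the configured endpoint;
    # this checks the two values in an rclone config agree.
    return endpoint == huawei_obs_endpoint(region)

print(region_matches_endpoint("cn-north-1", "obs.cn-north-1.myhuaweicloud.com"))  # True
```

If the check fails, the region or endpoint value in the remote's config section was most likely edited independently of the other.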
--s3-region

@@ -16567,7 +17524,8 @@ Properties:

- Config: region
- Env Var: RCLONE_S3_REGION
-- Provider: !AWS,Alibaba,RackCorp,Scaleway,Storj,TencentCOS
+- Provider:
+ !AWS,Alibaba,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive
- Type: string
- Required: false
- Examples:
@@ -16594,6 +17552,98 @@ Properties:

--s3-endpoint

+Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.
+
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_S3_ENDPOINT
+- Provider: ChinaMobile
+- Type: string
+- Required: false
+- Examples:
+ - "eos-wuxi-1.cmecloud.cn"
+ - The default endpoint - a good choice if you are unsure.
+ - East China (Suzhou)
+ - "eos-jinan-1.cmecloud.cn"
+ - East China (Jinan)
+ - "eos-ningbo-1.cmecloud.cn"
+ - East China (Hangzhou)
+ - "eos-shanghai-1.cmecloud.cn"
+ - East China (Shanghai-1)
+ - "eos-zhengzhou-1.cmecloud.cn"
+ - Central China (Zhengzhou)
+ - "eos-hunan-1.cmecloud.cn"
+ - Central China (Changsha-1)
+ - "eos-zhuzhou-1.cmecloud.cn"
+ - Central China (Changsha-2)
+ - "eos-guangzhou-1.cmecloud.cn"
+ - South China (Guangzhou-2)
+ - "eos-dongguan-1.cmecloud.cn"
+ - South China (Guangzhou-3)
+ - "eos-beijing-1.cmecloud.cn"
+ - North China (Beijing-1)
+ - "eos-beijing-2.cmecloud.cn"
+ - North China (Beijing-2)
+ - "eos-beijing-4.cmecloud.cn"
+ - North China (Beijing-3)
+ - "eos-huhehaote-1.cmecloud.cn"
+ - North China (Huhehaote)
+ - "eos-chengdu-1.cmecloud.cn"
+ - Southwest China (Chengdu)
+ - "eos-chongqing-1.cmecloud.cn"
+ - Southwest China (Chongqing)
+ - "eos-guiyang-1.cmecloud.cn"
+ - Southwest China (Guiyang)
+ - "eos-xian-1.cmecloud.cn"
+ - Northwest China (Xian)
+ - "eos-yunnan.cmecloud.cn"
+ - Yunnan China (Kunming)
+ - "eos-yunnan-2.cmecloud.cn"
+ - Yunnan China (Kunming-2)
+ - "eos-tianjin-1.cmecloud.cn"
+ - Tianjin China (Tianjin)
+ - "eos-jilin-1.cmecloud.cn"
+ - Jilin China (Changchun)
+ - "eos-hubei-1.cmecloud.cn"
+ - Hubei China (Xiangyan)
+ - "eos-jiangxi-1.cmecloud.cn"
+ - Jiangxi
China (Nanchang) + - "eos-gansu-1.cmecloud.cn" + - Gansu China (Lanzhou) + - "eos-shanxi-1.cmecloud.cn" + - Shanxi China (Taiyuan) + - "eos-liaoning-1.cmecloud.cn" + - Liaoning China (Shenyang) + - "eos-hebei-1.cmecloud.cn" + - Hebei China (Shijiazhuang) + - "eos-fujian-1.cmecloud.cn" + - Fujian China (Xiamen) + - "eos-guangxi-1.cmecloud.cn" + - Guangxi China (Nanning) + - "eos-anhui-1.cmecloud.cn" + - Anhui China (Huainan) + +--s3-endpoint + +Endpoint for Arvan Cloud Object Storage (AOS) API. + +Properties: + +- Config: endpoint +- Env Var: RCLONE_S3_ENDPOINT +- Provider: ArvanCloud +- Type: string +- Required: false +- Examples: + - "s3.ir-thr-at1.arvanstorage.com" + - The default endpoint - a good choice if you are unsure. + - Tehran Iran (Asiatech) + - "s3.ir-tbz-sh1.arvanstorage.com" + - Tabriz Iran (Shahriar) + +--s3-endpoint + Endpoint for IBM COS S3 API. Specify if using an IBM COS On Premise. @@ -16796,6 +17846,49 @@ Properties: --s3-endpoint +Endpoint for OBS API. + +Properties: + +- Config: endpoint +- Env Var: RCLONE_S3_ENDPOINT +- Provider: HuaweiOBS +- Type: string +- Required: false +- Examples: + - "obs.af-south-1.myhuaweicloud.com" + - AF-Johannesburg + - "obs.ap-southeast-2.myhuaweicloud.com" + - AP-Bangkok + - "obs.ap-southeast-3.myhuaweicloud.com" + - AP-Singapore + - "obs.cn-east-3.myhuaweicloud.com" + - CN East-Shanghai1 + - "obs.cn-east-2.myhuaweicloud.com" + - CN East-Shanghai2 + - "obs.cn-north-1.myhuaweicloud.com" + - CN North-Beijing1 + - "obs.cn-north-4.myhuaweicloud.com" + - CN North-Beijing4 + - "obs.cn-south-1.myhuaweicloud.com" + - CN South-Guangzhou + - "obs.ap-southeast-1.myhuaweicloud.com" + - CN-Hong Kong + - "obs.sa-argentina-1.myhuaweicloud.com" + - LA-Buenos Aires1 + - "obs.sa-peru-1.myhuaweicloud.com" + - LA-Lima1 + - "obs.na-mexico-1.myhuaweicloud.com" + - LA-Mexico City1 + - "obs.sa-chile-1.myhuaweicloud.com" + - LA-Santiago2 + - "obs.sa-brazil-1.myhuaweicloud.com" + - LA-Sao Paulo1 + - 
"obs.ru-northwest-2.myhuaweicloud.com" + - RU-Moscow2 + +--s3-endpoint + Endpoint for Scaleway Object Storage. Properties: @@ -16810,6 +17903,8 @@ Properties: - Amsterdam Endpoint - "s3.fr-par.scw.cloud" - Paris Endpoint + - "s3.pl-waw.scw.cloud" + - Warsaw Endpoint --s3-endpoint @@ -16962,7 +18057,7 @@ Properties: - Config: endpoint - Env Var: RCLONE_S3_ENDPOINT - Provider: - !AWS,IBMCOS,TencentCOS,Alibaba,Scaleway,StackPath,Storj,RackCorp + !AWS,IBMCOS,IDrive,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,ArvanCloud,Scaleway,StackPath,Storj,RackCorp - Type: string - Required: false - Examples: @@ -16992,6 +18087,8 @@ Properties: - Wasabi AP Northeast 1 (Tokyo) endpoint - "s3.ap-northeast-2.wasabisys.com" - Wasabi AP Northeast 2 (Osaka) endpoint + - "s3.ir-thr-at1.arvanstorage.com" + - ArvanCloud Tehran Iran (Asiatech) endpoint --s3-location-constraint @@ -17060,6 +18157,100 @@ Properties: --s3-location-constraint +Location constraint - must match endpoint. + +Used when creating buckets only. 
+
+Properties:
+
+- Config: location_constraint
+- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+- Provider: ChinaMobile
+- Type: string
+- Required: false
+- Examples:
+ - "wuxi1"
+ - East China (Suzhou)
+ - "jinan1"
+ - East China (Jinan)
+ - "ningbo1"
+ - East China (Hangzhou)
+ - "shanghai1"
+ - East China (Shanghai-1)
+ - "zhengzhou1"
+ - Central China (Zhengzhou)
+ - "hunan1"
+ - Central China (Changsha-1)
+ - "zhuzhou1"
+ - Central China (Changsha-2)
+ - "guangzhou1"
+ - South China (Guangzhou-2)
+ - "dongguan1"
+ - South China (Guangzhou-3)
+ - "beijing1"
+ - North China (Beijing-1)
+ - "beijing2"
+ - North China (Beijing-2)
+ - "beijing4"
+ - North China (Beijing-3)
+ - "huhehaote1"
+ - North China (Huhehaote)
+ - "chengdu1"
+ - Southwest China (Chengdu)
+ - "chongqing1"
+ - Southwest China (Chongqing)
+ - "guiyang1"
+ - Southwest China (Guiyang)
+ - "xian1"
+ - Northwest China (Xian)
+ - "yunnan"
+ - Yunnan China (Kunming)
+ - "yunnan2"
+ - Yunnan China (Kunming-2)
+ - "tianjin1"
+ - Tianjin China (Tianjin)
+ - "jilin1"
+ - Jilin China (Changchun)
+ - "hubei1"
+ - Hubei China (Xiangyan)
+ - "jiangxi1"
+ - Jiangxi China (Nanchang)
+ - "gansu1"
+ - Gansu China (Lanzhou)
+ - "shanxi1"
+ - Shanxi China (Taiyuan)
+ - "liaoning1"
+ - Liaoning China (Shenyang)
+ - "hebei1"
+ - Hebei China (Shijiazhuang)
+ - "fujian1"
+ - Fujian China (Xiamen)
+ - "guangxi1"
+ - Guangxi China (Nanning)
+ - "anhui1"
+ - Anhui China (Huainan)
+
+--s3-location-constraint
+
+Location constraint - must match endpoint.
+
+Used when creating buckets only.
+
+Properties:
+
+- Config: location_constraint
+- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+- Provider: ArvanCloud
+- Type: string
+- Required: false
+- Examples:
+ - "ir-thr-at1"
+ - Tehran Iran (Asiatech)
+ - "ir-tbz-sh1"
+ - Tabriz Iran (Shahriar)
+
+--s3-location-constraint
+
Location constraint - must match endpoint when using IBM Cloud Public.

For on-prem COS, do not make a selection from this list, hit enter.
@@ -17200,7 +18391,7 @@ Properties: - Config: location_constraint - Env Var: RCLONE_S3_LOCATION_CONSTRAINT - Provider: - !AWS,IBMCOS,Alibaba,RackCorp,Scaleway,StackPath,Storj,TencentCOS + !AWS,IBMCOS,IDrive,Alibaba,HuaweiOBS,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,StackPath,Storj,TencentCOS - Type: string - Required: false @@ -17221,7 +18412,7 @@ Properties: - Config: acl - Env Var: RCLONE_S3_ACL -- Provider: !Storj +- Provider: !Storj,Cloudflare - Type: string - Required: false - Examples: @@ -17282,7 +18473,7 @@ Properties: - Config: server_side_encryption - Env Var: RCLONE_S3_SERVER_SIDE_ENCRYPTION -- Provider: AWS,Ceph,Minio +- Provider: AWS,Ceph,ChinaMobile,Minio - Type: string - Required: false - Examples: @@ -17364,6 +18555,42 @@ Properties: --s3-storage-class +The storage class to use when storing new objects in ChinaMobile. + +Properties: + +- Config: storage_class +- Env Var: RCLONE_S3_STORAGE_CLASS +- Provider: ChinaMobile +- Type: string +- Required: false +- Examples: + - "" + - Default + - "STANDARD" + - Standard storage class + - "GLACIER" + - Archive storage mode + - "STANDARD_IA" + - Infrequent access storage mode + +--s3-storage-class + +The storage class to use when storing new objects in ArvanCloud. + +Properties: + +- Config: storage_class +- Env Var: RCLONE_S3_STORAGE_CLASS +- Provider: ArvanCloud +- Type: string +- Required: false +- Examples: + - "STANDARD" + - Standard storage class + +--s3-storage-class + The storage class to use when storing new objects in Tencent COS. Properties: @@ -17407,10 +18634,11 @@ Properties: Advanced options -Here are the advanced options specific to s3 (Amazon S3 Compliant -Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, -Dreamhost, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent -COS). 
+Here are the Advanced options specific to s3 (Amazon S3 Compliant
+Storage Providers including AWS, Alibaba, Ceph, China Mobile,
+Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS,
+IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS,
+StackPath, Storj, Tencent COS and Wasabi).

--s3-bucket-acl

@@ -17464,7 +18692,7 @@ Properties:

- Config: sse_customer_algorithm
- Env Var: RCLONE_S3_SSE_CUSTOMER_ALGORITHM
-- Provider: AWS,Ceph,Minio
+- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
@@ -17482,7 +18710,7 @@ Properties:

- Config: sse_customer_key
- Env Var: RCLONE_S3_SSE_CUSTOMER_KEY
-- Provider: AWS,Ceph,Minio
+- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
@@ -17501,7 +18729,7 @@ Properties:

- Config: sse_customer_key_md5
- Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_MD5
-- Provider: AWS,Ceph,Minio
+- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
@@ -17546,6 +18774,12 @@ this means that by default the maximum size of a file you can stream
upload is 48 GiB. If you wish to stream upload larger files then you
will need to increase chunk_size.

+Increasing the chunk size decreases the accuracy of the progress
+statistics displayed with the "-P" flag. Rclone treats a chunk as sent
+once it is buffered by the AWS SDK, when in fact it may still be
+uploading. A bigger chunk size means a bigger AWS SDK buffer and
+progress reporting that deviates further from the truth.
+
Properties:

- Config: chunk_size
@@ -17946,6 +19180,61 @@ Properties:

- Type: Tristate
- Default: unset

+--s3-use-presigned-request
+
+Whether to use a presigned request or PutObject for single part uploads
+
+If this is false rclone will use PutObject from the AWS SDK to upload an
+object.
+
+Versions of rclone < 1.59 use presigned requests to upload a single part
+object and setting this flag to true will re-enable that functionality.
+This shouldn't be necessary except in exceptional circumstances or for +testing. + +Properties: + +- Config: use_presigned_request +- Env Var: RCLONE_S3_USE_PRESIGNED_REQUEST +- Type: bool +- Default: false + +Metadata + +User metadata is stored as x-amz-meta- keys. S3 metadata keys are case +insensitive and are always returned in lower case. + +Here are the possible system metadata items for the s3 backend. + + ------------------------------------------------------------------------------------------------------------------ + Name Help Type Example Read Only + --------------------- --------------------- ----------- ------------------------------------- -------------------- + btime Time of file birth RFC 3339 2006-01-02T15:04:05.999999999Z07:00 Y + (creation) read from + Last-Modified header + + cache-control Cache-Control header string no-cache N + + content-disposition Content-Disposition string inline N + header + + content-encoding Content-Encoding string gzip N + header + + content-language Content-Language string en-US N + header + + content-type Content-Type header string text/plain N + + mtime Time of last RFC 3339 2006-01-02T15:04:05.999999999Z07:00 N + modification, read + from rclone metadata + + tier Tier of the object string GLACIER Y + ------------------------------------------------------------------------------------------------------------------ + +See the metadata docs for more info. + Backend commands Here are the commands specific to the s3 backend. @@ -17956,8 +19245,8 @@ Run them with The help below will explain what arguments each command takes. -See the "rclone backend" command for more info on how to pass options -and arguments. +See the backend command for more info on how to pass options and +arguments. These can be run on a running backend using the rc command backend/command. @@ -18102,9 +19391,14 @@ AWS Snowball is a hardware appliance used for transferring bulk data back to AWS. Its main software interface is S3 object storage. 
To use rclone with AWS Snowball Edge devices, configure as standard for -an 'S3 Compatible Service' be sure to set upload_cutoff = 0 otherwise -you will run into authentication header issues as the snowball device -does not support query parameter based authentication. +an 'S3 Compatible Service'. + +If using rclone pre v1.59 be sure to set upload_cutoff = 0 otherwise you +will run into authentication header issues as the snowball device does +not support query parameter based authentication. + +With rclone v1.59 or later setting upload_cutoff should not be +necessary. eg. @@ -18139,10 +19433,10 @@ config: server_side_encryption = storage_class = -If you are using an older version of CEPH, e.g. 10.2.x Jewel, then you -may need to supply the parameter --s3-upload-cutoff 0 or put this in the -config file as upload_cutoff 0 to work around a bug which causes -uploading of small files to fail. +If you are using an older version of CEPH (e.g. 10.2.x Jewel) and a +version of rclone before v1.59 then you may need to supply the parameter +--s3-upload-cutoff 0 or put this in the config file as upload_cutoff 0 +to work around a bug which causes uploading of small files to fail. Note also that Ceph sometimes puts / in the passwords it gives users. If you read the secret access key using the command line tools you will get @@ -18167,6 +19461,101 @@ removed). Because this is a json dump, it is encoding the / as \/, so if you use the secret key as xxxxxx/xxxx it will work fine. +Cloudflare R2 + +Cloudflare R2 Storage allows developers to store large amounts of +unstructured data without the costly egress bandwidth fees associated +with typical cloud storage services. + +Here is an example of making a Cloudflare R2 configuration. First run: + + rclone config + +This will guide you through an interactive setup process. + +Note that all buckets are private, and all are stored in the same "auto" +region. 
It is necessary to use Cloudflare workers to share the content +of a bucket publicly. + + No remotes found, make a new one? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + name> r2 + Option Storage. + Type of storage to configure. + Choose a number from below, or type in your own value. + ... + XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi + \ (s3) + ... + Storage> s3 + Option provider. + Choose your S3 provider. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + ... + XX / Cloudflare R2 Storage + \ (Cloudflare) + ... + provider> Cloudflare + Option env_auth. + Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + Only applies if access_key_id and secret_access_key is blank. + Choose a number from below, or type in your own boolean value (true or false). + Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \ (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \ (true) + env_auth> 1 + Option access_key_id. + AWS Access Key ID. + Leave blank for anonymous access or runtime credentials. + Enter a value. Press Enter to leave empty. + access_key_id> ACCESS_KEY + Option secret_access_key. + AWS Secret Access Key (password). + Leave blank for anonymous access or runtime credentials. + Enter a value. Press Enter to leave empty. + secret_access_key> SECRET_ACCESS_KEY + Option region. + Region to connect to. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + 1 / R2 buckets are automatically distributed across Cloudflare's data centers for low latency. + \ (auto) + region> 1 + Option endpoint. + Endpoint for S3 API. + Required when using an S3 clone. 
+ Enter a value. Press Enter to leave empty.
+ endpoint> https://ACCOUNT_ID.r2.cloudflarestorage.com
+ Edit advanced config?
+ y) Yes
+ n) No (default)
+ y/n> n
+ --------------------
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
+This will leave your config looking something like:
+
+    [r2]
+    type = s3
+    provider = Cloudflare
+    access_key_id = ACCESS_KEY
+    secret_access_key = SECRET_ACCESS_KEY
+    region = auto
+    endpoint = https://ACCOUNT_ID.r2.cloudflarestorage.com
+    acl = private
+
+Now run rclone lsf r2: to see your buckets and rclone lsf r2:bucket to
+look within a bucket.
+
 Dreamhost

 Dreamhost DreamObjects is an object storage system based on CEPH.

@@ -18237,6 +19626,135 @@ example:

     rclone mkdir spaces:my-new-space
     rclone copy /path/to/files spaces:my-new-space

+Huawei OBS
+
+Object Storage Service (OBS) provides stable, secure, efficient, and
+easy-to-use cloud storage that lets you store virtually any volume of
+unstructured data in any format and access it from anywhere.
+
+OBS provides an S3 interface; you can copy and modify the following
+configuration and add it to your rclone configuration file:
+
+    [obs]
+    type = s3
+    provider = HuaweiOBS
+    access_key_id = your-access-key-id
+    secret_access_key = your-secret-access-key
+    region = af-south-1
+    endpoint = obs.af-south-1.myhuaweicloud.com
+    acl = private
+
+Alternatively, you can configure it via the interactive command line:
+
+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+ name> obs
+ Option Storage.
+ Type of storage to configure.
+ Choose a number from below, or type in your own value.
+ [snip]
+ 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi
+ \ (s3)
+ [snip]
+ Storage> 5
+ Option provider.
+ Choose your S3 provider. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + [snip] + 9 / Huawei Object Storage Service + \ (HuaweiOBS) + [snip] + provider> 9 + Option env_auth. + Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + Only applies if access_key_id and secret_access_key is blank. + Choose a number from below, or type in your own boolean value (true or false). + Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \ (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \ (true) + env_auth> 1 + Option access_key_id. + AWS Access Key ID. + Leave blank for anonymous access or runtime credentials. + Enter a value. Press Enter to leave empty. + access_key_id> your-access-key-id + Option secret_access_key. + AWS Secret Access Key (password). + Leave blank for anonymous access or runtime credentials. + Enter a value. Press Enter to leave empty. + secret_access_key> your-secret-access-key + Option region. + Region to connect to. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + 1 / AF-Johannesburg + \ (af-south-1) + 2 / AP-Bangkok + \ (ap-southeast-2) + [snip] + region> 1 + Option endpoint. + Endpoint for OBS API. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + 1 / AF-Johannesburg + \ (obs.af-south-1.myhuaweicloud.com) + 2 / AP-Bangkok + \ (obs.ap-southeast-2.myhuaweicloud.com) + [snip] + endpoint> 1 + Option acl. + Canned ACL used when creating buckets and storing or copying objects. + This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. + For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl + Note that this ACL is applied when server-side copying objects as S3 + doesn't copy the ACL from the source but rather writes a fresh one. 
+ Choose a number from below, or type in your own value. + Press Enter to leave empty. + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \ (private) + [snip] + acl> 1 + Edit advanced config? + y) Yes + n) No (default) + y/n> + -------------------- + [obs] + type = s3 + provider = HuaweiOBS + access_key_id = your-access-key-id + secret_access_key = your-secret-access-key + region = af-south-1 + endpoint = obs.af-south-1.myhuaweicloud.com + acl = private + -------------------- + y) Yes this is OK (default) + e) Edit this remote + d) Delete this remote + y/e/d> y + Current remotes: + + Name Type + ==== ==== + obs s3 + + e) Edit existing remote + n) New remote + d) Delete remote + r) Rename remote + c) Copy remote + s) Set configuration password + q) Quit config + e/n/d/r/c/s/q> q + IBM COS (S3) Information stored with IBM Cloud Object Storage is encrypted and @@ -18268,12 +19786,12 @@ To configure access to IBM COS S3, follow the steps below: \ "alias" 2 / Amazon Drive \ "amazon cloud drive" - 3 / Amazon S3 Complaint Storage Providers (Dreamhost, Ceph, Minio, IBM COS) + 3 / Amazon S3 Complaint Storage Providers (Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio, IBM COS) \ "s3" 4 / Backblaze B2 \ "b2" [snip] - 23 / http Connection + 23 / HTTP \ "http" Storage> 3 @@ -18408,6 +19926,113 @@ To configure access to IBM COS S3, follow the steps below: 6) Delete a file on remote. rclone delete IBM-COS-XREGION:newbucket/file.txt +IDrive e2 + +Here is an example of making an IDrive e2 configuration. First run: + + rclone config + +This will guide you through an interactive setup process. + + No remotes found, make a new one? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + + Enter name for new remote. + name> e2 + + Option Storage. + Type of storage to configure. + Choose a number from below, or type in your own value. 
+ [snip] + XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi + \ (s3) + [snip] + Storage> s3 + + Option provider. + Choose your S3 provider. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + [snip] + XX / IDrive e2 + \ (IDrive) + [snip] + provider> IDrive + + Option env_auth. + Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + Only applies if access_key_id and secret_access_key is blank. + Choose a number from below, or type in your own boolean value (true or false). + Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \ (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \ (true) + env_auth> + + Option access_key_id. + AWS Access Key ID. + Leave blank for anonymous access or runtime credentials. + Enter a value. Press Enter to leave empty. + access_key_id> YOUR_ACCESS_KEY + + Option secret_access_key. + AWS Secret Access Key (password). + Leave blank for anonymous access or runtime credentials. + Enter a value. Press Enter to leave empty. + secret_access_key> YOUR_SECRET_KEY + + Option acl. + Canned ACL used when creating buckets and storing or copying objects. + This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. + For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl + Note that this ACL is applied when server-side copying objects as S3 + doesn't copy the ACL from the source but rather writes a fresh one. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \ (private) + / Owner gets FULL_CONTROL. 
+ 2 | The AllUsers group gets READ access.
+ \ (public-read)
+ / Owner gets FULL_CONTROL.
+ 3 | The AllUsers group gets READ and WRITE access.
+ | Granting this on a bucket is generally not recommended.
+ \ (public-read-write)
+ / Owner gets FULL_CONTROL.
+ 4 | The AuthenticatedUsers group gets READ access.
+ \ (authenticated-read)
+ / Object owner gets FULL_CONTROL.
+ 5 | Bucket owner gets READ access.
+ | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
+ \ (bucket-owner-read)
+ / Both the object owner and the bucket owner get FULL_CONTROL over the object.
+ 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
+ \ (bucket-owner-full-control)
+ acl>
+
+ Edit advanced config?
+ y) Yes
+ n) No (default)
+ y/n>
+
+ Configuration complete.
+ Options:
+ - type: s3
+ - provider: IDrive
+ - access_key_id: YOUR_ACCESS_KEY
+ - secret_access_key: YOUR_SECRET_KEY
+ - endpoint: q9d9.la12.idrivee2-5.com
+ Keep this "e2" remote?
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
 Minio

 Minio is an object storage server built for cloud application developers

@@ -18517,6 +20142,14 @@ rclone like this:

     server_side_encryption =
     storage_class =

+C14 Cold Storage is the low-cost S3 Glacier alternative from Scaleway,
+and it works the same way as on S3 by accepting the "GLACIER"
+storage_class. So you can configure your remote with the
+storage_class = GLACIER option to upload directly to C14. Don't forget
+that in this state you can't read files back; you will need to restore
+them to the "STANDARD" storage_class first before being able to read
+them (see the "restore" section above).
+
 Seagate Lyve Cloud

 Seagate Lyve Cloud is an S3 compatible object storage platform from

@@ -18539,7 +20172,7 @@ Choose s3 backend

 Type of storage to configure.
 Choose a number from below, or type in your own value.
[snip]
- XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS
+ XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS
 \ (s3)
[snip]
Storage> s3

@@ -18703,7 +20336,7 @@ rclone like this.

 Type of storage to configure.
 Choose a number from below, or type in your own value
[snip]
- XX / Amazon S3 (also Dreamhost, Ceph, Minio)
+ XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio)
 \ "s3"
[snip]
Storage> s3

@@ -18813,7 +20446,7 @@ This will guide you through an interactive setup process.

 Enter a string value. Press Enter for the default ("").
 Choose a number from below, or type in your own value
[snip]
- 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, and Tencent COS
+ 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Minio, and Tencent COS
 \ "s3"
[snip]
Storage> s3

@@ -18902,6 +20535,360 @@ This will guide you through an interactive setup process.

 d) Delete this remote
 y/e/d> y

+China Mobile Ecloud Elastic Object Storage (EOS)
+
+Here is an example of making a China Mobile Ecloud Elastic Object
+Storage (EOS) configuration. First run:
+
+    rclone config
+
+This will guide you through an interactive setup process.
+
+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+ name> ChinaMobile
+ Option Storage.
+ Type of storage to configure.
+ Choose a number from below, or type in your own value.
+ ...
+ 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS + \ (s3) + ... + Storage> s3 + Option provider. + Choose your S3 provider. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + ... + 4 / China Mobile Ecloud Elastic Object Storage (EOS) + \ (ChinaMobile) + ... + provider> ChinaMobile + Option env_auth. + Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + Only applies if access_key_id and secret_access_key is blank. + Choose a number from below, or type in your own boolean value (true or false). + Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \ (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \ (true) + env_auth> + Option access_key_id. + AWS Access Key ID. + Leave blank for anonymous access or runtime credentials. + Enter a value. Press Enter to leave empty. + access_key_id> accesskeyid + Option secret_access_key. + AWS Secret Access Key (password). + Leave blank for anonymous access or runtime credentials. + Enter a value. Press Enter to leave empty. + secret_access_key> secretaccesskey + Option endpoint. + Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + / The default endpoint - a good choice if you are unsure. 
+ 1 | East China (Suzhou) + \ (eos-wuxi-1.cmecloud.cn) + 2 / East China (Jinan) + \ (eos-jinan-1.cmecloud.cn) + 3 / East China (Hangzhou) + \ (eos-ningbo-1.cmecloud.cn) + 4 / East China (Shanghai-1) + \ (eos-shanghai-1.cmecloud.cn) + 5 / Central China (Zhengzhou) + \ (eos-zhengzhou-1.cmecloud.cn) + 6 / Central China (Changsha-1) + \ (eos-hunan-1.cmecloud.cn) + 7 / Central China (Changsha-2) + \ (eos-zhuzhou-1.cmecloud.cn) + 8 / South China (Guangzhou-2) + \ (eos-guangzhou-1.cmecloud.cn) + 9 / South China (Guangzhou-3) + \ (eos-dongguan-1.cmecloud.cn) + 10 / North China (Beijing-1) + \ (eos-beijing-1.cmecloud.cn) + 11 / North China (Beijing-2) + \ (eos-beijing-2.cmecloud.cn) + 12 / North China (Beijing-3) + \ (eos-beijing-4.cmecloud.cn) + 13 / North China (Huhehaote) + \ (eos-huhehaote-1.cmecloud.cn) + 14 / Southwest China (Chengdu) + \ (eos-chengdu-1.cmecloud.cn) + 15 / Southwest China (Chongqing) + \ (eos-chongqing-1.cmecloud.cn) + 16 / Southwest China (Guiyang) + \ (eos-guiyang-1.cmecloud.cn) + 17 / Nouthwest China (Xian) + \ (eos-xian-1.cmecloud.cn) + 18 / Yunnan China (Kunming) + \ (eos-yunnan.cmecloud.cn) + 19 / Yunnan China (Kunming-2) + \ (eos-yunnan-2.cmecloud.cn) + 20 / Tianjin China (Tianjin) + \ (eos-tianjin-1.cmecloud.cn) + 21 / Jilin China (Changchun) + \ (eos-jilin-1.cmecloud.cn) + 22 / Hubei China (Xiangyan) + \ (eos-hubei-1.cmecloud.cn) + 23 / Jiangxi China (Nanchang) + \ (eos-jiangxi-1.cmecloud.cn) + 24 / Gansu China (Lanzhou) + \ (eos-gansu-1.cmecloud.cn) + 25 / Shanxi China (Taiyuan) + \ (eos-shanxi-1.cmecloud.cn) + 26 / Liaoning China (Shenyang) + \ (eos-liaoning-1.cmecloud.cn) + 27 / Hebei China (Shijiazhuang) + \ (eos-hebei-1.cmecloud.cn) + 28 / Fujian China (Xiamen) + \ (eos-fujian-1.cmecloud.cn) + 29 / Guangxi China (Nanning) + \ (eos-guangxi-1.cmecloud.cn) + 30 / Anhui China (Huainan) + \ (eos-anhui-1.cmecloud.cn) + endpoint> 1 + Option location_constraint. + Location constraint - must match endpoint. + Used when creating buckets only. 
+ Choose a number from below, or type in your own value. + Press Enter to leave empty. + 1 / East China (Suzhou) + \ (wuxi1) + 2 / East China (Jinan) + \ (jinan1) + 3 / East China (Hangzhou) + \ (ningbo1) + 4 / East China (Shanghai-1) + \ (shanghai1) + 5 / Central China (Zhengzhou) + \ (zhengzhou1) + 6 / Central China (Changsha-1) + \ (hunan1) + 7 / Central China (Changsha-2) + \ (zhuzhou1) + 8 / South China (Guangzhou-2) + \ (guangzhou1) + 9 / South China (Guangzhou-3) + \ (dongguan1) + 10 / North China (Beijing-1) + \ (beijing1) + 11 / North China (Beijing-2) + \ (beijing2) + 12 / North China (Beijing-3) + \ (beijing4) + 13 / North China (Huhehaote) + \ (huhehaote1) + 14 / Southwest China (Chengdu) + \ (chengdu1) + 15 / Southwest China (Chongqing) + \ (chongqing1) + 16 / Southwest China (Guiyang) + \ (guiyang1) + 17 / Nouthwest China (Xian) + \ (xian1) + 18 / Yunnan China (Kunming) + \ (yunnan) + 19 / Yunnan China (Kunming-2) + \ (yunnan2) + 20 / Tianjin China (Tianjin) + \ (tianjin1) + 21 / Jilin China (Changchun) + \ (jilin1) + 22 / Hubei China (Xiangyan) + \ (hubei1) + 23 / Jiangxi China (Nanchang) + \ (jiangxi1) + 24 / Gansu China (Lanzhou) + \ (gansu1) + 25 / Shanxi China (Taiyuan) + \ (shanxi1) + 26 / Liaoning China (Shenyang) + \ (liaoning1) + 27 / Hebei China (Shijiazhuang) + \ (hebei1) + 28 / Fujian China (Xiamen) + \ (fujian1) + 29 / Guangxi China (Nanning) + \ (guangxi1) + 30 / Anhui China (Huainan) + \ (anhui1) + location_constraint> 1 + Option acl. + Canned ACL used when creating buckets and storing or copying objects. + This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. + For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl + Note that this ACL is applied when server-side copying objects as S3 + doesn't copy the ACL from the source but rather writes a fresh one. + Choose a number from below, or type in your own value. + Press Enter to leave empty. 
+ / Owner gets FULL_CONTROL.
+ 1 | No one else has access rights (default).
+ \ (private)
+ / Owner gets FULL_CONTROL.
+ 2 | The AllUsers group gets READ access.
+ \ (public-read)
+ / Owner gets FULL_CONTROL.
+ 3 | The AllUsers group gets READ and WRITE access.
+ | Granting this on a bucket is generally not recommended.
+ \ (public-read-write)
+ / Owner gets FULL_CONTROL.
+ 4 | The AuthenticatedUsers group gets READ access.
+ \ (authenticated-read)
+ / Object owner gets FULL_CONTROL.
+ acl> private
+ Option server_side_encryption.
+ The server-side encryption algorithm used when storing this object in S3.
+ Choose a number from below, or type in your own value.
+ Press Enter to leave empty.
+ 1 / None
+ \ ()
+ 2 / AES256
+ \ (AES256)
+ server_side_encryption>
+ Option storage_class.
+ The storage class to use when storing new objects in ChinaMobile.
+ Choose a number from below, or type in your own value.
+ Press Enter to leave empty.
+ 1 / Default
+ \ ()
+ 2 / Standard storage class
+ \ (STANDARD)
+ 3 / Archive storage mode
+ \ (GLACIER)
+ 4 / Infrequent access storage mode
+ \ (STANDARD_IA)
+ storage_class>
+ Edit advanced config?
+ y) Yes
+ n) No (default)
+ y/n> n
+ --------------------
+ [ChinaMobile]
+ type = s3
+ provider = ChinaMobile
+ access_key_id = accesskeyid
+ secret_access_key = secretaccesskey
+ endpoint = eos-wuxi-1.cmecloud.cn
+ location_constraint = wuxi1
+ acl = private
+ --------------------
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
+ArvanCloud
+
+ArvanCloud Object Storage goes beyond limited traditional file storage.
+It gives you access to backup and archived files and allows sharing.
+Files such as profile images, images sent by users, or scanned documents
+can be stored securely and easily in the Object Storage service.
+
+ArvanCloud provides an S3 interface which can be configured for use with
+rclone like this.
+
+ No remotes found, make a new one?
+ n) New remote + s) Set configuration password + n/s> n + name> ArvanCloud + Type of storage to configure. + Choose a number from below, or type in your own value + [snip] + XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio) + \ "s3" + [snip] + Storage> s3 + Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. + Choose a number from below, or type in your own value + 1 / Enter AWS credentials in the next step + \ "false" + 2 / Get AWS credentials from the environment (env vars or IAM) + \ "true" + env_auth> 1 + AWS Access Key ID - leave blank for anonymous access or runtime credentials. + access_key_id> YOURACCESSKEY + AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. + secret_access_key> YOURSECRETACCESSKEY + Region to connect to. + Choose a number from below, or type in your own value + / The default endpoint - a good choice if you are unsure. + 1 | US Region, Northern Virginia, or Pacific Northwest. + | Leave location constraint empty. + \ "us-east-1" + [snip] + region> + Endpoint for S3 API. + Leave blank if using ArvanCloud to use the default endpoint for the region. + Specify if using an S3 clone such as Ceph. + endpoint> s3.arvanstorage.com + Location constraint - must be set to match the Region. Used when creating buckets only. + Choose a number from below, or type in your own value + 1 / Empty for Iran-Tehran Region. + \ "" + [snip] + location_constraint> + Canned ACL used when creating buckets and/or storing objects in S3. + For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl + Choose a number from below, or type in your own value + 1 / Owner gets FULL_CONTROL. No one else has access rights (default). + \ "private" + [snip] + acl> + The server-side encryption algorithm used when storing this object in S3. 
+ Choose a number from below, or type in your own value + 1 / None + \ "" + 2 / AES256 + \ "AES256" + server_side_encryption> + The storage class to use when storing objects in S3. + Choose a number from below, or type in your own value + 1 / Default + \ "" + 2 / Standard storage class + \ "STANDARD" + storage_class> + Remote config + -------------------- + [ArvanCloud] + env_auth = false + access_key_id = YOURACCESSKEY + secret_access_key = YOURSECRETACCESSKEY + region = ir-thr-at1 + endpoint = s3.arvanstorage.com + location_constraint = + acl = + server_side_encryption = + storage_class = + -------------------- + y) Yes this is OK + e) Edit this remote + d) Delete this remote + y/e/d> y + +This will leave the config file looking like this. + + [ArvanCloud] + type = s3 + provider = ArvanCloud + env_auth = false + access_key_id = YOURACCESSKEY + secret_access_key = YOURSECRETACCESSKEY + region = + endpoint = s3.arvanstorage.com + location_constraint = + acl = + server_side_encryption = + storage_class = + Tencent COS Tencent Cloud Object Storage (COS) is a distributed storage service @@ -18932,7 +20919,7 @@ To configure access to Tencent COS, follow the steps below: \ "alias" 3 / Amazon Drive \ "amazon cloud drive" - 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, and Tencent COS + 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Minio, and Tencent COS \ "s3" [snip] Storage> s3 @@ -19136,7 +21123,7 @@ rclone about is not supported by the S3 backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. 
-See List of backends that do not support rclone about See rclone about +See List of backends that do not support rclone about and rclone about Backblaze B2 @@ -19303,6 +21290,11 @@ remove the file instead of hiding it. Old versions of files, where available, are visible using the --b2-versions flag. +It is also possible to view a bucket as it was at a certain point in +time, using the --b2-version-at flag. This will show the file versions +as they were at that time, showing files that have been deleted +afterwards, and hiding files that were created since. + If you wish to remove all the old versions then you can use the rclone cleanup remote:bucket command which will delete all the old versions of files, leaving the current ones intact. You can also supply @@ -19428,7 +21420,7 @@ you can then use the authorization token (the part of the url from the Standard options -Here are the standard options specific to b2 (Backblaze B2). +Here are the Standard options specific to b2 (Backblaze B2). --b2-account @@ -19465,7 +21457,7 @@ Properties: Advanced options -Here are the advanced options specific to b2 (Backblaze B2). +Here are the Advanced options specific to b2 (Backblaze B2). --b2-endpoint @@ -19515,6 +21507,20 @@ Properties: - Type: bool - Default: false +--b2-version-at + +Show file versions as they were at the specified time. + +Note that when using this no file write operations are permitted, so you +can't upload files or delete them. + +Properties: + +- Config: version_at +- Env Var: RCLONE_B2_VERSION_AT +- Type: Time +- Default: off + --b2-upload-cutoff Cutoff for switching to chunked upload. @@ -19664,7 +21670,7 @@ rclone about is not supported by the B2 backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. 
-See List of backends that do not support rclone about See rclone about +See List of backends that do not support rclone about and rclone about Box @@ -19920,7 +21926,7 @@ https://app.box.com/folder/11xxxxxxxxx8 in the browser, then you use Standard options -Here are the standard options specific to box (Box). +Here are the Standard options specific to box (Box). --box-client-id @@ -19993,7 +21999,7 @@ Properties: Advanced options -Here are the advanced options specific to box (Box). +Here are the Advanced options specific to box (Box). --box-token @@ -20115,7 +22121,7 @@ rclone about is not supported by the Box backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. -See List of backends that do not support rclone about See rclone about +See List of backends that do not support rclone about and rclone about Cache (DEPRECATED) @@ -20434,7 +22440,7 @@ delete cached data (chunks) as well (optional, false by default) Standard options -Here are the standard options specific to cache (Cache a remote). +Here are the Standard options specific to cache (Cache a remote). --cache-remote @@ -20551,7 +22557,7 @@ Properties: Advanced options -Here are the advanced options specific to cache (Cache a remote). +Here are the Advanced options specific to cache (Cache a remote). --cache-plex-token @@ -20800,8 +22806,8 @@ Run them with The help below will explain what arguments each command takes. -See the "rclone backend" command for more info on how to pass options -and arguments. +See the backend command for more info on how to pass options and +arguments. These can be run on a running backend using the rc command backend/command. @@ -21115,7 +23121,7 @@ Changing transactions is dangerous and requires explicit migration. 
Standard options -Here are the standard options specific to chunker (Transparently +Here are the Standard options specific to chunker (Transparently chunk/split large files). --chunker-remote @@ -21176,7 +23182,7 @@ Properties: Advanced options -Here are the advanced options specific to chunker (Transparently +Here are the Advanced options specific to chunker (Transparently chunk/split large files). --chunker-name-format @@ -21425,7 +23431,7 @@ strings. Standard options -Here are the standard options specific to sharefile (Citrix Sharefile). +Here are the Standard options specific to sharefile (Citrix Sharefile). --sharefile-root-folder-id @@ -21455,7 +23461,7 @@ Properties: Advanced options -Here are the advanced options specific to sharefile (Citrix Sharefile). +Here are the Advanced options specific to sharefile (Citrix Sharefile). --sharefile-upload-cutoff @@ -21526,7 +23532,7 @@ without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. -See List of backends that do not support rclone about See rclone about +See List of backends that do not support rclone about and rclone about Crypt @@ -21923,7 +23929,7 @@ remote instead of rclone check which can't check the checksums properly. Standard options -Here are the standard options specific to crypt (Encrypt/Decrypt a +Here are the Standard options specific to crypt (Encrypt/Decrypt a remote). --crypt-remote @@ -22008,7 +24014,7 @@ Properties: Advanced options -Here are the advanced options specific to crypt (Encrypt/Decrypt a +Here are the Advanced options specific to crypt (Encrypt/Decrypt a remote). --crypt-server-side-across-configs @@ -22091,6 +24097,12 @@ Properties: - Unicode codepoint instead of UTF-8 byte length. (Eg. Onedrive) +Metadata + +Any metadata supported by the underlying remote is read and written. + +See the metadata docs for more info. + Backend commands Here are the commands specific to the crypt backend. 
@@ -22101,8 +24113,8 @@ Run them with

 The help below will explain what arguments each command takes.

-See the "rclone backend" command for more info on how to pass options
-and arguments.
+See the backend command for more info on how to pass options and
+arguments.

 These can be run on a running backend using the rc command
 backend/command.

@@ -22358,7 +24370,7 @@ compression backend.

 Standard options

-Here are the standard options specific to compress (Compress a remote).
+Here are the Standard options specific to compress (Compress a remote).

 --compress-remote

@@ -22387,7 +24399,7 @@ Properties:

 Advanced options

-Here are the advanced options specific to compress (Compress a remote).
+Here are the Advanced options specific to compress (Compress a remote).

 --compress-level

@@ -22422,6 +24434,157 @@ Properties:

 - Type: SizeSuffix
 - Default: 20Mi

+Metadata
+
+Any metadata supported by the underlying remote is read and written.
+
+See the metadata docs for more info.
+
+Combine
+
+The combine backend joins remotes together into a single directory tree.
+
+For example you might have a remote for images on one provider:
+
+    $ rclone tree s3:imagesbucket
+    /
+    ├── image1.jpg
+    └── image2.jpg
+
+And a remote for files on another:
+
+    $ rclone tree drive:important/files
+    /
+    ├── file1.txt
+    └── file2.txt
+
+The combine backend can join these together into a synthetic directory
+structure like this:
+
+    $ rclone tree combined:
+    /
+    ├── files
+    │   ├── file1.txt
+    │   └── file2.txt
+    └── images
+        ├── image1.jpg
+        └── image2.jpg
+
+You'd do this by specifying an upstreams parameter in the config like
+this:
+
+    upstreams = images=s3:imagesbucket files=drive:important/files
+
+During the initial setup with rclone config you will specify the
+upstreams remotes as a space separated list. The upstream remotes can
+either be local paths or other remotes.
+
+Configuration
+
+Here is an example of how to make a combine called remote for the
+example above.
First run:
+
+    rclone config
+
+This will guide you through an interactive setup process:
+
+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+ name> remote
+ Option Storage.
+ Type of storage to configure.
+ Choose a number from below, or type in your own value.
+ ...
+ XX / Combine several remotes into one
+ \ (combine)
+ ...
+ Storage> combine
+ Option upstreams.
+ Upstreams for combining
+ These should be in the form
+ dir=remote:path dir2=remote2:path
+ Where before the = is specified the root directory and after is the remote to
+ put there.
+ Embedded spaces can be added using quotes
+ "dir=remote:path with space" "dir2=remote2:path with space"
+ Enter a fs.SpaceSepList value.
+ upstreams> images=s3:imagesbucket files=drive:important/files
+ --------------------
+ [remote]
+ type = combine
+ upstreams = images=s3:imagesbucket files=drive:important/files
+ --------------------
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
+Configuring for Google Drive Shared Drives
+
+Rclone has a convenience feature for making a combine backend for all
+the shared drives you have access to.
+
+Assuming your main (non shared drive) Google Drive remote is called
+drive:, you would run
+
+    rclone backend -o config drives drive:
+
+This would produce something like this:
+
+    [My Drive]
+    type = alias
+    remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
+
+    [Test Drive]
+    type = alias
+    remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
+
+    [AllDrives]
+    type = combine
+    upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
+
+If you then add that config to your config file (find it with
+rclone config file) then you can access all the shared drives in one
+place with the AllDrives: remote.
+
+See the Google Drive docs for full info.
+
+Standard options
+
+Here are the Standard options specific to combine (Combine several
+remotes into one).
+ +--combine-upstreams + +Upstreams for combining + +These should be in the form + + dir=remote:path dir2=remote2:path + +Where before the = is specified the root directory and after is the +remote to put there. + +Embedded spaces can be added using quotes + + "dir=remote:path with space" "dir2=remote2:path with space" + +Properties: + +- Config: upstreams +- Env Var: RCLONE_COMBINE_UPSTREAMS +- Type: SpaceSepList +- Default: + +Metadata + +Any metadata supported by the underlying remote is read and written. + +See the metadata docs for more info. + Dropbox Paths are specified as remote:path @@ -22592,7 +24755,7 @@ finishes up the last batch using this mode. Standard options -Here are the standard options specific to dropbox (Dropbox). +Here are the Standard options specific to dropbox (Dropbox). --dropbox-client-id @@ -22622,7 +24785,7 @@ Properties: Advanced options -Here are the advanced options specific to dropbox (Dropbox). +Here are the Advanced options specific to dropbox (Dropbox). --dropbox-token @@ -23045,7 +25208,7 @@ The ID for "S3 Storage" would be 120673761. Standard options -Here are the standard options specific to filefabric (Enterprise File +Here are the Standard options specific to filefabric (Enterprise File Fabric). --filefabric-url @@ -23104,7 +25267,7 @@ Properties: Advanced options -Here are the advanced options specific to filefabric (Enterprise File +Here are the Advanced options specific to filefabric (Enterprise File Fabric). --filefabric-token @@ -23196,7 +25359,7 @@ address as password. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] - XX / FTP Connection + XX / FTP \ "ftp" [snip] Storage> ftp @@ -23292,7 +25455,7 @@ VsFTPd. Just hit a selection number when prompted. Standard options -Here are the standard options specific to ftp (FTP Connection). +Here are the Standard options specific to ftp (FTP). 
--ftp-host @@ -23375,7 +25538,7 @@ Properties: Advanced options -Here are the advanced options specific to ftp (FTP Connection). +Here are the Advanced options specific to ftp (FTP). --ftp-concurrency @@ -23421,6 +25584,17 @@ Properties: - Type: bool - Default: false +--ftp-disable-utf8 + +Disable using UTF-8 even if server advertises support. + +Properties: + +- Config: disable_utf8 +- Env Var: RCLONE_FTP_DISABLE_UTF8 +- Type: bool +- Default: false + --ftp-writing-mdtm Use MDTM to set modification time (VsFtpd quirk) @@ -23545,7 +25719,7 @@ rclone about is not supported by the FTP backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. -See List of backends that do not support rclone about See rclone about +See List of backends that do not support rclone about and rclone about The implementation of : --dump headers, --dump bodies, --dump auth for debugging isn't the same as for rclone HTTP based backends - it has less @@ -23845,7 +26019,7 @@ strings. Standard options -Here are the standard options specific to google cloud storage (Google +Here are the Standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)). --gcs-client-id @@ -24123,7 +26297,7 @@ Properties: Advanced options -Here are the advanced options specific to google cloud storage (Google +Here are the Advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)). --gcs-token @@ -24163,6 +26337,40 @@ Properties: - Type: string - Required: false +--gcs-no-check-bucket + +If set, don't attempt to check the bucket exists or create it. + +This can be useful when trying to minimise the number of transactions +rclone does if you know the bucket exists already. 
+
+Properties:
+
+- Config: no_check_bucket
+- Env Var: RCLONE_GCS_NO_CHECK_BUCKET
+- Type: bool
+- Default: false
+
+--gcs-decompress
+
+If set this will decompress gzip encoded objects.
+
+It is possible to upload objects to GCS with "Content-Encoding: gzip"
+set. Normally rclone will download these files as compressed
+objects.
+
+If this flag is set then rclone will decompress these files with
+"Content-Encoding: gzip" as they are received. This means that rclone
+can't check the size and hash but the file contents will be
+decompressed.
+
+Properties:
+
+- Config: decompress
+- Env Var: RCLONE_GCS_DECOMPRESS
+- Type: bool
+- Default: false
+
 --gcs-encoding

 The encoding for the backend.
@@ -24183,7 +26391,7 @@ Backends without this capability cannot determine free space for an
 rclone mount or use policy mfs (most free space) as a member of an
 rclone union remote.

-See List of backends that do not support rclone about See rclone about
+See List of backends that do not support rclone about and rclone about

 Google Drive
@@ -24240,8 +26448,6 @@ This will guide you through an interactive setup process:
    5 | does not allow any access to read or download file content.
      \ "drive.metadata.readonly"
  scope> 1
- ID of the root folder - leave blank normally. Fill in to access "Computers" folders. (see docs).
- root_folder_id>
  Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
  service_account_file>
  Remote config
@@ -24341,9 +26547,9 @@ directories.

 Root folder ID

-You can set the root_folder_id for rclone. This is the directory
-(identified by its Folder ID) that rclone considers to be the root of
-your drive.
+This option has been moved to the advanced section. You can set the
+root_folder_id for rclone. This is the directory (identified by its
+Folder ID) that rclone considers to be the root of your drive.

 Normally you will leave this blank and rclone will determine the
 correct root to use itself. 
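As an illustration of the root folder option above, a drive remote pinned to one specific folder could look like this in rclone.conf (the remote name and folder ID below are made-up placeholders, not real values):

```ini
# hypothetical example - substitute your own folder ID from the Drive folder URL
[drive-projects]
type = drive
scope = drive
root_folder_id = 0ABCDEFabcdefghijkl
```

The same effect can be achieved ad hoc with the --drive-root-folder-id flag or the RCLONE_DRIVE_ROOT_FOLDER_ID environment variable.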
@@ -24697,9 +26903,13 @@ represent the currently available conversions. -------------------------------------------------------------------------------------------------------------------------- Extension Mime Type Description ------------------- --------------------------------------------------------------------------- -------------------------- + bmp image/bmp Windows Bitmap format + csv text/csv Standard CSV format for Spreadsheets + doc application/msword Classic Word file + docx application/vnd.openxmlformats-officedocument.wordprocessingml.document Microsoft Office Document epub application/epub+zip E-book format @@ -24708,7 +26918,8 @@ represent the currently available conversions. jpg image/jpeg A JPEG Image File - json application/vnd.google-apps.script+json JSON Text Format + json application/vnd.google-apps.script+json JSON Text Format for + Google Apps scripts odp application/vnd.oasis.opendocument.presentation Openoffice Presentation @@ -24720,6 +26931,8 @@ represent the currently available conversions. pdf application/pdf Adobe PDF Format + pjpeg image/pjpeg Progressive JPEG Image + png image/png PNG Image Format pptx application/vnd.openxmlformats-officedocument.presentationml.presentation Microsoft Office @@ -24735,6 +26948,10 @@ represent the currently available conversions. txt text/plain Plain Text + wmf application/x-msmetafile Windows Meta File + + xls application/vnd.ms-excel Classic Excel file + xlsx application/vnd.openxmlformats-officedocument.spreadsheetml.sheet Microsoft Office Spreadsheet @@ -24757,7 +26974,7 @@ Documents. Standard options -Here are the standard options specific to drive (Google Drive). +Here are the Standard options specific to drive (Google Drive). --drive-client-id @@ -24813,20 +27030,6 @@ Properties: - Allows read-only access to file metadata but - does not allow any access to read or download file content. ---drive-root-folder-id - -ID of the root folder. Leave blank normally. 
-
-Fill in to access "Computers" folders (see docs), or for rclone to use a
-non root folder as its starting point.
-
-Properties:
-
-- Config: root_folder_id
-- Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID
-- Type: string
-- Required: false
-
 --drive-service-account-file

 Service Account Credentials JSON file path.
@@ -24857,7 +27060,7 @@ Properties:

 Advanced options

-Here are the advanced options specific to drive (Google Drive).
+Here are the Advanced options specific to drive (Google Drive).

 --drive-token
@@ -24896,6 +27099,20 @@ Properties:

 - Type: string
 - Required: false

+--drive-root-folder-id
+
+ID of the root folder. Leave blank normally.
+
+Fill in to access "Computers" folders (see docs), or for rclone to use a
+non root folder as its starting point.
+
+Properties:
+
+- Config: root_folder_id
+- Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID
+- Type: string
+- Required: false
+
 --drive-service-account-credentials

 Service Account Credentials JSON blob.
@@ -25372,6 +27589,33 @@ Properties:

 - Type: bool
 - Default: false

+--drive-resource-key
+
+Resource key for accessing a link-shared file.
+
+If you need to access files shared with a link like this
+
+    https://drive.google.com/drive/folders/XXX?resourcekey=YYY&usp=sharing
+
+Then you will need to use the first part "XXX" as the "root_folder_id"
+and the second part "YYY" as the "resource_key" otherwise you will get
+404 not found errors when trying to access the directory.
+
+See: https://developers.google.com/drive/api/guides/resource-keys
+
+This resource key requirement only applies to a subset of old files.
+
+Note also that opening the folder once in the web interface (with the
+user you've authenticated rclone with) seems to be enough so that the
+resource key is not needed.
+
+Properties:
+
+- Config: resource_key
+- Env Var: RCLONE_DRIVE_RESOURCE_KEY
+- Type: string
+- Required: false
+
 --drive-encoding

 The encoding for the backend. 
@@ -25395,8 +27639,8 @@ Run them with

 The help below will explain what arguments each command takes.

-See the "rclone backend" command for more info on how to pass options
-and arguments.
+See the backend command for more info on how to pass options and
+arguments.

 These can be run on a running backend using the rc command
 backend/command.
@@ -25496,7 +27740,7 @@ This will return a JSON list of objects like this

 With the -o config parameter it will output the list in a format
 suitable for adding to a config file to make aliases for all the drives
-found.
+found and a combined drive.

     [My Drive]
     type = alias
@@ -25506,9 +27750,15 @@ found.

     type = alias
     remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:

+    [AllDrives]
+    type = combine
+    upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
+
 Adding this to the rclone config file will cause those team drives to be
-accessible with the aliases shown. This may require manual editing of
-the names.
+accessible with the aliases shown. Any illegal characters will be
+substituted with "_" and duplicate names will have numbers suffixed. It
+will also add a remote called AllDrives which shows all the shared
+drives combined into one directory tree.

 untrash
@@ -25562,6 +27812,18 @@ attempted if possible.

 Use the -i flag to see what would be copied before copying.

+exportformats
+
+Dump the export formats for debug purposes
+
+    rclone backend exportformats remote: [options] [<arguments>+]
+
+importformats
+
+Dump the import formats for debug purposes
+
+    rclone backend importformats remote: [options] [<arguments>+]
+
 Limitations

 Drive has quite a lot of rate limiting. This causes rclone to be limited
@@ -25576,8 +27838,10 @@ upload the files if you prefer.

 Limitations of Google Docs

-Google docs will appear as size -1 in rclone ls and as size 0 in
-anything which uses the VFS layer, e.g. rclone mount, rclone serve.
+Google docs will appear as size -1 in rclone ls, rclone ncdu etc, and as
+size 0 in anything which uses the VFS layer, e.g. 
rclone mount and
+rclone serve. When calculating directory totals, e.g. in rclone size and
+rclone ncdu, they will be counted in as empty files.

 This is because rclone can't find out the size of the Google docs
 without downloading them.
@@ -25661,8 +27925,9 @@ Here is how to create your own Google Drive client ID for rclone:

     Click again on "Credentials" on the left panel to go back to the
     "Credentials" screen.

-(PS: if you are a GSuite user, you could also select "Internal" instead
-of "External" above, but this has not been tested/documented so far).
+    (PS: if you are a GSuite user, you could also select "Internal"
+    instead of "External" above, but this will restrict API use to
+    Google Workspace users in your organisation).

 6.  Click on the "+ CREATE CREDENTIALS" button at the top of the
     screen, then select "OAuth client ID".
@@ -25673,13 +27938,17 @@ of "External" above, but this has not been tested/documented so far).

 8.  It will show you a client ID and client secret. Make a note of
     these.

+    (If you selected "External" at Step 5 continue to "Publish App" in
+    Steps 9 and 10. If you chose "Internal" you don't need to publish
+    and can skip straight to Step 11.)
+
 9.  Go to "Oauth consent screen" and press "Publish App"

-10. Provide the noted client ID and client secret to rclone.
-
-11. Click "OAuth consent screen", then click "PUBLISH APP" button and
+10. Click "OAuth consent screen", then click "PUBLISH APP" button and
     confirm, or add your account under "Test users".

+11. Provide the noted client ID and client secret to rclone.
+
 Be aware that, due to the "enhanced security" recently introduced by
 Google, you are theoretically expected to "submit your app for
 verification" and then wait a few weeks(!) for their response; in
@@ -25911,7 +28180,7 @@ is similar to the Sharing tab in the Google Photos web interface.

 Standard options

-Here are the standard options specific to google photos (Google Photos). 
+Here are the Standard options specific to google photos (Google Photos). --gphotos-client-id @@ -25955,7 +28224,7 @@ Properties: Advanced options -Here are the advanced options specific to google photos (Google Photos). +Here are the Advanced options specific to google photos (Google Photos). --gphotos-token @@ -26311,7 +28580,7 @@ Configuration reference Standard options -Here are the standard options specific to hasher (Better checksums for +Here are the Standard options specific to hasher (Better checksums for other remotes). --hasher-remote @@ -26350,7 +28619,7 @@ Properties: Advanced options -Here are the advanced options specific to hasher (Better checksums for +Here are the Advanced options specific to hasher (Better checksums for other remotes). --hasher-auto-size @@ -26365,6 +28634,12 @@ Properties: - Type: SizeSuffix - Default: 0 +Metadata + +Any metadata supported by the underlying remote is read and written. + +See the metadata docs for more info. + Backend commands Here are the commands specific to the hasher backend. @@ -26375,8 +28650,8 @@ Run them with The help below will explain what arguments each command takes. -See the "rclone backend" command for more info on how to pass options -and arguments. +See the backend command for more info on how to pass options and +arguments. These can be run on a running backend using the rc command backend/command. @@ -26617,7 +28892,7 @@ Invalid UTF-8 bytes will also be replaced. Standard options -Here are the standard options specific to hdfs (Hadoop distributed file +Here are the Standard options specific to hdfs (Hadoop distributed file system). --hdfs-namenode @@ -26649,7 +28924,7 @@ Properties: Advanced options -Here are the advanced options specific to hdfs (Hadoop distributed file +Here are the Advanced options specific to hdfs (Hadoop distributed file system). --hdfs-service-principal-name @@ -26704,6 +28979,460 @@ Limitations - No server-side Move or DirMove. - Checksums not implemented. 
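To round off the HDFS options above, a minimal rclone.conf entry for this backend might look like the following sketch (the hostname, username and Kerberos principal are made-up placeholders; service_principal_name is only needed on Kerberos-secured clusters):

```ini
# hypothetical example values - substitute your own cluster details
[hadoop]
type = hdfs
namenode = namenode.example.com:8020
username = root
service_principal_name = hdfs/namenode.example.com
```

With this in place, `rclone lsd hadoop:` would list the top-level directories of the cluster.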
+HiDrive + +Paths are specified as remote:path + +Paths may be as deep as required, e.g. remote:directory/subdirectory. + +The initial setup for hidrive involves getting a token from HiDrive +which you need to do in your browser. rclone config walks you through +it. + +Configuration + +Here is an example of how to make a remote called remote. First run: + + rclone config + +This will guide you through an interactive setup process: + + No remotes found - make a new one + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + name> remote + Type of storage to configure. + Choose a number from below, or type in your own value + [snip] + XX / HiDrive + \ "hidrive" + [snip] + Storage> hidrive + OAuth Client Id - Leave blank normally. + client_id> + OAuth Client Secret - Leave blank normally. + client_secret> + Access permissions that rclone should use when requesting access from HiDrive. + Leave blank normally. + scope_access> + Edit advanced config? + y/n> n + Use auto config? + y/n> y + If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=xxxxxxxxxxxxxxxxxxxxxx + Log in and authorize rclone for access + Waiting for code... + Got code + -------------------- + [remote] + type = hidrive + token = {"access_token":"xxxxxxxxxxxxxxxxxxxx","token_type":"Bearer","refresh_token":"xxxxxxxxxxxxxxxxxxxxxxx","expiry":"xxxxxxxxxxxxxxxxxxxxxxx"} + -------------------- + y) Yes this is OK (default) + e) Edit this remote + d) Delete this remote + y/e/d> y + +You should be aware that OAuth-tokens can be used to access your account +and hence should not be shared with other persons. See the below section +for more information. + +See the remote setup docs for how to set it up on a machine with no +Internet browser available. + +Note that rclone runs a webserver on your local machine to collect the +token as returned from HiDrive. 
This only runs from the moment it opens
+your browser to the moment you get back the verification code. The
+webserver runs on http://127.0.0.1:53682/. If local port 53682 is
+protected by a firewall you may need to temporarily unblock the firewall
+to complete authorization.
+
+Once configured you can then use rclone like this,
+
+List directories in top level of your HiDrive root folder
+
+    rclone lsd remote:
+
+List all the files in your HiDrive filesystem
+
+    rclone ls remote:
+
+To copy a local directory to a HiDrive directory called backup
+
+    rclone copy /home/source remote:backup
+
+Keeping your tokens safe
+
+Any OAuth-tokens will be stored by rclone in the remote's configuration
+file as unencrypted text. Anyone can use a valid refresh-token to access
+your HiDrive filesystem without knowing your password. Therefore you
+should make sure no one else can access your configuration.
+
+It is possible to encrypt rclone's configuration file. You can find
+information on securing your configuration file by viewing the
+configuration encryption docs.
+
+Invalid refresh token
+
+As can be verified here, each refresh_token (for Native Applications) is
+valid for 60 days. If used to access HiDrive, its validity will be
+automatically extended.
+
+This means that if you
+
+- Don't use the HiDrive remote for 60 days
+
+then rclone will return an error message indicating that the refresh
+token is invalid or expired.
+
+To fix this you will need to authorize rclone to access your HiDrive
+account again.
+
+Using
+
+    rclone config reconnect remote:
+
+the process is very similar to the initial setup described above.
+
+Modified time and hashes
+
+HiDrive allows modification times to be set on objects accurate to 1
+second.
+
+HiDrive supports its own hash type which is used to verify the integrity
+of file contents after successful transfers. 
+ +Restricted filename characters + +HiDrive cannot store files or folders that include / (0x2F) or +null-bytes (0x00) in their name. Any other characters can be used in the +names of files or folders. Additionally, files or folders cannot be +named either of the following: . or .. + +Therefore rclone will automatically replace these characters, if files +or folders are stored or accessed with such names. + +You can read about how this filename encoding works in general here. + +Keep in mind that HiDrive only supports file or folder names with a +length of 255 characters or less. + +Transfers + +HiDrive limits file sizes per single request to a maximum of 2 GiB. To +allow storage of larger files and allow for better upload performance, +the hidrive backend will use a chunked transfer for files larger than 96 +MiB. Rclone will upload multiple parts/chunks of the file at the same +time. Chunks in the process of being uploaded are buffered in memory, so +you may want to restrict this behaviour on systems with limited +resources. + +You can customize this behaviour using the following options: + +- chunk_size: size of file parts +- upload_cutoff: files larger or equal to this in size will use a + chunked transfer +- upload_concurrency: number of file-parts to upload at the same time + +See the below section about configuration options for more details. + +Root folder + +You can set the root folder for rclone. This is the directory that +rclone considers to be the root of your HiDrive. + +Usually, you will leave this blank, and rclone will use the root of the +account. + +However, you can set this to restrict rclone to a specific folder +hierarchy. + +This works by prepending the contents of the root_prefix option to any +paths accessed by rclone. 
For example, the following two ways to access +the home directory are equivalent: + + rclone lsd --hidrive-root-prefix="/users/test/" remote:path + + rclone lsd remote:/users/test/path + +See the below section about configuration options for more details. + +Directory member count + +By default, rclone will know the number of directory members contained +in a directory. For example, rclone lsd uses this information. + +The acquisition of this information will result in additional time costs +for HiDrive's API. When dealing with large directory structures, it may +be desirable to circumvent this time cost, especially when this +information is not explicitly needed. For this, the +disable_fetching_member_count option can be used. + +See the below section about configuration options for more details. + +Standard options + +Here are the Standard options specific to hidrive (HiDrive). + +--hidrive-client-id + +OAuth Client Id. + +Leave blank normally. + +Properties: + +- Config: client_id +- Env Var: RCLONE_HIDRIVE_CLIENT_ID +- Type: string +- Required: false + +--hidrive-client-secret + +OAuth Client Secret. + +Leave blank normally. + +Properties: + +- Config: client_secret +- Env Var: RCLONE_HIDRIVE_CLIENT_SECRET +- Type: string +- Required: false + +--hidrive-scope-access + +Access permissions that rclone should use when requesting access from +HiDrive. + +Properties: + +- Config: scope_access +- Env Var: RCLONE_HIDRIVE_SCOPE_ACCESS +- Type: string +- Default: "rw" +- Examples: + - "rw" + - Read and write access to resources. + - "ro" + - Read-only access to resources. + +Advanced options + +Here are the Advanced options specific to hidrive (HiDrive). + +--hidrive-token + +OAuth Access Token as a JSON blob. + +Properties: + +- Config: token +- Env Var: RCLONE_HIDRIVE_TOKEN +- Type: string +- Required: false + +--hidrive-auth-url + +Auth server URL. + +Leave blank to use the provider defaults. 
+
+Properties:
+
+- Config: auth_url
+- Env Var: RCLONE_HIDRIVE_AUTH_URL
+- Type: string
+- Required: false
+
+--hidrive-token-url
+
+Token server URL.
+
+Leave blank to use the provider defaults.
+
+Properties:
+
+- Config: token_url
+- Env Var: RCLONE_HIDRIVE_TOKEN_URL
+- Type: string
+- Required: false
+
+--hidrive-scope-role
+
+User-level that rclone should use when requesting access from HiDrive.
+
+Properties:
+
+- Config: scope_role
+- Env Var: RCLONE_HIDRIVE_SCOPE_ROLE
+- Type: string
+- Default: "user"
+- Examples:
+    - "user"
+        - User-level access to management permissions.
+        - This will be sufficient in most cases.
+    - "admin"
+        - Extensive access to management permissions.
+    - "owner"
+        - Full access to management permissions.
+
+--hidrive-root-prefix
+
+The root/parent folder for all paths.
+
+Fill in to use the specified folder as the parent for all paths given to
+the remote. This way rclone can use any folder as its starting point.
+
+Properties:
+
+- Config: root_prefix
+- Env Var: RCLONE_HIDRIVE_ROOT_PREFIX
+- Type: string
+- Default: "/"
+- Examples:
+    - "/"
+        - The topmost directory accessible by rclone.
+        - This will be equivalent to "root" if rclone uses a regular
+          HiDrive user account.
+    - "root"
+        - The topmost directory of the HiDrive user account.
+    - ""
+        - This specifies that there is no root-prefix for your paths.
+        - When using this you will always need to specify paths to
+          this remote with a valid parent e.g. "remote:/path/to/dir"
+          or "remote:root/path/to/dir".
+
+--hidrive-endpoint
+
+Endpoint for the service.
+
+This is the URL that API-calls will be made to.
+
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_HIDRIVE_ENDPOINT
+- Type: string
+- Default: "https://api.hidrive.strato.com/2.1"
+
+--hidrive-disable-fetching-member-count
+
+Do not fetch number of objects in directories unless it is absolutely
+necessary.
+
+Requests may be faster if the number of objects in subdirectories is not
+fetched. 
+
+Properties:
+
+- Config: disable_fetching_member_count
+- Env Var: RCLONE_HIDRIVE_DISABLE_FETCHING_MEMBER_COUNT
+- Type: bool
+- Default: false
+
+--hidrive-chunk-size
+
+Chunksize for chunked uploads.
+
+Any files larger than the configured cutoff (or files of unknown size)
+will be uploaded in chunks of this size.
+
+The upper limit for this is 2147483647 bytes (about 2.000Gi). That is
+the maximum amount of bytes a single upload-operation will support.
+Setting this above the upper limit or to a negative value will cause
+uploads to fail.
+
+Setting this to larger values may increase the upload speed at the cost
+of using more memory. It can be set to smaller values to save on
+memory.
+
+Properties:
+
+- Config: chunk_size
+- Env Var: RCLONE_HIDRIVE_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 48Mi
+
+--hidrive-upload-cutoff
+
+Cutoff/Threshold for chunked uploads.
+
+Any files larger than this will be uploaded in chunks of the configured
+chunksize.
+
+The upper limit for this is 2147483647 bytes (about 2.000Gi). That is
+the maximum amount of bytes a single upload-operation will support.
+Setting this above the upper limit will cause uploads to fail.
+
+Properties:
+
+- Config: upload_cutoff
+- Env Var: RCLONE_HIDRIVE_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 96Mi
+
+--hidrive-upload-concurrency
+
+Concurrency for chunked uploads.
+
+This is the upper limit for how many transfers for the same file are
+running concurrently. Setting this to a value smaller than 1 will
+cause uploads to deadlock.
+
+If you are uploading small numbers of large files over high-speed links
+and these uploads do not fully utilize your bandwidth, then increasing
+this may help to speed up the transfers.
+
+Properties:
+
+- Config: upload_concurrency
+- Env Var: RCLONE_HIDRIVE_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 4
+
+--hidrive-encoding
+
+The encoding for the backend.
+
+See the encoding section in the overview for more info. 
+ +Properties: + +- Config: encoding +- Env Var: RCLONE_HIDRIVE_ENCODING +- Type: MultiEncoder +- Default: Slash,Dot + +Limitations + +Symbolic links + +HiDrive is able to store symbolic links (symlinks) by design, for +example, when unpacked from a zip archive. + +There exists no direct mechanism to manage native symlinks in remotes. +As such this implementation has chosen to ignore any native symlinks +present in the remote. rclone will not be able to access or show any +symlinks stored in the hidrive-remote. This means symlinks cannot be +individually removed, copied, or moved, except when removing, copying, +or moving the parent folder. + +This does not affect the .rclonelink-files that rclone uses to encode +and store symbolic links. + +Sparse files + +It is possible to store sparse files in HiDrive. + +Note that copying a sparse file will expand the holes into null-byte +(0x00) regions that will then consume disk space. Likewise, when +downloading a sparse file, the resulting file will have null-byte +regions in the place of file holes. + HTTP The HTTP remote is a read only remote for reading files of a webserver. @@ -26750,7 +29479,7 @@ This will guide you through an interactive setup process: Type of storage to configure. Choose a number from below, or type in your own value [snip] - XX / http Connection + XX / HTTP \ "http" [snip] Storage> http @@ -26823,11 +29552,11 @@ or: Standard options -Here are the standard options specific to http (http Connection). +Here are the Standard options specific to http (HTTP). --http-url -URL of http host to connect to. +URL of HTTP host to connect to. E.g. "https://example.com", or "https://user:pass@example.com" to use a username and password. @@ -26841,7 +29570,7 @@ Properties: Advanced options -Here are the advanced options specific to http (http Connection). +Here are the Advanced options specific to http (HTTP). --http-headers @@ -26918,7 +29647,7 @@ rclone about is not supported by the HTTP backend. 
Backends without this capability cannot determine free space for an
 rclone mount or use policy mfs (most free space) as a member of an
 rclone union remote.

-See List of backends that do not support rclone about See rclone about
+See List of backends that do not support rclone about and rclone about

 Hubic
@@ -27023,7 +29752,7 @@ are the same.

 Standard options

-Here are the standard options specific to hubic (Hubic).
+Here are the Standard options specific to hubic (Hubic).

 --hubic-client-id
@@ -27053,7 +29782,7 @@ Properties:

 Advanced options

-Here are the advanced options specific to hubic (Hubic).
+Here are the Advanced options specific to hubic (Hubic).

 --hubic-token
@@ -27148,6 +29877,331 @@ The Swift API doesn't return a correct MD5SUM for segmented files
 (Dynamic or Static Large Objects) so rclone won't check or use the
 MD5SUM for these.

+Internet Archive
+
+The Internet Archive backend utilizes Items on archive.org.
+
+Refer to IAS3 API documentation for the API this backend uses.
+
+Paths are specified as remote:bucket (or remote: for the lsd command).
+You may put subdirectories in too, e.g. remote:item/path/to/dir.
+
+Once you have made a remote (see the provider specific section above)
+you can use it like this:
+
+Unlike S3, listing all items uploaded by you isn't supported.
+
+Make a new item
+
+    rclone mkdir remote:item
+
+List the contents of an item
+
+    rclone ls remote:item
+
+Sync /home/local/directory to the remote item, deleting any excess files
+in the item.
+
+    rclone sync -i /home/local/directory remote:item
+
+Notes
+
+Because of Internet Archive's architecture, it enqueues write operations
+(and extra post-processing) in a per-item queue. You can check the
+item's queue at https://catalogd.archive.org/history/item-name-here .
+Because of that, all uploads/deletes will not show up immediately and
+take some time to be available. The per-item queue is enqueued to
+another queue, Item Deriver Queue. 
You can check the status of Item Deriver
+Queue here. This queue has a limit, and it may block you from uploading,
+or even deleting. You should avoid uploading a lot of small files for
+better behavior.
+
+You can optionally wait for the server's processing to finish by
+setting a non-zero value for the wait_archive key. By making it wait,
+rclone can do normal file comparison. Make sure to set a large enough
+value (e.g. 30m0s for smaller files) as it can take a long time
+depending on the server's queue.
+
+About metadata
+
+This backend supports setting, updating and reading metadata of each
+file. The metadata will appear as file metadata on Internet Archive.
+However, some fields are reserved by both Internet Archive and rclone.
+
+The following are reserved by Internet Archive:
+
+- name
+- source
+- size
+- md5
+- crc32
+- sha1
+- format
+- old_version
+- viruscheck
+
+Trying to set values to these keys is ignored with a warning. Only
+setting mtime is an exception: it behaves identically to setting
+ModTime.
+
+rclone reserves all the keys starting with rclone-. Setting a value for
+these keys will give you a warning, but the values are set as
+requested.
+
+If there are multiple values for a key, only the first one is returned.
+This is a limitation of rclone, which supports only one value per key.
+This can happen, for example, after a server-side copy.
+
+Reading metadata will also provide custom (neither standard nor
+reserved) keys.
+
+Configuration
+
+Here is an example of making an internetarchive configuration.
+
+First run
+
+    rclone config
+
+This will guide you through an interactive setup process.
+
+    No remotes found, make a new one?
+    n) New remote
+    s) Set configuration password
+    q) Quit config
+    n/s/q> n
+    name> remote
+    Option Storage.
+    Type of storage to configure.
+    Choose a number from below, or type in your own value. 
+ XX / InternetArchive Items + \ (internetarchive) + Storage> internetarchive + Option access_key_id. + IAS3 Access Key. + Leave blank for anonymous access. + You can find one here: https://archive.org/account/s3.php + Enter a value. Press Enter to leave empty. + access_key_id> XXXX + Option secret_access_key. + IAS3 Secret Key (password). + Leave blank for anonymous access. + Enter a value. Press Enter to leave empty. + secret_access_key> XXXX + Edit advanced config? + y) Yes + n) No (default) + y/n> y + Option endpoint. + IAS3 Endpoint. + Leave blank for default value. + Enter a string value. Press Enter for the default (https://s3.us.archive.org). + endpoint> + Option front_endpoint. + Host of InternetArchive Frontend. + Leave blank for default value. + Enter a string value. Press Enter for the default (https://archive.org). + front_endpoint> + Option disable_checksum. + Don't store MD5 checksum with object metadata. + Normally rclone will calculate the MD5 checksum of the input before + uploading it so it can ask the server to check the object against checksum. + This is great for data integrity checking but can cause long delays for + large files to start uploading. + Enter a boolean value (true or false). Press Enter for the default (true). + disable_checksum> true + Option encoding. + The encoding for the backend. + See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. + Enter a encoder.MultiEncoder value. Press Enter for the default (Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot). + encoding> + Edit advanced config? + y) Yes + n) No (default) + y/n> n + -------------------- + [remote] + type = internetarchive + access_key_id = XXXX + secret_access_key = XXXX + -------------------- + y) Yes this is OK (default) + e) Edit this remote + d) Delete this remote + y/e/d> y + +Standard options + +Here are the Standard options specific to internetarchive (Internet +Archive). 
+ +--internetarchive-access-key-id + +IAS3 Access Key. + +Leave blank for anonymous access. You can find one here: +https://archive.org/account/s3.php + +Properties: + +- Config: access_key_id +- Env Var: RCLONE_INTERNETARCHIVE_ACCESS_KEY_ID +- Type: string +- Required: false + +--internetarchive-secret-access-key + +IAS3 Secret Key (password). + +Leave blank for anonymous access. + +Properties: + +- Config: secret_access_key +- Env Var: RCLONE_INTERNETARCHIVE_SECRET_ACCESS_KEY +- Type: string +- Required: false + +Advanced options + +Here are the Advanced options specific to internetarchive (Internet +Archive). + +--internetarchive-endpoint + +IAS3 Endpoint. + +Leave blank for default value. + +Properties: + +- Config: endpoint +- Env Var: RCLONE_INTERNETARCHIVE_ENDPOINT +- Type: string +- Default: "https://s3.us.archive.org" + +--internetarchive-front-endpoint + +Host of InternetArchive Frontend. + +Leave blank for default value. + +Properties: + +- Config: front_endpoint +- Env Var: RCLONE_INTERNETARCHIVE_FRONT_ENDPOINT +- Type: string +- Default: "https://archive.org" + +--internetarchive-disable-checksum + +Don't ask the server to test against MD5 checksum calculated by rclone. +Normally rclone will calculate the MD5 checksum of the input before +uploading it so it can ask the server to check the object against +checksum. This is great for data integrity checking but can cause long +delays for large files to start uploading. + +Properties: + +- Config: disable_checksum +- Env Var: RCLONE_INTERNETARCHIVE_DISABLE_CHECKSUM +- Type: bool +- Default: true + +--internetarchive-wait-archive + +Timeout for waiting the server's processing tasks (specifically archive +and book_op) to finish. Only enable if you need to be guaranteed to be +reflected after write operations. 0 to disable waiting. No errors to be +thrown in case of timeout. 
+ +Properties: + +- Config: wait_archive +- Env Var: RCLONE_INTERNETARCHIVE_WAIT_ARCHIVE +- Type: Duration +- Default: 0s + +--internetarchive-encoding + +The encoding for the backend. + +See the encoding section in the overview for more info. + +Properties: + +- Config: encoding +- Env Var: RCLONE_INTERNETARCHIVE_ENCODING +- Type: MultiEncoder +- Default: Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot + +Metadata + +Metadata fields provided by Internet Archive. If there are multiple +values for a key, only the first one is returned. This is a limitation +of Rclone, that supports one value per one key. + +Owner is able to add custom keys. Metadata feature grabs all the keys +including them. + +Here are the possible system metadata items for the internetarchive +backend. + + ---------------------------------------------------------------------------------------------------------------------- + Name Help Type Example Read Only + --------------------- ------------------ ----------- -------------------------------------------- -------------------- + crc32 CRC32 calculated string 01234567 N + by Internet + Archive + + format Name of format string Comma-Separated Values N + identified by + Internet Archive + + md5 MD5 hash string 01234567012345670123456701234567 N + calculated by + Internet Archive + + mtime Time of last RFC 3339 2006-01-02T15:04:05.999999999Z N + modification, + managed by Rclone + + name Full file path, filename backend/internetarchive/internetarchive.go N + without the bucket + part + + old_version Whether the file boolean true N + was replaced and + moved by + keep-old-version + flag + + rclone-ia-mtime Time of last RFC 3339 2006-01-02T15:04:05.999999999Z N + modification, + managed by + Internet Archive + + rclone-mtime Time of last RFC 3339 2006-01-02T15:04:05.999999999Z N + modification, + managed by Rclone + + rclone-update-track Random value used string aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa N + by Rclone for + tracking changes + inside Internet + Archive + 
+ sha1 SHA1 hash string 0123456701234567012345670123456701234567 N + calculated by + Internet Archive + + size File size in bytes decimal 123456 N + number + + source The source of the string original N + file + + viruscheck The last time unixtime 1654191352 N + viruscheck process + was run for the + file (?) + ---------------------------------------------------------------------------------------------------------------------- + +See the metadata docs for more info. + Jottacloud Jottacloud is a cloud storage service provider from a Norwegian company, @@ -27220,56 +30274,78 @@ This will guide you through an interactive setup process: q) Quit config n/s/q> n name> remote + Option Storage. Type of storage to configure. - Enter a string value. Press Enter for the default (""). - Choose a number from below, or type in your own value + Choose a number from below, or type in your own value. [snip] XX / Jottacloud - \ "jottacloud" + \ (jottacloud) [snip] Storage> jottacloud - ** See help for jottacloud backend at: https://rclone.org/jottacloud/ ** - - Edit advanced config? (y/n) - y) Yes - n) No - y/n> n - Remote config - Use legacy authentication?. - This is only required for certain whitelabel versions of Jottacloud and not recommended for normal users. + Edit advanced config? y) Yes n) No (default) y/n> n - - Generate a personal login token here: https://www.jottacloud.com/web/secure + Option config_type. + Select authentication type. + Choose a number from below, or type in an existing string value. + Press Enter for the default (standard). + / Standard authentication. + 1 | Use this if you're a normal Jottacloud user. + \ (standard) + / Legacy authentication. + 2 | This is only required for certain whitelabel versions of Jottacloud and not recommended for normal users. + \ (legacy) + / Telia Cloud authentication. + 3 | Use this if you are using Telia Cloud. + \ (telia) + / Tele2 Cloud authentication. + 4 | Use this if you are using Tele2 Cloud. 
+ \ (tele2) + config_type> 1 + Personal login token. + Generate here: https://www.jottacloud.com/web/secure Login Token> - - Do you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client? - + Use a non-standard device/mountpoint? + Choosing no, the default, will let you access the storage used for the archive + section of the official Jottacloud client. If you instead want to access the + sync or the backup section, for example, you must choose yes. y) Yes - n) No + n) No (default) y/n> y - Please select the device to use. Normally this will be Jotta - Choose a number from below, or type in an existing value + Option config_device. + The device to use. In standard setup the built-in Jotta device is used, + which contains predefined mountpoints for archive, sync etc. All other devices + are treated as backup devices by the official Jottacloud client. You may create + a new by entering a unique name. + Choose a number from below, or type in your own string value. + Press Enter for the default (DESKTOP-3H31129). 1 > DESKTOP-3H31129 2 > Jotta - Devices> 2 - Please select the mountpoint to user. Normally this will be Archive - Choose a number from below, or type in an existing value + config_device> 2 + Option config_mountpoint. + The mountpoint to use for the built-in device Jotta. + The standard setup is to use the Archive mountpoint. Most other mountpoints + have very limited support in rclone and should generally be avoided. + Choose a number from below, or type in an existing string value. + Press Enter for the default (Archive). 
1 > Archive - 2 > Links + 2 > Shared 3 > Sync - - Mountpoints> 1 + config_mountpoint> 1 -------------------- - [jotta] + [remote] type = jottacloud + configVersion = 1 + client_id = jottacli + client_secret = + tokenURL = https://id.jottacloud.com/auth/realms/jottacloud/protocol/openid-connect/token token = {........} + username = 2940e57271a93d987d6f8a21 device = Jotta mountpoint = Archive - configVersion = 1 -------------------- - y) Yes this is OK + y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y @@ -27291,21 +30367,30 @@ To copy a local directory to an Jottacloud directory called backup Devices and Mountpoints The official Jottacloud client registers a device for each computer you -install it on, and then creates a mountpoint for each folder you select -for Backup. The web interface uses a special device called Jotta for the -Archive and Sync mountpoints. +install it on, and shows them in the backup section of the user +interface. For each folder you select for backup it will create a +mountpoint within this device. A built-in device called Jotta is +special, and contains mountpoints Archive, Sync and some others, used +for corresponding features in official clients. -With rclone you'll want to use the Jotta/Archive device/mountpoint in -most cases, however if you want to access files uploaded by any of the -official clients rclone provides the option to select other devices and -mountpoints during config. Note that uploading files is currently not -supported to other devices than Jotta. +With rclone you'll want to use the standard Jotta/Archive +device/mountpoint in most cases. However, you may for example want to +access files from the sync or backup functionality provided by the +official clients, and rclone therefore provides the option to select +other devices and mountpoints during config. -The built-in Jotta device may also contain several other mountpoints, -such as: Latest, Links, Shared and Trash. 
These are special mountpoints
-with a different internal representation than the "regular" mountpoints.
-Rclone will only to a very limited degree support them. Generally you
-should avoid these, unless you know what you are doing.
+You are allowed to create new devices and mountpoints. All devices
+except the built-in Jotta device are treated as backup devices by
+official Jottacloud clients, and the mountpoints on them are individual
+backup sets.
+
+With the built-in Jotta device, only existing, built-in mountpoints can
+be selected. In addition to the mentioned Archive and Sync, it may
+contain several other mountpoints such as: Latest, Links, Shared and
+Trash. All of these are special mountpoints with a different internal
+representation than the "regular" mountpoints. Rclone supports them
+only to a very limited degree. Generally you should avoid these,
+unless you know what you are doing.

 --fast-list

@@ -27384,7 +30469,7 @@ current usage.

 Advanced options

-Here are the advanced options specific to jottacloud (Jottacloud).
+Here are the Advanced options specific to jottacloud (Jottacloud).

 --jottacloud-md5-memory-limit

@@ -27584,7 +30669,7 @@ strings.

 Standard options

-Here are the standard options specific to koofr (Koofr, Digi Storage and
+Here are the Standard options specific to koofr (Koofr, Digi Storage and
 other Koofr-compatible storage providers).

 --koofr-provider

@@ -27674,7 +30759,7 @@ Properties:

 Advanced options

-Here are the advanced options specific to koofr (Koofr, Digi Storage and
+Here are the Advanced options specific to koofr (Koofr, Digi Storage and
 other Koofr-compatible storage providers).

 --koofr-mountid

@@ -28024,7 +31109,7 @@ strings.

 Standard options

-Here are the standard options specific to mailru (Mail.ru Cloud).
+Here are the Standard options specific to mailru (Mail.ru Cloud).

 --mailru-user

@@ -28078,7 +31163,7 @@ Properties:

 Advanced options

-Here are the advanced options specific to mailru (Mail.ru Cloud).
+Here are the Advanced options specific to mailru (Mail.ru Cloud).

 --mailru-speedup-file-patterns

@@ -28315,6 +31400,38 @@ Use rclone dedupe to fix duplicated files.

 Failure to log-in

+Object not found
+
+If you are connecting to your Mega remote for the first time, to test
+access and synchronisation, you may receive an error such as
+
+    Failed to create file system for "my-mega-remote:":
+    couldn't login: Object (typically, node or user) not found
+
+The diagnostic steps often recommended in the rclone forum start with
+the MEGAcmd utility. Note that this refers to the official C++ command
+from https://github.com/meganz/MEGAcmd and not the Go command from
+t3rm1n4l/megacmd, which is no longer maintained.
+
+Follow the instructions for installing MEGAcmd and try accessing your
+remote as they recommend. This will establish whether you can log in
+with MEGAcmd, and give you diagnostic information to help you, or to
+search or work with others in the forum.
+
+    MEGA CMD> login me@example.com
+    Password:
+    Fetching nodes ...
+    Loading transfers from local cache
+    Login complete as me@example.com
+    me@example.com:/$
+
+Note that some users have found issues with passwords containing special
+characters. If you cannot log in with rclone, but MEGAcmd logs in just
+fine, then consider changing your password temporarily to pure
+alphanumeric characters, in case that helps.
+
+Repeated commands block access
+
 Mega remotes seem to get blocked (reject logins) under "heavy use". We
 haven't worked out the exact blocking rules but it seems to be related
 to fast paced, successive rclone commands.

@@ -28361,7 +31478,7 @@ got the remote blocked for a while.

 Standard options

-Here are the standard options specific to mega (Mega).
+Here are the Standard options specific to mega (Mega).

 --mega-user

@@ -28389,7 +31506,7 @@ Properties:

 Advanced options

-Here are the advanced options specific to mega (Mega).
--mega-debug @@ -28515,6 +31632,8 @@ See all buckets rclone lsd remote: The initial setup for Netstorage involves getting an account and secret. Use rclone config to walk you through the setup process. +Configuration + Here's an example of how to make a remote called ns1. 1. To begin the interactive configuration process, enter this command: @@ -28621,6 +31740,8 @@ You can't perform operations between different remotes. rclone move ns1:/974012/testing/notes.txt ns1:/974450/testing2/ +Features + Symlink Support The Netstorage backend changes the rclone --links, -l behavior. When @@ -28669,7 +31790,7 @@ Content Management Shell (CMShell). Rclone will not guarantee correctness of operations with implicit directories which might have been created as a result of using an upload API directly. -ListR Feature +--fast-list / ListR support NetStorage remote supports the ListR feature by using the "list" NetStorage API action to return a lexicographical list of all objects @@ -28696,7 +31817,7 @@ files in the directory and directory size as -1 when ListR method is used. The workaround is to pass "--disable listR" flag if these numbers are important in the output. -Purge Feature +Purge NetStorage remote supports the purge feature by using the "quick-delete" NetStorage API action. The quick-delete action is disabled by default @@ -28712,7 +31833,7 @@ accessible. Standard options -Here are the standard options specific to netstorage (Akamai +Here are the Standard options specific to netstorage (Akamai NetStorage). --netstorage-host @@ -28757,7 +31878,7 @@ Properties: Advanced options -Here are the advanced options specific to netstorage (Akamai +Here are the Advanced options specific to netstorage (Akamai NetStorage). --netstorage-protocol @@ -28789,8 +31910,8 @@ Run them with The help below will explain what arguments each command takes. -See the "rclone backend" command for more info on how to pass options -and arguments. 
+See the backend command for more info on how to pass options and
+arguments.

 These can be run on a running backend using the rc command
 backend/command.

@@ -28966,7 +32087,7 @@ untrusted environment such as a CI build server.

 Standard options

-Here are the standard options specific to azureblob (Microsoft Azure
+Here are the Standard options specific to azureblob (Microsoft Azure
 Blob Storage).

 --azureblob-account

@@ -29067,7 +32188,7 @@ Properties:

 Advanced options

-Here are the advanced options specific to azureblob (Microsoft Azure
+Here are the Advanced options specific to azureblob (Microsoft Azure
 Blob Storage).

 --azureblob-msi-object-id

@@ -29332,15 +32453,21 @@ backend.

 Backends without this capability cannot determine free space for an
 rclone mount or use policy mfs (most free space) as a member of an
 rclone union remote.

-See List of backends that do not support rclone about See rclone about
+See List of backends that do not support rclone about and rclone about

 Azure Storage Emulator Support

-You can test rclone with storage emulator locally, to do this make sure
-azure storage emulator installed locally and set up a new remote with
-rclone config follow instructions described in introduction, set
-use_emulator config as true, you do not need to provide default account
-name or key if using emulator.
+You can run rclone with a storage emulator (usually Azurite).
+
+To do this, just set up a new remote with rclone config following the
+instructions described in the introduction and set the use_emulator
+config option to true. You do not need to provide a default account
+name or key.
+
+Also, if you want to access a storage emulator instance running on a
+different machine, you can override the Endpoint parameter in the
+advanced settings, setting it to
+http(s)://<machine-address>:<port>/devstoreaccount1 (e.g.
+http://10.254.2.5:10000/devstoreaccount1).
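As a sketch of the emulator setup just described (the remote names azlocal and azremote are hypothetical, and 10.254.2.5:10000 is simply the example address from above), a remote pointing at an emulator can also be created non-interactively:

```shell
# Remote talking to an emulator running on this machine (default endpoint)
rclone config create azlocal azureblob use_emulator true

# Remote talking to an emulator on another machine: override the endpoint
rclone config create azremote azureblob use_emulator true \
    endpoint http://10.254.2.5:10000/devstoreaccount1

# Verify the setup by listing containers on the emulator
rclone lsd azlocal:
```
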

 Microsoft OneDrive

@@ -29457,12 +32584,16 @@

 To copy a local directory to an OneDrive directory called backup

 Getting your own Client ID and Key

-You can use your own Client ID if the default (client_id left blank) one
-doesn't work for you or you see lots of throttling. The default Client
-ID and Key is shared by all rclone users when performing requests.
+rclone uses a default Client ID when talking to OneDrive, unless a
+custom client_id is specified in the config. The default Client ID and
+Key are shared by all rclone users when performing requests.

-If you are having problems with them (E.g., seeing a lot of throttling),
-you can get your own Client ID and Key by following the steps below:
+You may choose to create and use your own Client ID, in case the default
+one does not work well for you. For example, you might see throttling.
+
+Creating Client ID for OneDrive Personal
+
+To create your own Client ID, please follow these steps:

 1. Open
    https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade

@@ -29480,17 +32611,47 @@ you can get your own Client ID and Key by following the steps below:

    select Microsoft Graph then select delegated permissions.
 5. Search and select the following permissions: Files.Read,
    Files.ReadWrite, Files.Read.All, Files.ReadWrite.All,
-   offline_access, User.Read, and optionally Sites.Read.All (see
-   below). Once selected click Add permissions at the bottom.
+   offline_access, User.Read and Sites.Read.All (if custom access
+   scopes are configured, select the permissions accordingly). Once
+   selected click Add permissions at the bottom.

 Now the application is complete. Run rclone config to create or edit a
 OneDrive remote. Supply the app ID and password as Client ID and Secret,
 respectively. rclone will walk you through the remaining steps.

+The access_scopes option allows you to configure the permissions
+requested by rclone. See Microsoft Docs for more information about the
+different scopes.
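As an illustration of the access_scopes option (the remote name onedrive is an assumption; the scope list is one of the documented examples), the requested scopes can be narrowed for an existing remote from the command line:

```shell
# Ask rclone to request read-only scopes only
rclone config update onedrive \
    access_scopes "Files.Read Files.Read.All Sites.Read.All offline_access"

# Re-authorize so the new scopes take effect
rclone config reconnect onedrive:
```

Note that an existing token keeps the permissions it was granted with, so re-authorizing after changing the scopes is normally needed.
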
+
 The Sites.Read.All permission is required if you need to search
 SharePoint sites when configuring the remote. However, if that
-permission is not assigned, you need to set disable_site_permission
-option to true in the advanced options.
+permission is not assigned, you need to exclude Sites.Read.All from your
+access scopes or set the disable_site_permission option to true in the
+advanced options.
+
+Creating Client ID for OneDrive Business
+
+The steps for OneDrive Personal may or may not work for OneDrive
+Business, depending on the security settings of the organization. A
+common error is that the publisher of the App is not verified.
+
+You may try to verify your account, or try to limit the App to your
+organization only, as shown below.
+
+1. Make sure to create the App with your business account.
+2. Follow the steps above to create an App. However, we need a
+   different account type here:
+   Accounts in this organizational directory only (*** - Single tenant).
+   Note that you can also change the account type after creating the
+   App.
+3. Find the tenant ID of your organization.
+4. In the rclone config, set auth_url to
+   https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/authorize.
+5. In the rclone config, set token_url to
+   https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token.
+
+Note: If you have a special region, you may need a different host in
+steps 4 and 5. Here are some hints.

 Modification time and hashes

@@ -29547,7 +32708,7 @@ the OneDrive website.

 Standard options

-Here are the standard options specific to onedrive (Microsoft OneDrive).
+Here are the Standard options specific to onedrive (Microsoft OneDrive).

 --onedrive-client-id

@@ -29597,7 +32758,7 @@ Properties:

 Advanced options

-Here are the advanced options specific to onedrive (Microsoft OneDrive).
--onedrive-token @@ -29691,6 +32852,32 @@ Properties: - Type: string - Required: false +--onedrive-access-scopes + +Set scopes to be requested by rclone. + +Choose or manually enter a custom space separated list with all scopes, +that rclone should request. + +Properties: + +- Config: access_scopes +- Env Var: RCLONE_ONEDRIVE_ACCESS_SCOPES +- Type: SpaceSepList +- Default: Files.Read Files.ReadWrite Files.Read.All + Files.ReadWrite.All Sites.Read.All offline_access +- Examples: + - "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All + Sites.Read.All offline_access" + - Read and write access to all resources + - "Files.Read Files.Read.All Sites.Read.All offline_access" + - Read only access to all resources + - "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All + offline_access" + - Read and write access to all resources, without the ability + to browse SharePoint sites. + - Same as if disable_site_permission was set to true + --onedrive-disable-site-permission Disable the request for Sites.Read.All permission. @@ -29994,12 +33181,12 @@ Replacing/deleting existing files on Sharepoint gets "item not found" It is a known issue that Sharepoint (not OneDrive or OneDrive for Business) may return "item not found" errors when users try to replace or delete uploaded files; this seems to mainly affect Office files -(.docx, .xlsx, etc.). As a workaround, you may use the ---backup-dir command line argument so rclone moves the -files to be replaced/deleted into a given backup directory (instead of -directly replacing/deleting them). For example, to instruct rclone to -move the files into the directory rclone-backup-dir on backend -mysharepoint, you may use: +(.docx, .xlsx, etc.) and web files (.html, .aspx, etc.). As a +workaround, you may use the --backup-dir command line +argument so rclone moves the files to be replaced/deleted into a given +backup directory (instead of directly replacing/deleting them). 
For +example, to instruct rclone to move the files into the directory +rclone-backup-dir on backend mysharepoint, you may use: --backup-dir mysharepoint:rclone-backup-dir @@ -30014,7 +33201,7 @@ account. You can't do much about it, maybe write an email to your admins. However, there are other ways to interact with your OneDrive account. -Have a look at the webdav backend: https://rclone.org/webdav/#sharepoint +Have a look at the WebDAV backend: https://rclone.org/webdav/#sharepoint invalid_grant (AADSTS50076) @@ -30135,7 +33322,7 @@ strings. Standard options -Here are the standard options specific to opendrive (OpenDrive). +Here are the Standard options specific to opendrive (OpenDrive). --opendrive-username @@ -30163,7 +33350,7 @@ Properties: Advanced options -Here are the advanced options specific to opendrive (OpenDrive). +Here are the Advanced options specific to opendrive (OpenDrive). --opendrive-encoding @@ -30208,7 +33395,7 @@ rclone about is not supported by the OpenDrive backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. -See List of backends that do not support rclone about See rclone about +See List of backends that do not support rclone about and rclone about QingStor @@ -30348,7 +33535,7 @@ strings. Standard options -Here are the standard options specific to qingstor (QingCloud Object +Here are the Standard options specific to qingstor (QingCloud Object Storage). --qingstor-env-auth @@ -30434,7 +33621,7 @@ Properties: Advanced options -Here are the advanced options specific to qingstor (QingCloud Object +Here are the Advanced options specific to qingstor (QingCloud Object Storage). --qingstor-connection-retries @@ -30522,7 +33709,7 @@ rclone about is not supported by the qingstor backend. 
Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. -See List of backends that do not support rclone about See rclone about +See List of backends that do not support rclone about and rclone about Sia @@ -30641,7 +33828,7 @@ Once configured, you can then use rclone like this: Standard options -Here are the standard options specific to sia (Sia Decentralized Cloud). +Here are the Standard options specific to sia (Sia Decentralized Cloud). --sia-api-url @@ -30676,7 +33863,7 @@ Properties: Advanced options -Here are the advanced options specific to sia (Sia Decentralized Cloud). +Here are the Advanced options specific to sia (Sia Decentralized Cloud). --sia-user-agent @@ -30948,7 +34135,7 @@ strings. Standard options -Here are the standard options specific to swift (OpenStack Swift +Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)). --swift-env-auth @@ -31193,7 +34380,7 @@ Properties: Advanced options -Here are the advanced options specific to swift (OpenStack Swift +Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)). --swift-leave-parts-on-error @@ -31410,6 +34597,14 @@ Deleted files will be moved to the trash. Your subscription level will determine how long items stay in the trash. rclone cleanup can be used to empty the trash. +Emptying the trash + +Due to an API limitation, the rclone cleanup command will only work if +you set your username and password in the advanced options for this +backend. Since we generally want to avoid storing user passwords in the +rclone config file, we advise you to only set this up if you need the +rclone cleanup command to work. + Root folder ID You can set the root_folder_id for rclone. This is the directory @@ -31433,7 +34628,7 @@ config. 
Standard options -Here are the standard options specific to pcloud (Pcloud). +Here are the Standard options specific to pcloud (Pcloud). --pcloud-client-id @@ -31463,7 +34658,7 @@ Properties: Advanced options -Here are the advanced options specific to pcloud (Pcloud). +Here are the Advanced options specific to pcloud (Pcloud). --pcloud-token @@ -31546,6 +34741,35 @@ Properties: - "eapi.pcloud.com" - EU region +--pcloud-username + +Your pcloud username. + +This is only required when you want to use the cleanup command. Due to a +bug in the pcloud API the required API does not support OAuth +authentication so we have to rely on user password authentication for +it. + +Properties: + +- Config: username +- Env Var: RCLONE_PCLOUD_USERNAME +- Type: string +- Required: false + +--pcloud-password + +Your pcloud password. + +NB Input to this must be obscured - see rclone obscure. + +Properties: + +- Config: password +- Env Var: RCLONE_PCLOUD_PASSWORD +- Type: string +- Required: false + premiumize.me Paths are specified as remote:path @@ -31645,7 +34869,7 @@ strings. Standard options -Here are the standard options specific to premiumizeme (premiumize.me). +Here are the Standard options specific to premiumizeme (premiumize.me). --premiumizeme-api-key @@ -31662,7 +34886,7 @@ Properties: Advanced options -Here are the advanced options specific to premiumizeme (premiumize.me). +Here are the Advanced options specific to premiumizeme (premiumize.me). --premiumizeme-encoding @@ -31792,7 +35016,7 @@ strings. Advanced options -Here are the advanced options specific to putio (Put.io). +Here are the Advanced options specific to putio (Put.io). --putio-encoding @@ -31807,6 +35031,15 @@ Properties: - Type: MultiEncoder - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot +Limitations + +put.io has rate limiting. When you hit a limit, rclone automatically +retries after waiting the amount of time requested by the server. 
+
+If you want to avoid ever hitting these limits, you may use the
+--tpslimit flag with a low number. Note that the imposed limits may be
+different for different operations, and may change over time.
+
 Seafile

 This is a backend for the Seafile storage service: - It works with both

@@ -32065,7 +35298,7 @@ haven't been tested and might not work properly.

 Standard options

-Here are the standard options specific to seafile (seafile).
+Here are the Standard options specific to seafile (seafile).

 --seafile-url

@@ -32157,7 +35390,7 @@ Properties:

 Advanced options

-Here are the advanced options specific to seafile (seafile).
+Here are the Advanced options specific to seafile (seafile).

 --seafile-create-library

@@ -32189,7 +35422,7 @@ SFTP is the Secure (or SSH) File Transfer Protocol.

 The SFTP backend can be used with a number of different providers:

-- C14
+- Hetzner Storage Box
 - rsync.net

 SFTP runs over SSH v2 and is installed as standard with most modern SSH

@@ -32203,8 +35436,11 @@ config (i.e /home/sftpuser). However, rclone lsd remote:/ would list the
 root directory for remote machine (i.e. /)

 Note that some SFTP servers will need the leading / - Synology is a good
-example of this. rsync.net, on the other hand, requires users to OMIT
-the leading /.
+example of this. rsync.net and Hetzner, on the other hand, require
+users to OMIT the leading /.
+
+Note that by default rclone will try to execute shell commands on the
+server; see shell access considerations.

 Configuration

@@ -32223,7 +35459,7 @@ This will guide you through an interactive setup process.

     Type of storage to configure.
     Choose a number from below, or type in your own value
     [snip]
-    XX / SSH/SFTP Connection
+    XX / SSH/SFTP
        \ "sftp"
     [snip]
     Storage> sftp

@@ -32417,6 +35653,114 @@ And then at the end of the session

 These commands can be used in scripts of course.

+Shell access
+
+Some functionality of the SFTP backend relies on remote shell access,
+and the possibility to execute commands.
This includes checksum, and in +some cases also about. The shell commands that must be executed may be +different on different type of shells, and also quoting/escaping of file +path arguments containing special characters may be different. Rclone +therefore needs to know what type of shell it is, and if shell access is +available at all. + +Most servers run on some version of Unix, and then a basic Unix shell +can be assumed, without further distinction. Windows 10, Server 2019, +and later can also run a SSH server, which is a port of OpenSSH (see +official installation guide). On a Windows server the shell handling is +different: Although it can also be set up to use a Unix type shell, e.g. +Cygwin bash, the default is to use Windows Command Prompt (cmd.exe), and +PowerShell is a recommended alternative. All of these have bahave +differently, which rclone must handle. + +Rclone tries to auto-detect what type of shell is used on the server, +first time you access the SFTP remote. If a remote shell session is +successfully created, it will look for indications that it is CMD or +PowerShell, with fall-back to Unix if not something else is detected. If +unable to even create a remote shell session, then shell command +execution will be disabled entirely. The result is stored in the SFTP +remote configuration, in option shell_type, so that the auto-detection +only have to be performed once. If you manually set a value for this +option before first run, the auto-detection will be skipped, and if you +set a different value later this will override any existing. Value none +can be set to avoid any attempts at executing shell commands, e.g. if +this is not allowed on the server. + +When the server is rclone serve sftp, the rclone SFTP remote will detect +this as a Unix type shell - even if it is running on Windows. This +server does not actually have a shell, but it accepts input commands +matching the specific ones that the SFTP backend relies on for Unix +shells, e.g. 
md5sum and df. It also handles the string escape rules used +for Unix shells. Treating it as a Unix type shell from an SFTP remote will +therefore always be correct, and support all features. + +Shell access considerations + +The shell type auto-detection logic, described above, means that by +default rclone will try to run a shell command the first time a new sftp +remote is accessed. If you configure an sftp remote without a config +file, e.g. an on the fly remote, rclone will have nowhere to store the +result, and it will re-run the command on every access. To avoid this +you should explicitly set the shell_type option to the correct value, +or to none if you want to prevent rclone from executing any remote shell +commands. + +It is also important to note that, since the shell type decides how +quoting and escaping of file paths used as command-line arguments are +performed, configuring the wrong shell type may leave you exposed to +command injection exploits. Make sure to confirm the auto-detected shell +type, or explicitly set the shell type you know is correct, or disable +shell access until you are sure. + +Checksum + +SFTP does not natively support checksums (file hash), but rclone is able +to use checksumming if the same login has shell access, and can execute +remote commands. If there is a command that can calculate compatible +checksums on the remote system, rclone can then be configured to execute +this whenever a checksum is needed, and read back the results. Currently +MD5 and SHA-1 are supported. + +Normally this requires an external utility to be available on the +server. By default rclone will try the commands md5sum, md5 and +rclone md5sum for MD5 checksums, and the first one found usable will be +picked. The same applies to the sha1sum, sha1 and rclone sha1sum commands for SHA-1 +checksums. These utilities normally need to be in the remote's PATH to +be found. + +In some cases the shell itself is capable of calculating checksums.
+PowerShell is an example of such a shell. If rclone detects that the +remote shell is PowerShell, which means it most probably is a Windows +OpenSSH server, rclone will use a predefined script block to produce the +checksums when no external checksum commands are found (see shell +access). This assumes PowerShell version 4.0 or newer. + +The options md5sum_command and sha1sum_command can be used to customize the +command to be executed for calculation of checksums. You can for example +set a specific path to where the md5sum and sha1sum executables are located, +or use them to specify some other tools that print checksums in a +compatible format. The value can include command-line arguments, or even +shell script blocks as with PowerShell. Rclone has subcommands md5sum +and sha1sum that use a compatible format, which means if you have an +rclone executable on the server it can be used. As mentioned above, they +will be automatically picked up if found in PATH, but if not you can set +something like /path/to/rclone md5sum as the value of the option +md5sum_command to make sure a specific executable is used. + +Remote checksumming is recommended and enabled by default. The first time +rclone uses an SFTP remote, if the options md5sum_command or sha1sum_command +are not set, it will check if any of the default commands for each of +them, as described above, can be used. The result will be saved in the +remote configuration, so next time it will use the same. The value none will +be set if none of the default commands could be used for a specific +algorithm, and this algorithm will not be supported by the remote. + +Disabling checksumming may be required if you are connecting to SFTP +servers which are not under your control, and to which the execution of +remote shell commands is prohibited. Set the configuration option +disable_hashcheck to true to disable checksumming entirely, or set +shell_type to none to disable all functionality based on remote shell +command execution.
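A custom checksum command configured via md5sum_command must print its result in the same two-column format as the md5sum utility - the hex digest followed by the file name - since that is what rclone reads back. A minimal Python sketch of a command producing compatible output (hypothetical, purely to illustrate the expected format):

```python
import hashlib
import sys

def md5sum_line(path: str) -> str:
    """Return a line in the same format md5sum prints:
    '<32 hex chars>  <path>' - the format rclone parses."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        # Read in 64 KiB blocks so large files don't load into memory.
        for block in iter(lambda: f.read(64 * 1024), b""):
            h.update(block)
    return f"{h.hexdigest()}  {path}"

if __name__ == "__main__":
    for name in sys.argv[1:]:
        print(md5sum_line(name))
```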
+ Modified time Modified times are stored on the server to 1 second precision. @@ -32429,9 +35773,24 @@ mod_sftp). If you are using one of these servers, you can set the option set_modtime = false in your RClone backend configuration to disable this behaviour. +About command + +The about command returns the total space, free space, and used space on +the remote for the disk of the specified path on the remote or, if not +set, the disk of the root on the remote. + +SFTP usually supports the about command, but it depends on the server. +If the server implements the vendor-specific VFS statistics extension, +which is normally the case with OpenSSH instances, it will be used. If +not, but the same login has access to a Unix shell, where the df command +is available (e.g. in the remote's PATH), then this will be used +instead. If the server shell is PowerShell, probably with a Windows +OpenSSH server, rclone will use a built-in shell command (see shell +access). If none of the above is applicable, about will fail. + Standard options -Here are the standard options specific to sftp (SSH/SFTP Connection). +Here are the Standard options specific to sftp (SSH/SFTP). --sftp-host @@ -32607,7 +35966,7 @@ Properties: Advanced options -Here are the advanced options specific to sftp (SSH/SFTP Connection). +Here are the Advanced options specific to sftp (SSH/SFTP). --sftp-known-hosts-file @@ -32644,16 +36003,16 @@ Properties: --sftp-path-override -Override path used by SSH connection. +Override path used by SSH shell commands. This allows checksum calculation when SFTP and SSH paths are different. This issue affects among others Synology NAS boxes. -Shared folders can be found in directories representing volumes +E.g. if shared folders can be found in directories representing volumes: rclone sync /home/local/directory remote:/directory --sftp-path-override /volume2/directory -Home directory can be found in a shared folder called "home" +E.g. 
if home directory can be found in a shared folder called "home": rclone sync /home/local/directory remote:/home/directory --sftp-path-override /volume1/homes/USER/directory @@ -32675,6 +36034,28 @@ Properties: - Type: bool - Default: true +--sftp-shell-type + +The type of SSH shell on remote server, if any. + +Leave blank for autodetect. + +Properties: + +- Config: shell_type +- Env Var: RCLONE_SFTP_SHELL_TYPE +- Type: string +- Required: false +- Examples: + - "none" + - No shell access + - "unix" + - Unix shell + - "powershell" + - PowerShell + - "cmd" + - Windows Command Prompt + --sftp-md5sum-command The command used to read md5 hashes. @@ -32812,25 +36193,75 @@ Properties: - Type: Duration - Default: 1m0s +--sftp-chunk-size + +Upload and download chunk size. + +This controls the maximum packet size used in the SFTP protocol. The RFC +limits this to 32768 bytes (32k), however a lot of servers support +larger sizes and setting it larger will increase transfer speed +dramatically on high latency links. + +Only use a setting higher than 32k if you always connect to the same +server or after sufficiently broad testing. + +For example using the value of 252k with OpenSSH works well with its +maximum packet size of 256k. + +If you get the error "failed to send packet header: EOF" when copying a +large file, try lowering this number. + +Properties: + +- Config: chunk_size +- Env Var: RCLONE_SFTP_CHUNK_SIZE +- Type: SizeSuffix +- Default: 32Ki + +--sftp-concurrency + +The maximum number of outstanding requests for one file + +This controls the maximum number of outstanding requests for one file. +Increasing it will increase throughput on high latency links at the cost +of using more memory. 
+ +Properties: + +- Config: concurrency +- Env Var: RCLONE_SFTP_CONCURRENCY +- Type: int +- Default: 64 + +--sftp-set-env + +Environment variables to pass to sftp and commands + +Set environment variables in the form: + + VAR=value + +to be passed to the sftp client and to any commands run (eg md5sum). + +Pass multiple variables space separated, eg + + VAR1=value VAR2=value + +and pass variables with spaces in in quotes, eg + + "VAR3=value with space" "VAR4=value with space" VAR5=nospacehere + +Properties: + +- Config: set_env +- Env Var: RCLONE_SFTP_SET_ENV +- Type: SpaceSepList +- Default: + Limitations -SFTP supports checksums if the same login has shell access and md5sum or -sha1sum as well as echo are in the remote's PATH. This remote -checksumming (file hashing) is recommended and enabled by default. -Disabling the checksumming may be required if you are connecting to SFTP -servers which are not under your control, and to which the execution of -remote commands is prohibited. Set the configuration option -disable_hashcheck to true to disable checksumming. - -SFTP also supports about if the same login has shell access and df are -in the remote's PATH. about will return the total space, free space, and -used space on the remote for the disk of the specified path on the -remote or, if not set, the disk of the root on the remote. about will -fail if it does not have shell access or if df is not in the remote's -PATH. - -Note that some SFTP servers (e.g. Synology) the paths are different for -SSH and SFTP so the hashes can't be calculated properly. For them using +On some SFTP servers (e.g. Synology) the paths are different for SSH and +SFTP so the hashes can't be calculated properly. For them using disable_hashcheck is a good idea. The only ssh agent supported under Windows is Putty's pageant. @@ -32844,22 +36275,22 @@ found in this paper. SFTP isn't supported under plan9 until this issue is fixed. 
Note that since SFTP isn't HTTP based the following flags don't work -with it: --dump-headers, --dump-bodies, --dump-auth +with it: --dump-headers, --dump-bodies, --dump-auth. Note that --timeout and --contimeout are both supported. -C14 - -C14 is supported through the SFTP backend. - -See C14's documentation - rsync.net rsync.net is supported through the SFTP backend. See rsync.net's documentation of rclone examples. +Hetzner Storage Box + +Hetzner Storage Boxes are supported through the SFTP backend on port 23. + +See Hetzner's documentation for details + Storj Storj is an encrypted, secure, and cost-effective object storage service @@ -33065,7 +36496,7 @@ Setup with API key and passphrase Standard options -Here are the standard options specific to storj (Storj Decentralized +Here are the Standard options specific to storj (Storj Decentralized Cloud Storage). --storj-provider @@ -33269,7 +36700,7 @@ without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. -See List of backends that do not support rclone about See rclone about +See List of backends that do not support rclone about and rclone about Known issues @@ -33404,7 +36835,7 @@ deleted straight away. Standard options -Here are the standard options specific to sugarsync (Sugarsync). +Here are the Standard options specific to sugarsync (Sugarsync). --sugarsync-app-id @@ -33459,7 +36890,7 @@ Properties: Advanced options -Here are the advanced options specific to sugarsync (Sugarsync). +Here are the Advanced options specific to sugarsync (Sugarsync). --sugarsync-refresh-token @@ -33558,7 +36989,7 @@ rclone about is not supported by the SugarSync backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. 
-See List of backends that do not support rclone about See rclone about +See List of backends that do not support rclone about and rclone about Tardigrade @@ -33663,7 +37094,7 @@ strings. Standard options -Here are the standard options specific to uptobox (Uptobox). +Here are the Standard options specific to uptobox (Uptobox). --uptobox-access-token @@ -33680,7 +37111,7 @@ Properties: Advanced options -Here are the advanced options specific to uptobox (Uptobox). +Here are the Advanced options specific to uptobox (Uptobox). --uptobox-encoding @@ -33950,7 +37381,7 @@ much larger latency of remote file systems. Standard options -Here are the standard options specific to union (Union merges the +Here are the Standard options specific to union (Union merges the contents of several upstream fs). --union-upstreams @@ -34013,6 +37444,31 @@ Properties: - Type: int - Default: 120 +Advanced options + +Here are the Advanced options specific to union (Union merges the +contents of several upstream fs). + +--union-min-free-space + +Minimum viable free space for lfs/eplfs policies. + +If a remote has less than this much free space then it won't be +considered for use in lfs or eplfs policies. + +Properties: + +- Config: min_free_space +- Env Var: RCLONE_UNION_MIN_FREE_SPACE +- Type: SizeSuffix +- Default: 1Gi + +Metadata + +Any metadata supported by the underlying remote is read and written. + +See the metadata docs for more info. + WebDAV Paths are specified as remote:path @@ -34040,7 +37496,7 @@ This will guide you through an interactive setup process: Type of storage to configure. 
Choose a number from below, or type in your own value [snip] - XX / Webdav + XX / WebDAV \ "webdav" [snip] Storage> webdav @@ -34049,7 +37505,7 @@ This will guide you through an interactive setup process: 1 / Connect to example.com \ "https://example.com" url> https://example.com/remote.php/webdav/ - Name of the Webdav site/service/software you are using + Name of the WebDAV site/service/software you are using Choose a number from below, or type in your own value 1 / Nextcloud \ "nextcloud" @@ -34116,7 +37572,7 @@ objects, or only on objects which had a hash uploaded with them. Standard options -Here are the standard options specific to webdav (Webdav). +Here are the Standard options specific to webdav (WebDAV). --webdav-url @@ -34133,7 +37589,7 @@ Properties: --webdav-vendor -Name of the Webdav site/service/software you are using. +Name of the WebDAV site/service/software you are using. Properties: @@ -34194,7 +37650,7 @@ Properties: Advanced options -Here are the advanced options specific to webdav (Webdav). +Here are the Advanced options specific to webdav (WebDAV). --webdav-bearer-token-command @@ -34521,7 +37977,7 @@ strings. Standard options -Here are the standard options specific to yandex (Yandex Disk). +Here are the Standard options specific to yandex (Yandex Disk). --yandex-client-id @@ -34551,7 +38007,7 @@ Properties: Advanced options -Here are the advanced options specific to yandex (Yandex Disk). +Here are the Advanced options specific to yandex (Yandex Disk). --yandex-token @@ -34753,7 +38209,7 @@ removed from filenames during upload. Standard options -Here are the standard options specific to zoho (Zoho). +Here are the Standard options specific to zoho (Zoho). --zoho-client-id @@ -34801,12 +38257,16 @@ Properties: - Europe - "in" - India + - "jp" + - Japan + - "com.cn" + - China - "com.au" - Australia Advanced options -Here are the advanced options specific to zoho (Zoho). +Here are the Advanced options specific to zoho (Zoho). 
--zoho-token @@ -34858,6 +38318,22 @@ Properties: - Type: MultiEncoder - Default: Del,Ctl,InvalidUtf8 +Setting up your own client_id + +For Zoho we advise you to set up your own client_id. To do so you have +to complete the following steps. + +1. Log in to the Zoho API Console + +2. Create a new client of type "Server-based Application". The name and + website don't matter, but you must add the redirect URL + http://localhost:53682/. + +3. Once the client is created, you can go to the settings tab and + enable it in other regions. + +The client id and client secret can now be used with rclone. + Local Filesystem Local paths are specified as normal filesystem paths, e.g. @@ -35156,7 +38632,7 @@ it isn't supported (e.g. Windows) it will be ignored. Advanced options -Here are the advanced options specific to local (Local Disk). +Here are the Advanced options specific to local (Local Disk). --local-nounc @@ -35166,8 +38642,8 @@ Properties: - Config: nounc - Env Var: RCLONE_LOCAL_NOUNC -- Type: string -- Required: false +- Type: bool +- Default: false - Examples: - "true" - Disables long file names. @@ -35390,6 +38866,47 @@ Properties: - Type: MultiEncoder - Default: Slash,Dot +Metadata + +Depending on which OS is in use the local backend may return only some +of the system metadata. Setting system metadata is supported on all OSes +but setting user metadata is only supported on linux, freebsd, netbsd, +macOS and Solaris. It is not supported on Windows yet (see +pkg/attrs#47). + +User metadata is stored as extended attributes (which may not be +supported by all file systems) under the "user.*" prefix. + +Here are the possible system metadata items for the local backend. 
+ + --------------------------------------------------------------------------------------------------- + Name Help Type Example Read Only + ----------- -------------- ------------- ------------------------------------- -------------------- + atime Time of last RFC 3339 2006-01-02T15:04:05.999999999Z07:00 N + access + + btime Time of file RFC 3339 2006-01-02T15:04:05.999999999Z07:00 N + birth + (creation) + + gid Group ID of decimal 500 N + owner number + + mode File type and octal, unix 0100664 N + mode style + + mtime Time of last RFC 3339 2006-01-02T15:04:05.999999999Z07:00 N + modification + + rdev Device ID (if hexadecimal 1abc N + special file) + + uid User ID of decimal 500 N + owner number + --------------------------------------------------------------------------------------------------- + +See the metadata docs for more info. + Backend commands Here are the commands specific to the local backend. @@ -35400,8 +38917,8 @@ Run them with The help below will explain what arguments each command takes. -See the "rclone backend" command for more info on how to pass options -and arguments. +See the backend command for more info on how to pass options and +arguments. These can be run on a running backend using the rc command backend/command. 
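As a sketch of the rc route, the JSON body for a backend/command call can be built like this (Python used only for illustration; it assumes a running rclone rc server if you actually POST it, and the parameter names command, fs, opt and arg follow the rc documentation):

```python
import json

def backend_command_body(command, fs, opt=None, arg=None):
    """Build the JSON body for the rc endpoint backend/command -
    the remote-control equivalent of:
    rclone backend <command> <fs> [-o key=value] [args]"""
    body = {"command": command, "fs": fs}
    if opt:
        body["opt"] = opt   # map of -o key=value options
    if arg:
        body["arg"] = arg   # list of positional arguments
    return json.dumps(body)

# The local backend's "noop" test command, echoing its input back:
print(backend_command_body("noop", "local:", opt={"echo": "true"}, arg=["path1"]))
```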
@@ -35422,6 +38939,276 @@ Options: Changelog +v1.59.0 - 2022-07-09 + +See commits + +- New backends + - Combine multiple remotes in one directory tree (Nick Craig-Wood) + - Hidrive (Ovidiu Victor Tatar) + - Internet Archive (Lesmiscore (Naoya Ozaki)) + - New S3 providers + - ArvanCloud AOS (ehsantdy) + - Cloudflare R2 (Nick Craig-Wood) + - Huawei OBS (m00594701) + - IDrive e2 (vyloy) +- New commands + - test makefile: Create a single file for testing (Nick + Craig-Wood) +- New Features + - Metadata framework to read and write system and user metadata on + backends (Nick Craig-Wood) + - Implemented initially for local, s3 and internetarchive + backends + - --metadata/-M flag to control whether metadata is copied + - --metadata-set flag to specify metadata for uploads + - Thanks to Manz Solutions for sponsoring this work. + - build + - Update to go1.18 and make go1.16 the minimum required + version (Nick Craig-Wood) + - Update android go build to 1.18.x and NDK to 23.1.7779620 + (Nick Craig-Wood) + - All windows binaries now no longer CGO (Nick Craig-Wood) + - Add linux/arm/v6 to docker images (Nick Craig-Wood) + - A huge number of fixes found with staticcheck (albertony) + - Configurable version suffix independent of version number + (albertony) + - check: Implement --no-traverse and --no-unicode-normalization + (Nick Craig-Wood) + - config: Readability improvements (albertony) + - copyurl: Add --header-filename to honor the HTTP header filename + directive (J-P Treen) + - filter: Allow multiple --exclude-if-present flags (albertony) + - fshttp: Add --disable-http-keep-alives to disable HTTP Keep + Alives (Nick Craig-Wood) + - install.sh + - Set the modes on the files and/or directories on macOS + (Michael C Tiernan - MIT-Research Computing Project) + - Pre verify sudo authorization -v before calling curl. 
+ (Michael C Tiernan - MIT-Research Computing Project) + - lib/encoder: Add Semicolon encoding (Nick Craig-Wood) + - lsf: Add metadata support with M flag (Nick Craig-Wood) + - lsjson: Add --metadata/-M flag (Nick Craig-Wood) + - ncdu + - Implement multi selection (CrossR) + - Replace termbox with tcell's termbox wrapper (eNV25) + - Display correct path in delete confirmation dialog (Roberto + Ricci) + - operations + - Speed up hash checking by aborting the other hash if first + returns nothing (Nick Craig-Wood) + - Use correct src/dst in some log messages (zzr93) + - rcat: Check checksums by default like copy does (Nick + Craig-Wood) + - selfupdate: Replace deprecated x/crypto/openpgp package with + ProtonMail/go-crypto (albertony) + - serve ftp: Check --passive-port arguments are correct (Nick + Craig-Wood) + - size: Warn about inaccurate results when objects with unknown + size (albertony) + - sync: Overlap check is now filter-sensitive so --backup-dir can + be in the root provided it is filtered (Nick) + - test info: Check file name lengths using 1,2,3,4 byte unicode + characters (Nick Craig-Wood) + - test makefile(s): --sparse, --zero, --pattern, --ascii, + --chargen flags to control file contents (Nick Craig-Wood) + - Make sure we call the Shutdown method on backends (Martin + Czygan) +- Bug Fixes + - accounting: Fix unknown length file transfers counting 3 + transfers each (buda) + - ncdu: Fix issue where dir size is summed when file sizes are -1 + (albertony) + - sync/copy/move + - Fix --fast-list --create-empty-src-dirs and --exclude (Nick + Craig-Wood) + - Fix --max-duration and --cutoff-mode soft (Nick Craig-Wood) + - Fix fs cache unpin (Martin Czygan) + - Set proper exit code for errors that are not low-level retried + (e.g. 
size/timestamp changing) (albertony) +- Mount + - Support windows/arm64 (may still be problems - see #5828) (Nick + Craig-Wood) + - Log IO errors at ERROR level (Nick Craig-Wood) + - Ignore _netdev mount argument (Hugal31) +- VFS + - Add --vfs-fast-fingerprint for less accurate but faster + fingerprints (Nick Craig-Wood) + - Add --vfs-disk-space-total-size option to manually set the total + disk space (Claudio Maradonna) + - vfscache: Fix fatal error: sync: unlock of unlocked mutex error + (Nick Craig-Wood) +- Local + - Fix parsing of --local-nounc flag (Nick Craig-Wood) + - Add Metadata support (Nick Craig-Wood) +- Crypt + - Support metadata (Nick Craig-Wood) +- Azure Blob + - Calculate Chunksize/blocksize to stay below maxUploadParts + (Leroy van Logchem) + - Use chunksize lib to determine chunksize dynamically (Derek + Battams) + - Case insensitive access tier (Rob Pickerill) + - Allow remote emulator (azurite) (Lorenzo Maiorfi) +- B2 + - Add --b2-version-at flag to show file versions at time specified + (SwazRGB) + - Use chunksize lib to determine chunksize dynamically (Derek + Battams) +- Chunker + - Mark as not supporting metadata (Nick Craig-Wood) +- Compress + - Support metadata (Nick Craig-Wood) +- Drive + - Make backend config -o config add a combined AllDrives: remote + (Nick Craig-Wood) + - Make --drive-shared-with-me work with shared drives (Nick + Craig-Wood) + - Add --drive-resource-key for accessing link-shared files (Nick + Craig-Wood) + - Add backend commands exportformats and importformats for + debugging (Nick Craig-Wood) + - Fix 404 errors on copy/server side copy objects from public + folder (Nick Craig-Wood) + - Update Internal OAuth consent screen docs (Phil Shackleton) + - Moved root_folder_id to advanced section (Abhiraj) +- Dropbox + - Migrate from deprecated api (m8rge) + - Add logs to show when poll interval limits are exceeded (Nick + Craig-Wood) + - Fix nil pointer exception on dropbox impersonate user not found + (Nick Craig-Wood) +- 
Fichier + - Parse api error codes and them accordingly (buengese) +- FTP + - Add support for disable_utf8 option (Jason Zheng) + - Revert to upstream github.com/jlaffaye/ftp from our fork (Nick + Craig-Wood) +- Google Cloud Storage + - Add --gcs-no-check-bucket to minimise transactions and perms + (Nick Gooding) + - Add --gcs-decompress flag to decompress gzip-encoded files (Nick + Craig-Wood) + - by default these will be downloaded compressed (which + previously failed) +- Hasher + - Support metadata (Nick Craig-Wood) +- HTTP + - Fix missing response when using custom auth handler (albertony) +- Jottacloud + - Add support for upload to custom device and mountpoint + (albertony) + - Always store username in config and use it to avoid initial API + request (albertony) + - Fix issue with server-side copy when destination is in trash + (albertony) + - Fix listing output of remote with special characters (albertony) +- Mailru + - Fix timeout by using int instead of time.Duration for keeping + number of seconds (albertony) +- Mega + - Document using MEGAcmd to help with login failures (Art M. 
+ Gallagher) +- Onedrive + - Implement --poll-interval for onedrive (Hugo Laloge) + - Add access scopes option (Sven Gerber) +- Opendrive + - Resolve lag and truncate bugs (Scott Grimes) +- Pcloud + - Fix about with no free space left (buengese) + - Fix cleanup (buengese) +- S3 + - Use PUT Object instead of presigned URLs to upload single part + objects (Nick Craig-Wood) + - Backend restore command to skip non-GLACIER objects (Vincent + Murphy) + - Use chunksize lib to determine chunksize dynamically (Derek + Battams) + - Retry RequestTimeout errors (Nick Craig-Wood) + - Implement reading and writing of metadata (Nick Craig-Wood) +- SFTP + - Add support for about and hashsum on windows server (albertony) + - Use vendor-specific VFS statistics extension for about if + available (albertony) + - Add --sftp-chunk-size to control packets sizes for high latency + links (Nick Craig-Wood) + - Add --sftp-concurrency to improve high latency transfers (Nick + Craig-Wood) + - Add --sftp-set-env option to set environment variables (Nick + Craig-Wood) + - Add Hetzner Storage Boxes to supported sftp backends (Anthrazz) +- Storj + - Fix put which lead to the file being unreadable when using mount + (Erik van Velzen) +- Union + - Add min_free_space option for lfs/eplfs policies (Nick + Craig-Wood) + - Fix uploading files to union of all bucket based remotes (Nick + Craig-Wood) + - Fix get free space for remotes which don't support it (Nick + Craig-Wood) + - Fix eplus policy to select correct entry for existing files + (Nick Craig-Wood) + - Support metadata (Nick Craig-Wood) +- Uptobox + - Fix root path handling (buengese) +- WebDAV + - Add SharePoint in other specific regions support (Noah Hsu) +- Yandex + - Handle api error on server-side move (albertony) +- Zoho + - Add Japan and China regions (buengese) + +v1.58.1 - 2022-04-29 + +See commits + +- Bug Fixes + - build: Update github.com/billziss-gh to github.com/winfsp (Nick + Craig-Wood) + - filter: Fix timezone of 
--min-age/-max-age from UTC to local as + documented (Nick Craig-Wood) + - rc/js: Correct RC method names (Sơn Trần-Nguyễn) + - docs + - Fix some links to command pages (albertony) + - Add --multi-thread-streams note to --transfers. (Zsolt Ero) +- Mount + - Fix --devname and fusermount: unknown option 'fsname' when + mounting via rc (Nick Craig-Wood) +- VFS + - Remove wording which suggests VFS is only for mounting (Nick + Craig-Wood) +- Dropbox + - Fix retries of multipart uploads with incorrect_offset error + (Nick Craig-Wood) +- Google Cloud Storage + - Use the s3 pacer to speed up transactions (Nick Craig-Wood) + - pacer: Default the Google pacer to a burst of 100 to fix gcs + pacing (Nick Craig-Wood) +- Jottacloud + - Fix scope in token request (albertony) +- Netstorage + - Fix unescaped HTML in documentation (Nick Craig-Wood) + - Make levels of headings consistent (Nick Craig-Wood) + - Add support contacts to netstorage doc (Nil Alexandrov) +- Onedrive + - Note that sharepoint also changes web files (.html, .aspx) (GH) +- Putio + - Handle rate limit errors (Berkan Teber) + - Fix multithread download and other ranged requests (rafma0) +- S3 + - Add ChinaMobile EOS to provider list (GuoXingbin) + - Sync providers in config description with providers (Nick + Craig-Wood) +- SFTP + - Fix OpenSSH 8.8+ RSA keys incompatibility (KARBOWSKI Piotr) + - Note that Scaleway C14 is deprecating SFTP in favor of S3 + (Adrien Rey-Jarthon) +- Storj + - Fix bucket creation on Move (Nick Craig-Wood) +- WebDAV + - Don't override Referer if user sets it (Nick Craig-Wood) + v1.58.0 - 2022-03-18 See commits @@ -40460,7 +44247,7 @@ node running rclone would need to have lots of bandwidth. The syncs would be incremental (on a file by file basis). -Eg +e.g. rclone sync -i drive:Folder s3:bucket @@ -40543,7 +44330,7 @@ e.g. export no_proxy=localhost,127.0.0.0/8,my.host.name export NO_PROXY=$no_proxy -Note that the ftp backend does not support ftp_proxy yet. 
+Note that the FTP backend does not support ftp_proxy yet. Rclone gives x509: failed to load system roots and no roots provided error @@ -41251,6 +45038,57 @@ email addresses removed from here need to be addeed to bin/.ignore-emails to mak - Vincent Murphy vdm@vdm.ie - ctrl-q 34975747+ctrl-q@users.noreply.github.com - Nil Alexandrov nalexand@akamai.com +- GuoXingbin 101376330+guoxingbin@users.noreply.github.com +- Berkan Teber berkan@berkanteber.com +- Tobias Klauser tklauser@distanz.ch +- KARBOWSKI Piotr piotr.karbowski@gmail.com +- GH geeklihui@foxmail.com +- rafma0 int.main@gmail.com +- Adrien Rey-Jarthon jobs@adrienjarthon.com +- Nick Gooding 73336146+nickgooding@users.noreply.github.com +- Leroy van Logchem lr.vanlogchem@gmail.com +- Zsolt Ero zsolt.ero@gmail.com +- Lesmiscore nao20010128@gmail.com +- ehsantdy ehsan.tadayon@arvancloud.com +- SwazRGB 65694696+swazrgb@users.noreply.github.com +- Mateusz Puczyński mati6095@gmail.com +- Michael C Tiernan - MIT-Research Computing Project mtiernan@mit.edu +- Kaspian 34658474+KaspianDev@users.noreply.github.com +- Werner EvilOlaf@users.noreply.github.com +- Hugal31 hugo.laloge@gmail.com +- Christian Galo 36752715+cgalo5758@users.noreply.github.com +- Erik van Velzen erik@evanv.nl +- Derek Battams derek@battams.ca +- SimonLiu simonliu009@users.noreply.github.com +- Hugo Laloge hla@lescompanions.com +- Mr-Kanister 68117355+Mr-Kanister@users.noreply.github.com +- Rob Pickerill r.pickerill@gmail.com +- Andrey to.merge@gmail.com +- Eric Wolf 19wolf@gmail.com +- Nick nick.naumann@mailbox.tu-dresden.de +- Jason Zheng jszheng17@gmail.com +- Matthew Vernon mvernon@wikimedia.org +- Noah Hsu i@nn.ci +- m00594701 mengpengbo@huawei.com +- Art M. 
Gallagher artmg50@gmail.com +- Sven Gerber 49589423+svengerber@users.noreply.github.com +- CrossR r.cross@lancaster.ac.uk +- Maciej Radzikowski maciej@radzikowski.com.pl +- Scott Grimes scott.grimes@spaciq.com +- Phil Shackleton 71221528+philshacks@users.noreply.github.com +- eNV25 env252525@gmail.com +- Caleb inventor96@users.noreply.github.com +- J-P Treen jp@wraptious.com +- Martin Czygan 53705+miku@users.noreply.github.com +- buda sandrojijavadze@protonmail.com +- mirekphd 36706320+mirekphd@users.noreply.github.com +- vyloy vyloy@qq.com +- Anthrazz 25553648+Anthrazz@users.noreply.github.com +- zzr93 34027824+zzr93@users.noreply.github.com +- Paul Norman penorman@mac.com +- Lorenzo Maiorfi maiorfi@gmail.com +- Claudio Maradonna penguyman@stronzi.org +- Ovidiu Victor Tatar ovi.tatar@googlemail.com Contact the rclone project diff --git a/docs/content/alias.md b/docs/content/alias.md index 69278aa3b..aa041f6f4 100644 --- a/docs/content/alias.md +++ b/docs/content/alias.md @@ -91,7 +91,7 @@ Copy another local directory to the alias directory called source {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/alias/alias.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to alias (Alias for an existing remote). +Here are the Standard options specific to alias (Alias for an existing remote). #### --alias-remote diff --git a/docs/content/amazonclouddrive.md b/docs/content/amazonclouddrive.md index 478caa5ca..74ff53df6 100644 --- a/docs/content/amazonclouddrive.md +++ b/docs/content/amazonclouddrive.md @@ -160,7 +160,7 @@ rclone it will take you to an `amazon.com` page to log in. Your {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/amazonclouddrive/amazonclouddrive.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to amazon cloud drive (Amazon Drive). 
+Here are the Standard options specific to amazon cloud drive (Amazon Drive). #### --acd-client-id @@ -190,7 +190,7 @@ Properties: ### Advanced options -Here are the advanced options specific to amazon cloud drive (Amazon Drive). +Here are the Advanced options specific to amazon cloud drive (Amazon Drive). #### --acd-token diff --git a/docs/content/azureblob.md b/docs/content/azureblob.md index fa2205ae5..b9841e0e3 100644 --- a/docs/content/azureblob.md +++ b/docs/content/azureblob.md @@ -158,7 +158,7 @@ untrusted environment such as a CI build server. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/azureblob/azureblob.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to azureblob (Microsoft Azure Blob Storage). +Here are the Standard options specific to azureblob (Microsoft Azure Blob Storage). #### --azureblob-account @@ -255,7 +255,7 @@ Properties: ### Advanced options -Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage). +Here are the Advanced options specific to azureblob (Microsoft Azure Blob Storage). #### --azureblob-msi-object-id diff --git a/docs/content/b2.md b/docs/content/b2.md index 1df9298d4..bbc8a17c6 100644 --- a/docs/content/b2.md +++ b/docs/content/b2.md @@ -328,7 +328,7 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/b2/b2.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to b2 (Backblaze B2). +Here are the Standard options specific to b2 (Backblaze B2). #### --b2-account @@ -365,7 +365,7 @@ Properties: ### Advanced options -Here are the advanced options specific to b2 (Backblaze B2). +Here are the Advanced options specific to b2 (Backblaze B2). 
#### --b2-endpoint @@ -415,6 +415,20 @@ Properties: - Type: bool - Default: false +#### --b2-version-at + +Show file versions as they were at the specified time. + +Note that when using this no file write operations are permitted, +so you can't upload files or delete them. + +Properties: + +- Config: version_at +- Env Var: RCLONE_B2_VERSION_AT +- Type: Time +- Default: off + #### --b2-upload-cutoff Cutoff for switching to chunked upload. diff --git a/docs/content/box.md b/docs/content/box.md index d0cdb603d..9da20764a 100644 --- a/docs/content/box.md +++ b/docs/content/box.md @@ -267,7 +267,7 @@ the `root_folder_id` in the config. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/box/box.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to box (Box). +Here are the Standard options specific to box (Box). #### --box-client-id @@ -341,7 +341,7 @@ Properties: ### Advanced options -Here are the advanced options specific to box (Box). +Here are the Advanced options specific to box (Box). #### --box-token diff --git a/docs/content/cache.md b/docs/content/cache.md index 76e8fa4f0..592f9eef4 100644 --- a/docs/content/cache.md +++ b/docs/content/cache.md @@ -307,7 +307,7 @@ Params: {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/cache/cache.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to cache (Cache a remote). +Here are the Standard options specific to cache (Cache a remote). #### --cache-remote @@ -423,7 +423,7 @@ Properties: ### Advanced options -Here are the advanced options specific to cache (Cache a remote). +Here are the Advanced options specific to cache (Cache a remote). #### --cache-plex-token @@ -672,7 +672,7 @@ Run them with The help below will explain what arguments each command takes. 
-See [the "rclone backend" command](/commands/rclone_backend/) for more +See the [backend](/commands/rclone_backend/) command for more info on how to pass options and arguments. These can be run on a running backend using the rc command diff --git a/docs/content/changelog.md b/docs/content/changelog.md index fab1db463..1cb4255b6 100644 --- a/docs/content/changelog.md +++ b/docs/content/changelog.md @@ -5,6 +5,165 @@ description: "Rclone Changelog" # Changelog +## v1.59.0 - 2022-07-09 + +[See commits](https://github.com/rclone/rclone/compare/v1.58.0...v1.59.0) + +* New backends + * [Combine](/combine) multiple remotes in one directory tree (Nick Craig-Wood) + * [Hidrive](/hidrive/) (Ovidiu Victor Tatar) + * [Internet Archive](/internetarchive/) (Lesmiscore (Naoya Ozaki)) + * New S3 providers + * [ArvanCloud AOS](/s3/#arvan-cloud) (ehsantdy) + * [Cloudflare R2](/s3/#cloudflare-r2) (Nick Craig-Wood) + * [Huawei OBS](/s3/#huawei-obs) (m00594701) + * [IDrive e2](/s3/#idrive-e2) (vyloy) +* New commands + * [test makefile](/commands/rclone_test_makefile/): Create a single file for testing (Nick Craig-Wood) +* New Features + * [Metadata framework](/docs/#metadata) to read and write system and user metadata on backends (Nick Craig-Wood) + * Implemented initially for `local`, `s3` and `internetarchive` backends + * `--metadata`/`-M` flag to control whether metadata is copied + * `--metadata-set` flag to specify metadata for uploads + * Thanks to [Manz Solutions](https://manz-solutions.at/) for sponsoring this work. 
+    * build +        * Update to go1.18 and make go1.16 the minimum required version (Nick Craig-Wood) +        * Update android go build to 1.18.x and NDK to 23.1.7779620 (Nick Craig-Wood) +        * All windows binaries are now built without CGO (Nick Craig-Wood) +        * Add `linux/arm/v6` to docker images (Nick Craig-Wood) +        * A huge number of fixes found with [staticcheck](https://staticcheck.io/) (albertony) +        * Configurable version suffix independent of version number (albertony) +    * check: Implement `--no-traverse` and `--no-unicode-normalization` (Nick Craig-Wood) +    * config: Readability improvements (albertony) +    * copyurl: Add `--header-filename` to honor the HTTP header filename directive (J-P Treen) +    * filter: Allow multiple `--exclude-if-present` flags (albertony) +    * fshttp: Add `--disable-http-keep-alives` to disable HTTP Keep Alives (Nick Craig-Wood) +    * install.sh +        * Set the modes on the files and/or directories on macOS (Michael C Tiernan - MIT-Research Computing Project) +        * Pre-verify sudo authorization `-v` before calling curl.
(Michael C Tiernan - MIT-Research Computing Project) + * lib/encoder: Add Semicolon encoding (Nick Craig-Wood) + * lsf: Add metadata support with `M` flag (Nick Craig-Wood) + * lsjson: Add `--metadata`/`-M` flag (Nick Craig-Wood) + * ncdu + * Implement multi selection (CrossR) + * Replace termbox with tcell's termbox wrapper (eNV25) + * Display correct path in delete confirmation dialog (Roberto Ricci) + * operations + * Speed up hash checking by aborting the other hash if first returns nothing (Nick Craig-Wood) + * Use correct src/dst in some log messages (zzr93) + * rcat: Check checksums by default like copy does (Nick Craig-Wood) + * selfupdate: Replace deprecated `x/crypto/openpgp` package with `ProtonMail/go-crypto` (albertony) + * serve ftp: Check `--passive-port` arguments are correct (Nick Craig-Wood) + * size: Warn about inaccurate results when objects with unknown size (albertony) + * sync: Overlap check is now filter-sensitive so `--backup-dir` can be in the root provided it is filtered (Nick) + * test info: Check file name lengths using 1,2,3,4 byte unicode characters (Nick Craig-Wood) + * test makefile(s): `--sparse`, `--zero`, `--pattern`, `--ascii`, `--chargen` flags to control file contents (Nick Craig-Wood) + * Make sure we call the `Shutdown` method on backends (Martin Czygan) +* Bug Fixes + * accounting: Fix unknown length file transfers counting 3 transfers each (buda) + * ncdu: Fix issue where dir size is summed when file sizes are -1 (albertony) + * sync/copy/move + * Fix `--fast-list` `--create-empty-src-dirs` and `--exclude` (Nick Craig-Wood) + * Fix `--max-duration` and `--cutoff-mode soft` (Nick Craig-Wood) + * Fix fs cache unpin (Martin Czygan) + * Set proper exit code for errors that are not low-level retried (e.g. 
size/timestamp changing) (albertony) +* Mount + * Support `windows/arm64` (may still be problems - see [#5828](https://github.com/rclone/rclone/issues/5828)) (Nick Craig-Wood) + * Log IO errors at ERROR level (Nick Craig-Wood) + * Ignore `_netdev` mount argument (Hugal31) +* VFS + * Add `--vfs-fast-fingerprint` for less accurate but faster fingerprints (Nick Craig-Wood) + * Add `--vfs-disk-space-total-size` option to manually set the total disk space (Claudio Maradonna) + * vfscache: Fix fatal error: sync: unlock of unlocked mutex error (Nick Craig-Wood) +* Local + * Fix parsing of `--local-nounc` flag (Nick Craig-Wood) + * Add Metadata support (Nick Craig-Wood) +* Crypt + * Support metadata (Nick Craig-Wood) +* Azure Blob + * Calculate Chunksize/blocksize to stay below maxUploadParts (Leroy van Logchem) + * Use chunksize lib to determine chunksize dynamically (Derek Battams) + * Case insensitive access tier (Rob Pickerill) + * Allow remote emulator (azurite) (Lorenzo Maiorfi) +* B2 + * Add `--b2-version-at` flag to show file versions at time specified (SwazRGB) + * Use chunksize lib to determine chunksize dynamically (Derek Battams) +* Chunker + * Mark as not supporting metadata (Nick Craig-Wood) +* Compress + * Support metadata (Nick Craig-Wood) +* Drive + * Make `backend config -o config` add a combined `AllDrives:` remote (Nick Craig-Wood) + * Make `--drive-shared-with-me` work with shared drives (Nick Craig-Wood) + * Add `--drive-resource-key` for accessing link-shared files (Nick Craig-Wood) + * Add backend commands `exportformats` and `importformats` for debugging (Nick Craig-Wood) + * Fix 404 errors on copy/server side copy objects from public folder (Nick Craig-Wood) + * Update Internal OAuth consent screen docs (Phil Shackleton) + * Moved `root_folder_id` to advanced section (Abhiraj) +* Dropbox + * Migrate from deprecated api (m8rge) + * Add logs to show when poll interval limits are exceeded (Nick Craig-Wood) + * Fix nil pointer exception on dropbox 
impersonate user not found (Nick Craig-Wood) +* Fichier +    * Parse api error codes and handle them accordingly (buengese) +* FTP +    * Add support for `disable_utf8` option (Jason Zheng) +    * Revert to upstream `github.com/jlaffaye/ftp` from our fork (Nick Craig-Wood) +* Google Cloud Storage +    * Add `--gcs-no-check-bucket` to minimise transactions and perms (Nick Gooding) +    * Add `--gcs-decompress` flag to decompress gzip-encoded files (Nick Craig-Wood) +        * by default these will be downloaded compressed (which previously failed) +* Hasher +    * Support metadata (Nick Craig-Wood) +* HTTP +    * Fix missing response when using custom auth handler (albertony) +* Jottacloud +    * Add support for upload to custom device and mountpoint (albertony) +    * Always store username in config and use it to avoid initial API request (albertony) +    * Fix issue with server-side copy when destination is in trash (albertony) +    * Fix listing output of remote with special characters (albertony) +* Mailru +    * Fix timeout by using int instead of time.Duration for keeping number of seconds (albertony) +* Mega +    * Document using MEGAcmd to help with login failures (Art M.
Gallagher) +* Onedrive +    * Implement `--poll-interval` for onedrive (Hugo Laloge) +    * Add access scopes option (Sven Gerber) +* Opendrive +    * Resolve lag and truncate bugs (Scott Grimes) +* Pcloud +    * Fix about with no free space left (buengese) +    * Fix cleanup (buengese) +* S3 +    * Use PUT Object instead of presigned URLs to upload single part objects (Nick Craig-Wood) +    * Backend restore command to skip non-GLACIER objects (Vincent Murphy) +    * Use chunksize lib to determine chunksize dynamically (Derek Battams) +    * Retry RequestTimeout errors (Nick Craig-Wood) +    * Implement reading and writing of metadata (Nick Craig-Wood) +* SFTP +    * Add support for about and hashsum on windows server (albertony) +    * Use vendor-specific VFS statistics extension for about if available (albertony) +    * Add `--sftp-chunk-size` to control packet sizes for high latency links (Nick Craig-Wood) +    * Add `--sftp-concurrency` to improve high latency transfers (Nick Craig-Wood) +    * Add `--sftp-set-env` option to set environment variables (Nick Craig-Wood) +    * Add Hetzner Storage Boxes to supported sftp backends (Anthrazz) +* Storj +    * Fix put which led to the file being unreadable when using mount (Erik van Velzen) +* Union +    * Add `min_free_space` option for `lfs`/`eplfs` policies (Nick Craig-Wood) +    * Fix uploading files to union of all bucket based remotes (Nick Craig-Wood) +    * Fix get free space for remotes which don't support it (Nick Craig-Wood) +    * Fix `eplus` policy to select correct entry for existing files (Nick Craig-Wood) +    * Support metadata (Nick Craig-Wood) +* Uptobox +    * Fix root path handling (buengese) +* WebDAV +    * Add support for SharePoint in other specific regions (Noah Hsu) +* Yandex +    * Handle api error on server-side move (albertony) +* Zoho +    * Add Japan and China regions (buengese) + ## v1.58.1 - 2022-04-29 [See commits](https://github.com/rclone/rclone/compare/v1.58.0...v1.58.1) diff --git a/docs/content/chunker.md b/docs/content/chunker.md index
a4ee805d0..4ba8491db 100644 --- a/docs/content/chunker.md +++ b/docs/content/chunker.md @@ -313,7 +313,7 @@ Changing `transactions` is dangerous and requires explicit migration. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/chunker/chunker.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to chunker (Transparently chunk/split large files). +Here are the Standard options specific to chunker (Transparently chunk/split large files). #### --chunker-remote @@ -372,7 +372,7 @@ Properties: ### Advanced options -Here are the advanced options specific to chunker (Transparently chunk/split large files). +Here are the Advanced options specific to chunker (Transparently chunk/split large files). #### --chunker-name-format diff --git a/docs/content/combine.md b/docs/content/combine.md index 98e174a04..c3b5ed322 100644 --- a/docs/content/combine.md +++ b/docs/content/combine.md @@ -127,7 +127,7 @@ See [the Google Drive docs](/drive/#drives) for full info. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/combine/combine.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to combine (Combine several remotes into one). +Here are the Standard options specific to combine (Combine several remotes into one). #### --combine-upstreams @@ -153,4 +153,10 @@ Properties: - Type: SpaceSepList - Default: +### Metadata + +Any metadata supported by the underlying remote is read and written. + +See the [metadata](/docs/#metadata) docs for more info. + {{< rem autogenerated options stop >}} diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md index e1d88dd8e..e98dc7443 100644 --- a/docs/content/commands/rclone.md +++ b/docs/content/commands/rclone.md @@ -42,7 +42,7 @@ See the [global flags page](/flags/) for global options not listed here. 
* [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match. * [rclone checksum](/commands/rclone_checksum/) - Checks the files in the source against a SUM file. * [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible. -* [rclone completion](/commands/rclone_completion/) - generate the autocompletion script for the specified shell +* [rclone completion](/commands/rclone_completion/) - Generate the autocompletion script for the specified shell * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. * [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping identical files. * [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping identical files. diff --git a/docs/content/commands/rclone_check.md b/docs/content/commands/rclone_check.md index 25a27ab83..112633fab 100644 --- a/docs/content/commands/rclone_check.md +++ b/docs/content/commands/rclone_check.md @@ -16,6 +16,10 @@ Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files that don't match. It doesn't alter the source or destination. +For the [crypt](/crypt/) remote there is a dedicated command, +[cryptcheck](/commands/rclone_cryptcheck/), that is able to check +the checksums of the crypted files. + If you supply the `--size-only` flag, it will only compare the sizes not the hashes as well. Use this for a quick check.
diff --git a/docs/content/commands/rclone_completion.md b/docs/content/commands/rclone_completion.md index b30d71dfe..9193c9868 100644 --- a/docs/content/commands/rclone_completion.md +++ b/docs/content/commands/rclone_completion.md @@ -1,17 +1,16 @@ --- title: "rclone completion" -description: "generate the autocompletion script for the specified shell" +description: "Generate the autocompletion script for the specified shell" slug: rclone_completion url: /commands/rclone_completion/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/ and as part of making a release run "make commanddocs" --- # rclone completion -generate the autocompletion script for the specified shell +Generate the autocompletion script for the specified shell ## Synopsis - Generate the autocompletion script for rclone for the specified shell. See each sub-command's help for details on how to use the generated script. @@ -27,8 +26,8 @@ See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. 
-* [rclone completion bash](/commands/rclone_completion_bash/) - generate the autocompletion script for bash -* [rclone completion fish](/commands/rclone_completion_fish/) - generate the autocompletion script for fish -* [rclone completion powershell](/commands/rclone_completion_powershell/) - generate the autocompletion script for powershell -* [rclone completion zsh](/commands/rclone_completion_zsh/) - generate the autocompletion script for zsh +* [rclone completion bash](/commands/rclone_completion_bash/) - Generate the autocompletion script for bash +* [rclone completion fish](/commands/rclone_completion_fish/) - Generate the autocompletion script for fish +* [rclone completion powershell](/commands/rclone_completion_powershell/) - Generate the autocompletion script for powershell +* [rclone completion zsh](/commands/rclone_completion_zsh/) - Generate the autocompletion script for zsh diff --git a/docs/content/commands/rclone_completion_bash.md b/docs/content/commands/rclone_completion_bash.md index ce59b639f..b3c24a6e4 100644 --- a/docs/content/commands/rclone_completion_bash.md +++ b/docs/content/commands/rclone_completion_bash.md @@ -1,33 +1,37 @@ --- title: "rclone completion bash" -description: "generate the autocompletion script for bash" +description: "Generate the autocompletion script for bash" slug: rclone_completion_bash url: /commands/rclone_completion_bash/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/bash/ and as part of making a release run "make commanddocs" --- # rclone completion bash -generate the autocompletion script for bash +Generate the autocompletion script for bash ## Synopsis - Generate the autocompletion script for the bash shell. This script depends on the 'bash-completion' package. If it is not installed already, you can install it via your OS's package manager. 
To load completions in your current shell session: -$ source <(rclone completion bash) + + source <(rclone completion bash) To load completions for every new session, execute once: -Linux: - $ rclone completion bash > /etc/bash_completion.d/rclone -MacOS: - $ rclone completion bash > /usr/local/etc/bash_completion.d/rclone + +### Linux: + + rclone completion bash > /etc/bash_completion.d/rclone + +### macOS: + + rclone completion bash > /usr/local/etc/bash_completion.d/rclone You will need to start a new shell for this setup to take effect. - + ``` rclone completion bash @@ -44,5 +48,5 @@ See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO -* [rclone completion](/commands/rclone_completion/) - generate the autocompletion script for the specified shell +* [rclone completion](/commands/rclone_completion/) - Generate the autocompletion script for the specified shell diff --git a/docs/content/commands/rclone_completion_fish.md b/docs/content/commands/rclone_completion_fish.md index 62645463b..5e09dadfb 100644 --- a/docs/content/commands/rclone_completion_fish.md +++ b/docs/content/commands/rclone_completion_fish.md @@ -1,24 +1,25 @@ --- title: "rclone completion fish" -description: "generate the autocompletion script for fish" +description: "Generate the autocompletion script for fish" slug: rclone_completion_fish url: /commands/rclone_completion_fish/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/fish/ and as part of making a release run "make commanddocs" --- # rclone completion fish -generate the autocompletion script for fish +Generate the autocompletion script for fish ## Synopsis - Generate the autocompletion script for the fish shell. 
To load completions in your current shell session: -$ rclone completion fish | source + + rclone completion fish | source To load completions for every new session, execute once: -$ rclone completion fish > ~/.config/fish/completions/rclone.fish + + rclone completion fish > ~/.config/fish/completions/rclone.fish You will need to start a new shell for this setup to take effect. @@ -38,5 +39,5 @@ See the [global flags page](/flags/) for global options not listed here. ## SEE ALSO -* [rclone completion](/commands/rclone_completion/) - generate the autocompletion script for the specified shell +* [rclone completion](/commands/rclone_completion/) - Generate the autocompletion script for the specified shell diff --git a/docs/content/commands/rclone_completion_powershell.md b/docs/content/commands/rclone_completion_powershell.md index 9bd523e76..8dfbafc45 100644 --- a/docs/content/commands/rclone_completion_powershell.md +++ b/docs/content/commands/rclone_completion_powershell.md @@ -1,21 +1,21 @@ --- title: "rclone completion powershell" -description: "generate the autocompletion script for powershell" +description: "Generate the autocompletion script for powershell" slug: rclone_completion_powershell url: /commands/rclone_completion_powershell/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/powershell/ and as part of making a release run "make commanddocs" --- # rclone completion powershell -generate the autocompletion script for powershell +Generate the autocompletion script for powershell ## Synopsis - Generate the autocompletion script for powershell. To load completions in your current shell session: -PS C:\> rclone completion powershell | Out-String | Invoke-Expression + + rclone completion powershell | Out-String | Invoke-Expression To load completions for every new session, add the output of the above command to your powershell profile. @@ -36,5 +36,5 @@ See the [global flags page](/flags/) for global options not listed here. 
## SEE ALSO -* [rclone completion](/commands/rclone_completion/) - generate the autocompletion script for the specified shell +* [rclone completion](/commands/rclone_completion/) - Generate the autocompletion script for the specified shell diff --git a/docs/content/commands/rclone_completion_zsh.md b/docs/content/commands/rclone_completion_zsh.md index 2e487e674..b48faa25a 100644 --- a/docs/content/commands/rclone_completion_zsh.md +++ b/docs/content/commands/rclone_completion_zsh.md @@ -1,29 +1,32 @@ --- title: "rclone completion zsh" -description: "generate the autocompletion script for zsh" +description: "Generate the autocompletion script for zsh" slug: rclone_completion_zsh url: /commands/rclone_completion_zsh/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/zsh/ and as part of making a release run "make commanddocs" --- # rclone completion zsh -generate the autocompletion script for zsh +Generate the autocompletion script for zsh ## Synopsis - Generate the autocompletion script for the zsh shell. If shell completion is not already enabled in your environment you will need to enable it. You can execute the following once: -$ echo "autoload -U compinit; compinit" >> ~/.zshrc + echo "autoload -U compinit; compinit" >> ~/.zshrc To load completions for every new session, execute once: -# Linux: -$ rclone completion zsh > "${fpath[1]}/_rclone" -# macOS: -$ rclone completion zsh > /usr/local/share/zsh/site-functions/_rclone + +### Linux: + + rclone completion zsh > "${fpath[1]}/_rclone" + +### macOS: + + rclone completion zsh > /usr/local/share/zsh/site-functions/_rclone You will need to start a new shell for this setup to take effect. @@ -43,5 +46,5 @@ See the [global flags page](/flags/) for global options not listed here. 
## SEE ALSO -* [rclone completion](/commands/rclone_completion/) - generate the autocompletion script for the specified shell +* [rclone completion](/commands/rclone_completion/) - Generate the autocompletion script for the specified shell diff --git a/docs/content/commands/rclone_copy.md b/docs/content/commands/rclone_copy.md index 163b0289b..022f3e7b0 100644 --- a/docs/content/commands/rclone_copy.md +++ b/docs/content/commands/rclone_copy.md @@ -14,13 +14,18 @@ Copy files from source to dest, skipping identical files. Copy the source to the destination. Does not transfer files that are identical on source and destination, testing by size and modification -time or MD5SUM. Doesn't delete files from the destination. +time or MD5SUM. Doesn't delete files from the destination. If you +want to also delete files from destination, to make it match source, +use the [sync](/commands/rclone_sync/) command instead. Note that it is always the contents of the directory that is synced, -not the directory so when source:path is a directory, it's the +not the directory itself. So when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. +To copy single files, use the [copyto](/commands/rclone_copyto/) +command instead. + If dest:path doesn't exist, it is created and the source:path contents go there. diff --git a/docs/content/commands/rclone_copyto.md b/docs/content/commands/rclone_copyto.md index 93da1fea8..a5cfa974e 100644 --- a/docs/content/commands/rclone_copyto.md +++ b/docs/content/commands/rclone_copyto.md @@ -16,8 +16,8 @@ If source:path is a file or directory then it copies it to a file or directory named dest:path. This can be used to upload single files to other than their current -name. If the source is a directory then it acts exactly like the copy -command. +name. If the source is a directory then it acts exactly like the +[copy](/commands/rclone_copy/) command. 
So diff --git a/docs/content/commands/rclone_copyurl.md b/docs/content/commands/rclone_copyurl.md index 928bec599..1188f0647 100644 --- a/docs/content/commands/rclone_copyurl.md +++ b/docs/content/commands/rclone_copyurl.md @@ -15,10 +15,11 @@ Copy url content to dest. Download a URL's content and copy it to the destination without saving it in temporary storage. -Setting `--auto-filename` will cause the file name to be retrieved from -the URL (after any redirections) and used in the destination -path. With `--print-filename` in addition, the resulting file name will -be printed. +Setting `--auto-filename` will attempt to automatically determine the filename from the URL +(after any redirections) and use it in the destination path. +With `--header-filename` in +addition, if a specific filename is set in HTTP headers, it will be used instead of the name from the URL. +With `--print-filename` in addition, the resulting file name will be printed. Setting `--no-clobber` will prevent overwriting a file on the destination if there is one with the same name. @@ -34,11 +35,12 @@ rclone copyurl https://example.com dest:path [flags] ## Options ``` -  -a, --auto-filename    Get the file name from the URL and use it for destination file path -  -h, --help             help for copyurl -      --no-clobber       Prevent overwriting file with same name -  -p, --print-filename   Print the resulting name from --auto-filename -      --stdout           Write the output to stdout rather than a file +  -a, --auto-filename     Get the file name from the URL and use it for destination file path +      --header-filename   Get the file name from the Content-Disposition header +  -h, --help              help for copyurl +      --no-clobber        Prevent overwriting file with same name +  -p, --print-filename    Print the resulting name from --auto-filename +      --stdout            Write the output to stdout rather than a file ``` See the [global flags page](/flags/) for global options not listed here.
diff --git a/docs/content/commands/rclone_cryptcheck.md b/docs/content/commands/rclone_cryptcheck.md index cf4d701ad..3e81e53e9 100644 --- a/docs/content/commands/rclone_cryptcheck.md +++ b/docs/content/commands/rclone_cryptcheck.md @@ -12,9 +12,9 @@ Cryptcheck checks the integrity of a crypted remote. ## Synopsis -rclone cryptcheck checks a remote against a crypted remote. This is -the equivalent of running rclone check, but able to check the -checksums of the crypted remote. +rclone cryptcheck checks a remote against a [crypted](/crypt/) remote. +This is the equivalent of running rclone [check](/commands/rclone_check/), +but able to check the checksums of the crypted remote. For it to work the underlying remote of the crypted remote must support some kind of checksum. diff --git a/docs/content/commands/rclone_cryptdecode.md b/docs/content/commands/rclone_cryptdecode.md index b2e5623f0..c4836ec8d 100644 --- a/docs/content/commands/rclone_cryptdecode.md +++ b/docs/content/commands/rclone_cryptdecode.md @@ -15,7 +15,7 @@ Cryptdecode returns unencrypted file names. rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items. -If you supply the --reverse flag, it will return encrypted file names. +If you supply the `--reverse` flag, it will return encrypted file names. use it like this @@ -23,8 +23,8 @@ use it like this rclone cryptdecode --reverse encryptedremote: filename1 filename2 -Another way to accomplish this is by using the `rclone backend encode` (or `decode`)command. -See the documentation on the `crypt` overlay for more info. +Another way to accomplish this is by using the `rclone backend encode` (or `decode`) command. +See the documentation on the [crypt](/crypt/) overlay for more info.
``` diff --git a/docs/content/commands/rclone_dedupe.md b/docs/content/commands/rclone_dedupe.md index 345bec0d0..6b77f17cb 100644 --- a/docs/content/commands/rclone_dedupe.md +++ b/docs/content/commands/rclone_dedupe.md @@ -22,7 +22,7 @@ Opendrive) that can have duplicate file names. It can be run on wrapping backend (e.g. crypt) if they wrap a backend which supports duplicate file names. -However if --by-hash is passed in then dedupe will find files with +However if `--by-hash` is passed in then dedupe will find files with duplicate hashes instead which will work on any backend which supports at least one hash. This can be used to find files with duplicate content. This is known as deduping by hash. diff --git a/docs/content/commands/rclone_delete.md b/docs/content/commands/rclone_delete.md index 0951011f5..09076a46a 100644 --- a/docs/content/commands/rclone_delete.md +++ b/docs/content/commands/rclone_delete.md @@ -12,16 +12,16 @@ Remove the files in path. ## Synopsis -Remove the files in path. Unlike `purge` it obeys include/exclude -filters so can be used to selectively delete files. +Remove the files in path. Unlike [purge](/commands/rclone_purge/) it +obeys include/exclude filters so can be used to selectively delete files. `rclone delete` only deletes files but leaves the directory structure alone. If you want to delete a directory and all of its contents use -the `purge` command. +the [purge](/commands/rclone_purge/) command. If you supply the `--rmdirs` flag, it will remove all empty directories along with it. -You can also use the separate command `rmdir` or `rmdirs` to -delete empty directories only. +You can also use the separate command [rmdir](/commands/rclone_rmdir/) or +[rmdirs](/commands/rclone_rmdirs/) to delete empty directories only. 
For example, to delete all files bigger than 100 MiB, you may first want to check what would be deleted (use either): diff --git a/docs/content/commands/rclone_genautocomplete.md b/docs/content/commands/rclone_genautocomplete.md index c55828cc1..3838dda4a 100644 --- a/docs/content/commands/rclone_genautocomplete.md +++ b/docs/content/commands/rclone_genautocomplete.md @@ -13,7 +13,7 @@ Output completion script for a given shell. Generates a shell completion script for rclone. -Run with --help to list the supported shells. +Run with `--help` to list the supported shells. ## Options diff --git a/docs/content/commands/rclone_hashsum.md b/docs/content/commands/rclone_hashsum.md index 4c85d8b66..0c34c0cd6 100644 --- a/docs/content/commands/rclone_hashsum.md +++ b/docs/content/commands/rclone_hashsum.md @@ -21,6 +21,9 @@ not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling any hash for any remote. +For the MD5 and SHA1 algorithms there are also dedicated commands, +[md5sum](/commands/rclone_md5sum/) and [sha1sum](/commands/rclone_sha1sum/). + This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, @@ -36,6 +39,7 @@ Run without a hash to see the list of all supported hashes, e.g. * crc32 * sha256 * dropbox + * hidrive * mailru * quickxor diff --git a/docs/content/commands/rclone_listremotes.md b/docs/content/commands/rclone_listremotes.md index 98fe86311..54e4317c8 100644 --- a/docs/content/commands/rclone_listremotes.md +++ b/docs/content/commands/rclone_listremotes.md @@ -14,7 +14,7 @@ List all the remotes in the config file. rclone listremotes lists all the available remotes from the config file. -When uses with the -l flag it lists the types too. +When used with the `--long` flag it lists the types too.
``` diff --git a/docs/content/commands/rclone_lsd.md b/docs/content/commands/rclone_lsd.md index cccd67b0f..fbd9b2c92 100644 --- a/docs/content/commands/rclone_lsd.md +++ b/docs/content/commands/rclone_lsd.md @@ -13,7 +13,7 @@ List all directories/containers/buckets in the path. Lists the directories in the source path to standard output. Does not -recurse by default. Use the -R flag to recurse. +recurse by default. Use the `-R` flag to recurse. This command lists the total size of the directory (if known, -1 if not), the modification time (if known, the current time if not), the @@ -31,7 +31,7 @@ Or -1 2017-01-03 14:40:54 -1 2500files -1 2017-07-08 14:39:28 -1 4000files -If you just want the directory names use "rclone lsf --dirs-only". +If you just want the directory names use `rclone lsf --dirs-only`. Any of the filtering options can be applied to this command. diff --git a/docs/content/commands/rclone_lsf.md b/docs/content/commands/rclone_lsf.md index a48a737fe..2cdd3ce5c 100644 --- a/docs/content/commands/rclone_lsf.md +++ b/docs/content/commands/rclone_lsf.md @@ -26,7 +26,7 @@ Eg ferejej3gux/ fubuwic -Use the --format option to control what gets listed. By default this +Use the `--format` option to control what gets listed. By default this is just the path, but you can use these parameters to control the output: @@ -39,9 +39,10 @@ output: m - MimeType of object if known e - encrypted name T - tier of storage if known, e.g. "Hot" or "Cool" + M - Metadata of object in JSON blob format, eg {"key":"value"} So if you wanted the path, size and modification time, you would use ---format "pst", or maybe --format "tsp" to put the path last. +`--format "pst"`, or maybe `--format "tsp"` to put the path last. Eg @@ -53,7 +54,7 @@ Eg 2016-06-25 18:55:40;37600;fubuwic If you specify "h" in the format you will get the MD5 hash by default, -use the "--hash" flag to change which hash you want. Note that this +use the `--hash` flag to change which hash you want. 
Note that this can be returned as an empty string if it isn't available on the object (and for directories), "ERROR" if there was an error reading it from the object and "UNSUPPORTED" if that object does not support that hash @@ -75,7 +76,7 @@ Eg (Though "rclone md5sum ." is an easier way of typing this.) By default the separator is ";" this can be changed with the ---separator flag. Note that separators aren't escaped in the path so +`--separator` flag. Note that separators aren't escaped in the path so putting it last is a good strategy. Eg @@ -97,8 +98,8 @@ Eg test.sh,449 "this file contains a comma, in the file name.txt",6 -Note that the --absolute parameter is useful for making lists of files -to pass to an rclone copy with the --files-from-raw flag. +Note that the `--absolute` parameter is useful for making lists of files +to pass to an rclone copy with the `--files-from-raw` flag. For example, to find all the files modified within one day and copy those only (without traversing the whole directory structure): diff --git a/docs/content/commands/rclone_lsjson.md b/docs/content/commands/rclone_lsjson.md index 79bdcbafd..abf8a39ca 100644 --- a/docs/content/commands/rclone_lsjson.md +++ b/docs/content/commands/rclone_lsjson.md @@ -15,7 +15,7 @@ List directories and objects in the path in JSON format. The output is an array of Items, where each Item looks like this - { + { "Hashes" : { "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", "MD5" : "b1946ac92492d2347c6235b4d2611184", @@ -33,29 +33,32 @@ The output is an array of Items, where each Item looks like this "Path" : "full/path/goes/here/file.txt", "Size" : 6, "Tier" : "hot", - } + } -If --hash is not specified the Hashes property won't be emitted. The -types of hash can be specified with the --hash-type parameter (which -may be repeated). If --hash-type is set then it implies --hash. +If `--hash` is not specified the Hashes property won't be emitted. 
The +types of hash can be specified with the `--hash-type` parameter (which +may be repeated). If `--hash-type` is set then it implies `--hash`. -If --no-modtime is specified then ModTime will be blank. This can +If `--no-modtime` is specified then ModTime will be blank. This can speed things up on remotes where reading the ModTime takes an extra request (e.g. s3, swift). -If --no-mimetype is specified then MimeType will be blank. This can +If `--no-mimetype` is specified then MimeType will be blank. This can speed things up on remotes where reading the MimeType takes an extra request (e.g. s3, swift). -If --encrypted is not specified the Encrypted won't be emitted. +If `--encrypted` is not specified the Encrypted won't be emitted. -If --dirs-only is not specified files in addition to directories are +If `--dirs-only` is not specified files in addition to directories are returned -If --files-only is not specified directories in addition to the files +If `--files-only` is not specified directories in addition to the files will be returned. -if --stat is set then a single JSON blob will be returned about the +If `--metadata` is set then an additional Metadata key will be returned. +This will have metadata in rclone standard format as a JSON object. + +If `--stat` is set then a single JSON blob will be returned about the item pointed to. This will return an error if the item isn't found. However on bucket based backends (like s3, gcs, b2, azureblob etc) if the item isn't found it will return an empty directory as it isn't @@ -64,7 +67,7 @@ possible to tell empty directories from missing directories there. The Path field will only show folders below the remote path being listed. If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt" will be "subfolder/file.txt", not "remote:path/subfolder/file.txt". -When used without --recursive the Path will always be the same as Name. +When used without `--recursive` the Path will always be the same as Name.
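Since `rclone lsjson` emits plain JSON, its output is easy to post-process. A minimal sketch, assuming Python and a captured listing shaped like the Item format shown above (in practice the string would come from running `rclone lsjson remote:path`):

```python
import json

# A captured lsjson-style listing, shaped like the Item format in the
# docs above. In real use this string would be the stdout of
# `rclone lsjson remote:path`.
lsjson_output = '''[
  {"Path": "subfolder/file.txt", "Name": "file.txt", "Size": 6,
   "MimeType": "text/plain",
   "ModTime": "2017-05-31T16:15:57.034468261+01:00", "IsDir": false}
]'''

items = json.loads(lsjson_output)
# Keep only file entries, as --files-only would.
files = [item["Path"] for item in items if not item["IsDir"]]
print(files)  # ['subfolder/file.txt']
```

Note that `Path` is relative to the remote path being listed, as described above, so it can be joined back onto `remote:path` when constructing follow-up commands.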
If the directory is a bucket in a bucket-based backend, then "IsBucket" will be set to true. This key won't be present unless it is @@ -112,7 +115,7 @@ rclone lsjson remote:path [flags] ``` --dirs-only Show only directories in the listing - -M, --encrypted Show the encrypted names + --encrypted Show the encrypted names --files-only Show only files in the listing --hash Include hashes in the output (may take longer) --hash-type stringArray Show only this hash type (may be repeated) diff --git a/docs/content/commands/rclone_md5sum.md b/docs/content/commands/rclone_md5sum.md index ea58ff627..de2b68fb0 100644 --- a/docs/content/commands/rclone_md5sum.md +++ b/docs/content/commands/rclone_md5sum.md @@ -20,6 +20,10 @@ not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling MD5 for any remote. +For other algorithms, see the [hashsum](/commands/rclone_hashsum/) +command. Running `rclone md5sum remote:path` is equivalent +to running `rclone hashsum MD5 remote:path`. + This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, diff --git a/docs/content/commands/rclone_mount.md b/docs/content/commands/rclone_mount.md index 8a7d3cefd..2b88e0720 100644 --- a/docs/content/commands/rclone_mount.md +++ b/docs/content/commands/rclone_mount.md @@ -75,10 +75,10 @@ at all, then 1 PiB is set as both the total and the free size. To run rclone mount on Windows, you will need to download and install [WinFsp](http://www.secfs.net/winfsp/). -[WinFsp](https://github.com/billziss-gh/winfsp) is an open-source +[WinFsp](https://github.com/winfsp/winfsp) is an open-source Windows File System Proxy which makes it easy to write user space file systems for Windows.
It provides a FUSE emulation layer which rclone -uses combination with [cgofuse](https://github.com/billziss-gh/cgofuse). +uses in combination with [cgofuse](https://github.com/winfsp/cgofuse). Both of these packages are by Bill Zissimopoulos who was very helpful during the implementation of rclone mount for Windows. @@ -228,7 +228,7 @@ from Microsoft's Sysinternals suite, which has option `-s` to start processes as the SYSTEM account. Another alternative is to run the mount command from a Windows Scheduled Task, or a Windows Service, configured to run as the SYSTEM account. A third alternative is to use the -[WinFsp.Launcher infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture)). +[WinFsp.Launcher infrastructure](https://github.com/winfsp/winfsp/wiki/WinFsp-Service-Architecture). Note that when running rclone as another user, it will not use the configuration file from your profile unless you tell it to with the [`--config`](https://rclone.org/docs/#config-config-file) option. @@ -410,7 +410,7 @@ about files and directories (but not the data) in memory. Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the -backend. Changes made through the mount will appear immediately or +backend. Changes made through the VFS will appear immediately or invalidate the cache. --dir-cache-time duration Time to cache directory entries for (default 5m0s) @@ -567,6 +567,38 @@ FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected. +### Fingerprinting + +Various parts of the VFS use fingerprinting to see if a local file +copy has changed relative to a remote file. Fingerprints are made +from: + +- size +- modification time +- hash + +where available on an object.
+ +On some backends some of these attributes are slow to read (they take +an extra API call per object, or extra work per object). + +For example `hash` is slow with the `local` and `sftp` backends as +they have to read the entire file and hash it, and `modtime` is slow +with the `s3`, `swift`, `ftp` and `qingstor` backends because they +need to do an extra API call to fetch it. + +If you use the `--vfs-fast-fingerprint` flag then rclone will not +include the slow operations in the fingerprint. This makes the +fingerprinting less accurate but much faster and will improve the +opening time of cached files. + +If you are running a vfs cache over `local`, `s3` or `swift` backends +then using this flag is recommended. + +Note that if you change the value of this flag, the fingerprints of +the files in the cache may be invalidated and the files will need to +be downloaded again. + ## VFS Chunked Reading When rclone reads files from a remote it reads them in chunks. This @@ -607,7 +639,7 @@ read of the modification time takes a transaction. --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. - --read-only Mount read-only. + --read-only Only allow read-only access. Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or @@ -619,7 +651,7 @@ on disk cache file. When using VFS write caching (`--vfs-cache-mode` with value writes or full), the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from cache (the related global flag `--checkers` have no effect on mount). +modified files from the cache (the related global flag `--checkers` has no effect on the VFS). --transfers int Number of file transfers to run in parallel (default 4) @@ -636,28 +668,35 @@ It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. -The `--vfs-case-insensitive` mount flag controls how rclone handles these -two cases. If its value is "false", rclone passes file names to the mounted -file system as-is. If the flag is "true" (or appears without a value on +The `--vfs-case-insensitive` VFS flag controls how rclone handles these +two cases. If its value is "false", rclone passes file names to the remote +as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. If an argument refers +different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is -controlled by an underlying mounted file system. +controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the target) -may differ from case sensitivity of a file system mounted by rclone (the source). +may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +## VFS Disk Options + +This flag allows you to manually set the statistics about the filing system. 
+It can be useful when those statistics cannot be read correctly automatically. + + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + ## Alternate report of used bytes Some backends, most notably S3, do not report the amount of bytes used. @@ -705,7 +744,7 @@ rclone mount remote:path /path/to/mountpoint [flags] --noapplexattr Ignore all "com.apple.*" extended attributes (supported on OSX only) -o, --option stringArray Option for libfuse/WinFsp (repeat if required) --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) - --read-only Mount read-only + --read-only Only allow read-only access --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s) @@ -713,6 +752,8 @@ rclone mount remote:path /path/to/mountpoint [flags] --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) diff --git a/docs/content/commands/rclone_move.md b/docs/content/commands/rclone_move.md index 
0a53c1eea..ce97800a2 100644 --- a/docs/content/commands/rclone_move.md +++ b/docs/content/commands/rclone_move.md @@ -16,6 +16,9 @@ Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap and the remote does not support a server-side directory move operation. +To move single files, use the [moveto](/commands/rclone_moveto/) +command instead. + If no filters are in use and if possible this will server-side move `source:path` into `dest:path`. After this `source:path` will no longer exist. @@ -26,7 +29,8 @@ move will be used, otherwise it will copy it (server-side if possible) into `dest:path` then delete the original (if no errors on copy) in `source:path`. -If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag. +If you want to delete empty source directories after move, use the +`--delete-empty-src-dirs` flag. See the [--no-traverse](/docs/#no-traverse) option for controlling whether rclone lists the destination directory or not. Supplying this diff --git a/docs/content/commands/rclone_moveto.md b/docs/content/commands/rclone_moveto.md index 7ba6f6545..6f8488f83 100644 --- a/docs/content/commands/rclone_moveto.md +++ b/docs/content/commands/rclone_moveto.md @@ -17,7 +17,7 @@ directory named dest:path. This can be used to rename files or upload single files to other than their existing name. If the source is a directory then it acts exactly -like the move command. +like the [move](/commands/rclone_move/) command. So diff --git a/docs/content/commands/rclone_ncdu.md b/docs/content/commands/rclone_ncdu.md index 6e48ee2b8..e1f51e604 100644 --- a/docs/content/commands/rclone_ncdu.md +++ b/docs/content/commands/rclone_ncdu.md @@ -23,7 +23,8 @@ builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along. -Here are the keys - press '?' 
to toggle the help on and off +You can interact with the user interface using key presses, +press '?' to toggle the help on and off. The supported keys are: ↑,↓ or k,j to Move →,l to enter @@ -34,19 +35,41 @@ Here are the keys - press '?' to toggle the help on and off u toggle human-readable format n,s,C,A sort by name,size,count,average size d delete file/directory + v select file/directory + V enter visual select mode + D delete selected files/directories y copy current path to clipboard Y display current path - ^L refresh screen + ^L refresh screen (fix screen corruption) ? to toggle help on and off - q/ESC/c-C to quit + q/ESC/^c to quit + +Listed files/directories may be prefixed by a one-character flag, +some of them combined with a description in brackets at end of line. +These flags have the following meaning: + + e means this is an empty directory, i.e. contains no files (but + may contain empty subdirectories) + ~ means this is a directory where some of the files (possibly in + subdirectories) have unknown size, and therefore the directory + size may be underestimated (and average size inaccurate, as it + is average of the files with known sizes). + . means an error occurred while reading a subdirectory, and + therefore the directory size may be underestimated (and average + size inaccurate) + ! means an error occurred while reading this directory This is an homage to the [ncdu tool](https://dev.yorhel.nl/ncdu) but for rclone remotes. It is missing lots of features at the moment but is useful as it stands. -Note that it might take some time to delete big files/folders. The +Note that it might take some time to delete big files/directories. The UI won't respond in the meantime since the deletion is done synchronously. +For a non-interactive listing of the remote, see the +[tree](/commands/rclone_tree/) command. To just get the total size of +the remote you can also use the [size](/commands/rclone_size/) command.
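The `~` flag logic described above can be sketched as follows. This is an illustrative fragment with invented names (not rclone's implementation): a directory whose entries include unknown sizes gets an underestimated total, and the average is computed only over the known sizes:

```python
# Illustrative sketch with invented names, not rclone's code.
# Sizes are in bytes; an unknown size is represented here as a
# negative value, matching the "unknown size" case described above.
def dir_summary(sizes):
    known = [s for s in sizes if s >= 0]
    total = sum(known)                        # underestimated if sizes are missing
    average = total / len(known) if known else 0
    flag = "~" if len(known) < len(sizes) else ""
    return total, average, flag

print(dir_summary([100, 50, -1]))  # (150, 75.0, '~')
print(dir_summary([10, 20]))       # (30, 15.0, '')
```

The same idea explains the `.` flag: an unreadable subdirectory contributes nothing to the total, so the parent's size is also a lower bound.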
+ ``` rclone ncdu remote:path [flags] diff --git a/docs/content/commands/rclone_obscure.md b/docs/content/commands/rclone_obscure.md index f9eda751b..0a77772a8 100644 --- a/docs/content/commands/rclone_obscure.md +++ b/docs/content/commands/rclone_obscure.md @@ -26,7 +26,7 @@ This command can also accept a password through STDIN instead of an argument by passing a hyphen as an argument. This will use the first line of STDIN as the password not including the trailing newline. -echo "secretpassword" | rclone obscure - + echo "secretpassword" | rclone obscure - If there is no data on STDIN to read, rclone obscure will default to obfuscating the hyphen itself. diff --git a/docs/content/commands/rclone_purge.md b/docs/content/commands/rclone_purge.md index c15c10320..6b87acc56 100644 --- a/docs/content/commands/rclone_purge.md +++ b/docs/content/commands/rclone_purge.md @@ -13,9 +13,10 @@ Remove the path and all of its contents. Remove the path and all of its contents. Note that this does not obey -include/exclude filters - everything will be removed. Use the `delete` -command if you want to selectively delete files. To delete empty directories only, -use command `rmdir` or `rmdirs`. +include/exclude filters - everything will be removed. Use the +[delete](/commands/rclone_delete/) command if you want to selectively +delete files. To delete empty directories only, use command +[rmdir](/commands/rclone_rmdir/) or [rmdirs](/commands/rclone_rmdirs/). **Important**: Since this can cause data loss, test first with the `--dry-run` or the `--interactive`/`-i` flag. diff --git a/docs/content/commands/rclone_rc.md b/docs/content/commands/rclone_rc.md index ae3df6f6c..4d2dbb07f 100644 --- a/docs/content/commands/rclone_rc.md +++ b/docs/content/commands/rclone_rc.md @@ -13,26 +13,26 @@ Run a command against a running rclone. -This runs a command against a running rclone. Use the --url flag to +This runs a command against a running rclone. 
Use the `--url` flag to specify a non-default URL to connect on. This can be either a ":port" which is taken to mean "http://localhost:port" or a "host:port" which is taken to mean "http://host:port" -A username and password can be passed in with --user and --pass. +A username and password can be passed in with `--user` and `--pass`. -Note that --rc-addr, --rc-user, --rc-pass will be read also for --url, ---user, --pass. +Note that `--rc-addr`, `--rc-user`, `--rc-pass` will be read also for +`--url`, `--user`, `--pass`. Arguments should be passed in as parameter=value. The result will be returned as a JSON object by default. -The --json parameter can be used to pass in a JSON blob as an input +The `--json` parameter can be used to pass in a JSON blob as an input instead of key=value arguments. This is the only way of passing in more complicated values. -The -o/--opt option can be used to set a key "opt" with key, value -options in the form "-o key=value" or "-o key". It can be repeated as +The `-o`/`--opt` option can be used to set a key "opt" with key, value +options in the form `-o key=value` or `-o key`. It can be repeated as many times as required. This is useful for rc commands which take the "opt" parameter which by convention is a dictionary of strings. @@ -43,7 +43,7 @@ Will place this in the "opt" value {"key":"value", "key2":""} -The -a/--arg option can be used to set strings in the "arg" value. It +The `-a`/`--arg` option can be used to set strings in the "arg" value. It can be repeated as many times as required. This is useful for rc commands which take the "arg" parameter which by convention is a list of strings. @@ -54,13 +54,13 @@ Will place this in the "arg" value ["value", "value2"] -Use --loopback to connect to the rclone instance running "rclone rc". +Use `--loopback` to connect to the rclone instance running `rclone rc`.
This is very useful for testing commands without having to run an rclone rc server, e.g.: rclone rc --loopback operations/about fs=/ -Use "rclone rc" to see a list of all possible commands. +Use `rclone rc` to see a list of all possible commands. ``` rclone rc commands parameter [flags] diff --git a/docs/content/commands/rclone_rcat.md b/docs/content/commands/rclone_rcat.md index 2085e2c23..86d4c793e 100644 --- a/docs/content/commands/rclone_rcat.md +++ b/docs/content/commands/rclone_rcat.md @@ -30,11 +30,11 @@ must fit into RAM. The cutoff needs to be small enough to adhere the limits of your remote, please see there. Generally speaking, setting this cutoff too high will decrease your performance. -Use the |--size| flag to preallocate the file in advance at the remote end +Use the `--size` flag to preallocate the file in advance at the remote end and actually stream it, even if remote backend doesn't support streaming. -|--size| should be the exact size of the input stream in bytes. If the -size of the stream is different in length to the |--size| passed in +`--size` should be the exact size of the input stream in bytes. If the +size of the stream is different in length to the `--size` passed in then the transfer will likely fail. Note that the upload can also not be retried because the data is diff --git a/docs/content/commands/rclone_rmdir.md b/docs/content/commands/rclone_rmdir.md index f9d84cdfd..c8424e24a 100644 --- a/docs/content/commands/rclone_rmdir.md +++ b/docs/content/commands/rclone_rmdir.md @@ -14,10 +14,10 @@ Remove the empty directory at path. This removes empty directory given by path. Will not remove the path if it has any objects in it, not even empty subdirectories. Use -command `rmdirs` (or `delete` with option `--rmdirs`) -to do that. +command [rmdirs](/commands/rclone_rmdirs/) (or [delete](/commands/rclone_delete/) +with option `--rmdirs`) to do that. -To delete a path and any objects in it, use `purge` command. 
+To delete a path and any objects in it, use the [purge](/commands/rclone_purge/) command. ``` diff --git a/docs/content/commands/rclone_rmdirs.md b/docs/content/commands/rclone_rmdirs.md index 1e90a055b..ba5dc56fc 100644 --- a/docs/content/commands/rclone_rmdirs.md +++ b/docs/content/commands/rclone_rmdirs.md @@ -17,15 +17,16 @@ that only contain empty directories), that it finds under the path. The root path itself will also be removed if it is empty, unless you supply the `--leave-root` flag. -Use command `rmdir` to delete just the empty directory -given by path, not recurse. +Use command [rmdir](/commands/rclone_rmdir/) to delete just the empty +directory given by path, not recurse. This is useful for tidying up remotes that rclone has left a lot of -empty directories in. For example the `delete` command will -delete files but leave the directory structure (unless used with -option `--rmdirs`). +empty directories in. For example the [delete](/commands/rclone_delete/) +command will delete files but leave the directory structure (unless +used with option `--rmdirs`). -To delete a path and any objects in it, use `purge` command. +To delete a path and any objects in it, use the [purge](/commands/rclone_purge/) +command. ``` diff --git a/docs/content/commands/rclone_serve.md b/docs/content/commands/rclone_serve.md index 7663f10fb..12d8141a8 100644 --- a/docs/content/commands/rclone_serve.md +++ b/docs/content/commands/rclone_serve.md @@ -11,8 +11,8 @@ Serve a remote over a protocol. ## Synopsis -rclone serve is used to serve a remote over a given protocol. This -command requires the use of a subcommand to specify the protocol, e.g. +Serve a remote over a given protocol. Requires the use of a +subcommand to specify the protocol, e.g. rclone serve http remote: @@ -40,5 +40,5 @@ See the [global flags page](/flags/) for global options not listed here. * [rclone serve http](/commands/rclone_serve_http/) - Serve the remote over HTTP.
* [rclone serve restic](/commands/rclone_serve_restic/) - Serve the remote for restic's REST API. * [rclone serve sftp](/commands/rclone_serve_sftp/) - Serve the remote over SFTP. -* [rclone serve webdav](/commands/rclone_serve_webdav/) - Serve remote:path over webdav. +* [rclone serve webdav](/commands/rclone_serve_webdav/) - Serve remote:path over WebDAV. diff --git a/docs/content/commands/rclone_serve_dlna.md b/docs/content/commands/rclone_serve_dlna.md index 0ab69644e..0343debb4 100644 --- a/docs/content/commands/rclone_serve_dlna.md +++ b/docs/content/commands/rclone_serve_dlna.md @@ -11,14 +11,16 @@ Serve remote:path over DLNA ## Synopsis -rclone serve dlna is a DLNA media server for media stored in an rclone remote. Many -devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN -and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast -packets (SSDP) and will thus only work on LANs. +Run a DLNA media server for media stored in an rclone remote. Many +devices, such as the Xbox and PlayStation, can automatically discover +this server in the LAN and play audio/video from it. VLC is also +supported. Service discovery uses UDP multicast packets (SSDP) and +will thus only work on LANs. -Rclone will list all files present in the remote, without filtering based on media formats or -file extensions. Additionally, there is no media transcoding support. This means that some -players might show files that they are not able to play back correctly. +Rclone will list all files present in the remote, without filtering +based on media formats or file extensions. Additionally, there is no +media transcoding support. This means that some players might show +files that they are not able to play back correctly. ## Server options @@ -51,7 +53,7 @@ about files and directories (but not the data) in memory. 
Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the -backend. Changes made through the mount will appear immediately or +backend. Changes made through the VFS will appear immediately or invalidate the cache. --dir-cache-time duration Time to cache directory entries for (default 5m0s) @@ -208,6 +210,38 @@ FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected. +### Fingerprinting + +Various parts of the VFS use fingerprinting to see if a local file +copy has changed relative to a remote file. Fingerprints are made +from: + +- size +- modification time +- hash + +where available on an object. + +On some backends some of these attributes are slow to read (they take +an extra API call per object, or extra work per object). + +For example `hash` is slow with the `local` and `sftp` backends as +they have to read the entire file and hash it, and `modtime` is slow +with the `s3`, `swift`, `ftp` and `qingstor` backends because they +need to do an extra API call to fetch it. + +If you use the `--vfs-fast-fingerprint` flag then rclone will not +include the slow operations in the fingerprint. This makes the +fingerprinting less accurate but much faster and will improve the +opening time of cached files. + +If you are running a vfs cache over `local`, `s3` or `swift` backends +then using this flag is recommended. + +Note that if you change the value of this flag, the fingerprints of +the files in the cache may be invalidated and the files will need to +be downloaded again. + ## VFS Chunked Reading When rclone reads files from a remote it reads them in chunks. This @@ -248,7 +282,7 @@ read of the modification time takes a transaction. --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files. - --read-only Mount read-only. + --read-only Only allow read-only access. Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or @@ -260,7 +294,7 @@ on disk cache file. When using VFS write caching (`--vfs-cache-mode` with value writes or full), the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from cache (the related global flag `--checkers` have no effect on mount). +modified files from the cache (the related global flag `--checkers` has no effect on the VFS). --transfers int Number of file transfers to run in parallel (default 4) @@ -277,28 +311,35 @@ It is not allowed for two files in the same directory to differ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. -The `--vfs-case-insensitive` mount flag controls how rclone handles these -two cases. If its value is "false", rclone passes file names to the mounted -file system as-is. If the flag is "true" (or appears without a value on +The `--vfs-case-insensitive` VFS flag controls how rclone handles these +two cases. If its value is "false", rclone passes file names to the remote +as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. If an argument refers +different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. 
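A minimal sketch of the case "fixup" described above, assuming it behaves like a simple two-pass lookup (the `fixup_name` helper is hypothetical, not rclone's actual code):

```python
def fixup_name(requested, existing):
    """Hypothetical model of the --vfs-case-insensitive "fixup":
    prefer an exact match, then a name differing only by case,
    otherwise keep the requested name (e.g. for a newly created file)."""
    if requested in existing:
        # exact match: the stored case is used as-is
        return requested
    for name in existing:
        # fixup: substitute the existing name that differs only by case
        if name.casefold() == requested.casefold():
            return name
    # no match at all: the requested name passes through unchanged
    return requested
```

As the text above notes, the fixup applies only when an existing file is requested; a brand-new name passes through unchanged.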
Case sensitivity of file names created anew by rclone is -controlled by an underlying mounted file system. +controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the target) -may differ from case sensitivity of a file system mounted by rclone (the source). +may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +## VFS Disk Options + +This flag allows you to manually set the statistics about the filing system. +It can be useful when those statistics cannot be read correctly automatically. + + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + ## Alternate report of used bytes Some backends, most notably S3, do not report the amount of bytes used. 
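The fingerprinting scheme from the Fingerprinting section above can be sketched as follows; `fingerprint` is a hypothetical helper (rclone's real fingerprints are computed inside the VFS layer), and MD5 over the whole file stands in for whatever hash the backend supports:

```python
import hashlib
import os

def fingerprint(path, fast=False):
    """Hypothetical sketch: a fingerprint built from size, modification
    time and (unless fast=True) a content hash. Skipping the hash is
    the analogue of --vfs-fast-fingerprint: cheaper but less accurate."""
    st = os.stat(path)
    if fast:
        # fast fingerprint: omit the slow hash, keep size + modtime
        return (st.st_size, int(st.st_mtime))
    with open(path, "rb") as f:
        content_hash = hashlib.md5(f.read()).hexdigest()
    return (st.st_size, int(st.st_mtime), content_hash)
```

Comparing a stored fingerprint with a freshly computed one then tells the cache whether the local copy is stale.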
@@ -332,7 +373,7 @@ rclone serve dlna remote:path [flags] --no-modtime Don't read/write the modification time (can speed things up) --no-seek Don't allow seeking in files --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) - --read-only Mount read-only + --read-only Only allow read-only access --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s) @@ -340,6 +381,8 @@ rclone serve dlna remote:path [flags] --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) diff --git a/docs/content/commands/rclone_serve_docker.md b/docs/content/commands/rclone_serve_docker.md index 976c7a710..b294968a0 100644 --- a/docs/content/commands/rclone_serve_docker.md +++ b/docs/content/commands/rclone_serve_docker.md @@ -69,7 +69,7 @@ about files and directories (but not the data) in memory. Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the -backend. 
Changes made through the mount will appear immediately or +backend. Changes made through the VFS will appear immediately or invalidate the cache. --dir-cache-time duration Time to cache directory entries for (default 5m0s) @@ -226,6 +226,38 @@ FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected. +### Fingerprinting + +Various parts of the VFS use fingerprinting to see if a local file +copy has changed relative to a remote file. Fingerprints are made +from: + +- size +- modification time +- hash + +where available on an object. + +On some backends some of these attributes are slow to read (they take +an extra API call per object, or extra work per object). + +For example `hash` is slow with the `local` and `sftp` backends as +they have to read the entire file and hash it, and `modtime` is slow +with the `s3`, `swift`, `ftp` and `qingstor` backends because they +need to do an extra API call to fetch it. + +If you use the `--vfs-fast-fingerprint` flag then rclone will not +include the slow operations in the fingerprint. This makes the +fingerprinting less accurate but much faster and will improve the +opening time of cached files. + +If you are running a vfs cache over `local`, `s3` or `swift` backends +then using this flag is recommended. + +Note that if you change the value of this flag, the fingerprints of +the files in the cache may be invalidated and the files will need to +be downloaded again. + ## VFS Chunked Reading When rclone reads files from a remote it reads them in chunks. This @@ -266,7 +298,7 @@ read of the modification time takes a transaction. --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. - --read-only Mount read-only. + --read-only Only allow read-only access.
Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or @@ -278,7 +310,7 @@ on disk cache file. When using VFS write caching (`--vfs-cache-mode` with value writes or full), the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from cache (the related global flag `--checkers` have no effect on mount). +modified files from the cache (the related global flag `--checkers` has no effect on the VFS). --transfers int Number of file transfers to run in parallel (default 4) @@ -295,28 +327,35 @@ It is not allowed for two files in the same directory to differ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. -The `--vfs-case-insensitive` mount flag controls how rclone handles these -two cases. If its value is "false", rclone passes file names to the mounted -file system as-is. If the flag is "true" (or appears without a value on +The `--vfs-case-insensitive` VFS flag controls how rclone handles these +two cases. If its value is "false", rclone passes file names to the remote +as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. If an argument refers +different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is -controlled by an underlying mounted file system. 
+controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the target) -may differ from case sensitivity of a file system mounted by rclone (the source). +may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +## VFS Disk Options + +This flag allows you to manually set the statistics about the filing system. +It can be useful when those statistics cannot be read correctly automatically. + + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + ## Alternate report of used bytes Some backends, most notably S3, do not report the amount of bytes used. @@ -367,7 +406,7 @@ rclone serve docker [flags] --noapplexattr Ignore all "com.apple.*" extended attributes (supported on OSX only) -o, --option stringArray Option for libfuse/WinFsp (repeat if required) --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) - --read-only Mount read-only + --read-only Only allow read-only access --socket-addr string Address or absolute path (default: /run/docker/plugins/rclone.sock) --socket-gid int GID for unix socket (default: current process GID) (default 1000) --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) @@ -377,6 +416,8 @@ rclone serve docker [flags] --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + 
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) diff --git a/docs/content/commands/rclone_serve_ftp.md b/docs/content/commands/rclone_serve_ftp.md index 5dfd4633f..3274a92c4 100644 --- a/docs/content/commands/rclone_serve_ftp.md +++ b/docs/content/commands/rclone_serve_ftp.md @@ -12,9 +12,9 @@ Serve remote:path over FTP. ## Synopsis -rclone serve ftp implements a basic ftp server to serve the -remote over FTP protocol. This can be viewed with a ftp client -or you can make a remote of type ftp to read and write it. +Run a basic FTP server to serve a remote over the FTP protocol. +This can be viewed with an FTP client or you can make a remote of +type FTP to read and write it. ## Server options @@ -50,7 +50,7 @@ about files and directories (but not the data) in memory. Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the -backend. Changes made through the mount will appear immediately or +backend. Changes made through the VFS will appear immediately or invalidate the cache. --dir-cache-time duration Time to cache directory entries for (default 5m0s) @@ -207,6 +207,38 @@ FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected. +### Fingerprinting + +Various parts of the VFS use fingerprinting to see if a local file +copy has changed relative to a remote file.
Fingerprints are made +from: + +- size +- modification time +- hash + +where available on an object. + +On some backends some of these attributes are slow to read (they take +an extra API call per object, or extra work per object). + +For example `hash` is slow with the `local` and `sftp` backends as +they have to read the entire file and hash it, and `modtime` is slow +with the `s3`, `swift`, `ftp` and `qingstor` backends because they +need to do an extra API call to fetch it. + +If you use the `--vfs-fast-fingerprint` flag then rclone will not +include the slow operations in the fingerprint. This makes the +fingerprinting less accurate but much faster and will improve the +opening time of cached files. + +If you are running a vfs cache over `local`, `s3` or `swift` backends +then using this flag is recommended. + +Note that if you change the value of this flag, the fingerprints of +the files in the cache may be invalidated and the files will need to +be downloaded again. + ## VFS Chunked Reading When rclone reads files from a remote it reads them in chunks. This @@ -247,7 +279,7 @@ read of the modification time takes a transaction. --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. - --read-only Mount read-only. + --read-only Only allow read-only access. Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or @@ -259,7 +291,7 @@ on disk cache file. When using VFS write caching (`--vfs-cache-mode` with value writes or full), the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from cache (the related global flag `--checkers` have no effect on mount). +modified files from the cache (the related global flag `--checkers` has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4) @@ -276,28 +308,35 @@ It is not allowed for two files in the same directory to differ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. -The `--vfs-case-insensitive` mount flag controls how rclone handles these -two cases. If its value is "false", rclone passes file names to the mounted -file system as-is. If the flag is "true" (or appears without a value on +The `--vfs-case-insensitive` VFS flag controls how rclone handles these +two cases. If its value is "false", rclone passes file names to the remote +as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. If an argument refers +different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is -controlled by an underlying mounted file system. +controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the target) -may differ from case sensitivity of a file system mounted by rclone (the source). +may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. 
If the flag is provided without a value, then it is "true". +## VFS Disk Options + +This flag allows you to manually set the statistics about the filing system. +It can be useful when those statistics cannot be read correctly automatically. + + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + ## Alternate report of used bytes Some backends, most notably S3, do not report the amount of bytes used. @@ -416,7 +455,7 @@ rclone serve ftp remote:path [flags] --passive-port string Passive port range to use (default "30000-32000") --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) --public-ip string Public IP address to advertise for passive connections - --read-only Mount read-only + --read-only Only allow read-only access --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --user string User name for authentication (default "anonymous") @@ -425,6 +464,8 @@ rclone serve ftp remote:path [flags] --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) diff 
--git a/docs/content/commands/rclone_serve_http.md b/docs/content/commands/rclone_serve_http.md index 418c18377..329bc1420 100644 --- a/docs/content/commands/rclone_serve_http.md +++ b/docs/content/commands/rclone_serve_http.md @@ -11,59 +11,59 @@ Serve the remote over HTTP. ## Synopsis -rclone serve http implements a basic web server to serve the remote -over HTTP. This can be viewed in a web browser or you can make a -remote of type http read from it. +Run a basic web server to serve a remote over HTTP. +This can be viewed in a web browser or you can make a remote of type +http read from it. -You can use the filter flags (e.g. --include, --exclude) to control what +You can use the filter flags (e.g. `--include`, `--exclude`) to control what is served. -The server will log errors. Use -v to see access logs. +The server will log errors. Use `-v` to see access logs. ---bwlimit will be respected for file transfers. Use --stats to +`--bwlimit` will be respected for file transfers. Use `--stats` to control the stats printing. ## Server options -Use --addr to specify which IP address and port the server should -listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all +Use `--addr` to specify which IP address and port the server should +listen on, eg `--addr 1.2.3.4:8000` or `--addr :8080` to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port. -If you set --addr to listen on a public or LAN accessible IP address +If you set `--addr` to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info. ---server-read-timeout and --server-write-timeout can be used to +`--server-read-timeout` and `--server-write-timeout` can be used to control the timeouts on the server. Note that this is the total time for a transfer. 
---max-header-bytes controls the maximum number of bytes the server will +accept in the HTTP header. ---baseurl controls the URL prefix that rclone serves from. By default -rclone will serve from the root. If you used --baseurl "/rclone" then +`--baseurl` controls the URL prefix that rclone serves from. By default +rclone will serve from the root. If you used `--baseurl "/rclone"` then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically -inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", ---baseurl "/rclone" and --baseurl "/rclone/" are all treated +inserts leading and trailing "/" on `--baseurl`, so `--baseurl "rclone"`, +`--baseurl "/rclone"` and `--baseurl "/rclone/"` are all treated identically. ### SSL/TLS By default this will serve over http. If you want you can serve over -https. You will need to supply the --cert and --key flags. If you -wish to do client side certificate validation then you will need to -supply --client-ca also. +https. You will need to supply the `--cert` and `--key` flags. +If you wish to do client side certificate validation then you will need to +supply `--client-ca` also. ---cert should be a either a PEM encoded certificate or a concatenation -of that with the CA certificate. --key should be the PEM encoded -private key and --client-ca should be the PEM encoded client +`--cert` should be either a PEM encoded certificate or a concatenation +of that with the CA certificate. `--key` should be the PEM encoded +private key and `--client-ca` should be the PEM encoded client certificate authority certificate. ### Template ---template allows a user to specify a custom markup template for http -and webdav serve functions. The server exports the following markup +`--template` allows a user to specify a custom markup template for HTTP +and WebDAV serve functions.
The server exports the following markup to be used within the template to server pages: | Parameter | Description | @@ -90,9 +90,9 @@ to be used within the template to server pages: By default this will serve files without needing a login. You can either use an htpasswd file which can take lots of users, or -set a single username and password with the --user and --pass flags. +set a single username and password with the `--user` and `--pass` flags. -Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is +Use `--htpasswd /path/to/htpasswd` to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended. @@ -104,9 +104,9 @@ To create an htpasswd file: The password file can be updated while rclone is running. -Use --realm to set the authentication realm. +Use `--realm` to set the authentication realm. -Use --salt to change the password hashing salt from the default. +Use `--salt` to change the password hashing salt from the default. ## VFS - Virtual File System @@ -126,7 +126,7 @@ about files and directories (but not the data) in memory. Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the -backend. Changes made through the mount will appear immediately or +backend. Changes made through the VFS will appear immediately or invalidate the cache. --dir-cache-time duration Time to cache directory entries for (default 5m0s) @@ -283,6 +283,38 @@ FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected. +### Fingerprinting + +Various parts of the VFS use fingerprinting to see if a local file +copy has changed relative to a remote file. Fingerprints are made +from: + +- size +- modification time +- hash + +where available on an object. 
+ +On some backends some of these attributes are slow to read (they take +an extra API call per object, or extra work per object). + +For example `hash` is slow with the `local` and `sftp` backends as +they have to read the entire file and hash it, and `modtime` is slow +with the `s3`, `swift`, `ftp` and `qingstor` backends because they +need to do an extra API call to fetch it. + +If you use the `--vfs-fast-fingerprint` flag then rclone will not +include the slow operations in the fingerprint. This makes the +fingerprinting less accurate but much faster and will improve the +opening time of cached files. + +If you are running a vfs cache over `local`, `s3` or `swift` backends +then using this flag is recommended. + +Note that if you change the value of this flag, the fingerprints of +the files in the cache may be invalidated and the files will need to +be downloaded again. + ## VFS Chunked Reading When rclone reads files from a remote it reads them in chunks. This @@ -323,7 +355,7 @@ read of the modification time takes a transaction. --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. - --read-only Mount read-only. + --read-only Only allow read-only access. Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or @@ -335,7 +367,7 @@ on disk cache file. When using VFS write caching (`--vfs-cache-mode` with value writes or full), the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from cache (the related global flag `--checkers` have no effect on mount). +modified files from the cache (the related global flag `--checkers` has no effect on the VFS). --transfers int Number of file transfers to run in parallel (default 4) @@ -352,7 +384,35 @@ It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. -The `--vfs-case-insensitive` mount flag controls how rclone handles these -two cases. If its value is "false", rclone passes file names to the mounted -file system as-is. If the flag is "true" (or appears without a value on +The `--vfs-case-insensitive` VFS flag controls how rclone handles these +two cases. If its value is "false", rclone passes file names to the remote +as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. If an argument refers +different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is -controlled by an underlying mounted file system. +controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the target) -may differ from case sensitivity of a file system mounted by rclone (the source). +may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +## VFS Disk Options + +This flag allows you to manually set the statistics about the filing system. 
+It can be useful when those statistics cannot be read correctly automatically. + + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + ## Alternate report of used bytes Some backends, most notably S3, do not report the amount of bytes used. @@ -412,7 +451,7 @@ rclone serve http remote:path [flags] --no-seek Don't allow seeking in files --pass string Password for authentication --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) - --read-only Mount read-only + --read-only Only allow read-only access --realm string Realm for authentication --salt string Password hashing salt (default "dlPL2MqE") --server-read-timeout duration Timeout for server reading data (default 1h0m0s) @@ -426,6 +465,8 @@ rclone serve http remote:path [flags] --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) diff --git a/docs/content/commands/rclone_serve_restic.md b/docs/content/commands/rclone_serve_restic.md index 2dec5ed5a..881e697f0 100644 --- a/docs/content/commands/rclone_serve_restic.md +++ b/docs/content/commands/rclone_serve_restic.md @@ -11,8 +11,8 @@ Serve the remote for restic's REST API. 
## Synopsis

-rclone serve restic implements restic's REST backend API
-over HTTP. This allows restic to use rclone as a data storage
+Run a basic web server to serve a remote over restic's REST backend
+API over HTTP. This allows restic to use rclone as a data storage
 mechanism for cloud providers that restic does not support directly.
 
 [Restic](https://restic.net/) is a command-line program for doing
@@ -20,8 +20,8 @@ backups.
 The server will log errors. Use -v to see access logs.
 
---bwlimit will be respected for file transfers. Use --stats to
-control the stats printing.
+`--bwlimit` will be respected for file transfers.
+Use `--stats` to control the stats printing.
 
 ## Setting up rclone for use by restic ###
 
@@ -40,11 +40,11 @@ Where you can replace "backup" in the above by whatever path in the
 remote you wish to use.
 
 By default this will serve on "localhost:8080" you can change this
-with use of the "--addr" flag.
+with use of the `--addr` flag.
 
 You might wish to start this server on boot.
 
-Adding --cache-objects=false will cause rclone to stop caching objects
+Adding `--cache-objects=false` will cause rclone to stop caching objects
 returned from the List call. Caching is normally desirable as it speeds
 up downloading objects, saves transactions and uses very little memory.
 
@@ -90,36 +90,36 @@ these **must** end with /. Eg
 
 ### Private repositories ####
 
-The "--private-repos" flag can be used to limit users to repositories starting
+The `--private-repos` flag can be used to limit users to repositories starting
 with a path of `//`.
 
 ## Server options
 
-Use --addr to specify which IP address and port the server should
-listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all
-IPs. By default it only listens on localhost. You can use port
+Use `--addr` to specify which IP address and port the server should
+listen on, e.g. `--addr 1.2.3.4:8000` or `--addr :8080` to
+listen to all IPs. By default it only listens on localhost.
You can use port :0 to let the OS choose an available port. -If you set --addr to listen on a public or LAN accessible IP address +If you set `--addr` to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info. ---server-read-timeout and --server-write-timeout can be used to +`--server-read-timeout` and `--server-write-timeout` can be used to control the timeouts on the server. Note that this is the total time for a transfer. ---max-header-bytes controls the maximum number of bytes the server will +`--max-header-bytes` controls the maximum number of bytes the server will accept in the HTTP header. ---baseurl controls the URL prefix that rclone serves from. By default -rclone will serve from the root. If you used --baseurl "/rclone" then +`--baseurl` controls the URL prefix that rclone serves from. By default +rclone will serve from the root. If you used `--baseurl "/rclone"` then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically -inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", ---baseurl "/rclone" and --baseurl "/rclone/" are all treated +inserts leading and trailing "/" on `--baseurl`, so `--baseurl "rclone"`, +`--baseurl "/rclone"` and `--baseurl "/rclone/"` are all treated identically. ---template allows a user to specify a custom markup template for http -and webdav serve functions. The server exports the following markup +`--template` allows a user to specify a custom markup template for HTTP +and WebDAV serve functions. The server exports the following markup to be used within the template to server pages: | Parameter | Description | @@ -146,9 +146,9 @@ to be used within the template to server pages: By default this will serve files without needing a login. You can either use an htpasswd file which can take lots of users, or -set a single username and password with the --user and --pass flags. 
+set a single username and password with the `--user` and `--pass` flags.
 
-Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is
+Use `--htpasswd /path/to/htpasswd` to provide an htpasswd file. This is
 in standard apache format and supports MD5, SHA1 and BCrypt for basic
 authentication. Bcrypt is recommended.
 
@@ -160,18 +160,18 @@ To create an htpasswd file:
 
 The password file can be updated while rclone is running.
 
-Use --realm to set the authentication realm.
+Use `--realm` to set the authentication realm.
 
 ### SSL/TLS
 
-By default this will serve over http. If you want you can serve over
-https. You will need to supply the --cert and --key flags. If you
-wish to do client side certificate validation then you will need to
-supply --client-ca also.
+By default this will serve over HTTP. If you want you can serve over
+HTTPS. You will need to supply the `--cert` and `--key` flags.
+If you wish to do client side certificate validation then you will need to
+supply `--client-ca` also.
 
---cert should be either a PEM encoded certificate or a concatenation
-of that with the CA certificate. --key should be the PEM encoded
-private key and --client-ca should be the PEM encoded client
+`--cert` should be either a PEM encoded certificate or a concatenation
+of that with the CA certificate. `--key` should be the PEM encoded
+private key and `--client-ca` should be the PEM encoded client
 certificate authority certificate.
 
diff --git a/docs/content/commands/rclone_serve_sftp.md b/docs/content/commands/rclone_serve_sftp.md
index 5995b9785..226e29769 100644
--- a/docs/content/commands/rclone_serve_sftp.md
+++ b/docs/content/commands/rclone_serve_sftp.md
@@ -11,21 +11,21 @@ Serve the remote over SFTP.
 
 ## Synopsis
 
-rclone serve sftp implements an SFTP server to serve the remote
-over SFTP. This can be used with an SFTP client or you can make a
-remote of type sftp to use with it.
+Run an SFTP server to serve a remote over SFTP.
This can be used +with an SFTP client or you can make a remote of type sftp to use with it. -You can use the filter flags (e.g. --include, --exclude) to control what +You can use the filter flags (e.g. `--include`, `--exclude`) to control what is served. -The server will log errors. Use -v to see access logs. +The server will log errors. Use `-v` to see access logs. ---bwlimit will be respected for file transfers. Use --stats to -control the stats printing. +`--bwlimit` will be respected for file transfers. +Use `--stats` to control the stats printing. -You must provide some means of authentication, either with --user/--pass, -an authorized keys file (specify location with --authorized-keys - the -default is the same as ssh), an --auth-proxy, or set the --no-auth flag for no +You must provide some means of authentication, either with +`--user`/`--pass`, an authorized keys file (specify location with +`--authorized-keys` - the default is the same as ssh), an +`--auth-proxy`, or set the `--no-auth` flag for no authentication when logging in. Note that this also implements a small number of shell commands so @@ -33,30 +33,30 @@ that it can provide md5sum/sha1sum/df information for the rclone sftp backend. This means that is can support SHA1SUMs, MD5SUMs and the about command when paired with the rclone sftp backend. -If you don't supply a host --key then rclone will generate rsa, ecdsa +If you don't supply a host `--key` then rclone will generate rsa, ecdsa and ed25519 variants, and cache them for later use in rclone's cache -directory (see "rclone help flags cache-dir") in the "serve-sftp" +directory (see `rclone help flags cache-dir`) in the "serve-sftp" directory. By default the server binds to localhost:2022 - if you want it to be -reachable externally then supply "--addr :2022" for example. +reachable externally then supply `--addr :2022` for example. 
-Note that the default of "--vfs-cache-mode off" is fine for the rclone +Note that the default of `--vfs-cache-mode off` is fine for the rclone sftp backend, but it may not be with other SFTP clients. -If --stdio is specified, rclone will serve SFTP over stdio, which can +If `--stdio` is specified, rclone will serve SFTP over stdio, which can be used with sshd via ~/.ssh/authorized_keys, for example: restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ... -On the client you need to set "--transfers 1" when using --stdio. +On the client you need to set `--transfers 1` when using `--stdio`. Otherwise multiple instances of the rclone server are started by OpenSSH which can lead to "corrupted on transfer" errors. This is the case because the client chooses indiscriminately which server to send commands to while the servers all have different views of the state of the filing system. The "restrict" in authorized_keys prevents SHA1SUMs and MD5SUMs from beeing -used. Omitting "restrict" and using --sftp-path-override to enable +used. Omitting "restrict" and using `--sftp-path-override` to enable checksumming is possible but less secure and you could use the SFTP server provided by OpenSSH in this case. @@ -79,7 +79,7 @@ about files and directories (but not the data) in memory. Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the -backend. Changes made through the mount will appear immediately or +backend. Changes made through the VFS will appear immediately or invalidate the cache. --dir-cache-time duration Time to cache directory entries for (default 5m0s) @@ -236,6 +236,38 @@ FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected. +### Fingerprinting + +Various parts of the VFS use fingerprinting to see if a local file +copy has changed relative to a remote file. 
Fingerprints are made
+from:
+
+- size
+- modification time
+- hash
+
+where available on an object.
+
+On some backends some of these attributes are slow to read (they take
+an extra API call per object, or extra work per object).
+
+For example `hash` is slow with the `local` and `sftp` backends as
+they have to read the entire file and hash it, and `modtime` is slow
+with the `s3`, `swift`, `ftp` and `qingstor` backends because they
+need to do an extra API call to fetch it.
+
+If you use the `--vfs-fast-fingerprint` flag then rclone will not
+include the slow operations in the fingerprint. This makes the
+fingerprinting less accurate but much faster and will improve the
+opening time of cached files.
+
+If you are running a vfs cache over `local`, `s3` or `swift` backends
+then using this flag is recommended.
+
+Note that if you change the value of this flag, the fingerprints of
+the files in the cache may be invalidated and the files will need to
+be downloaded again.
+
 ## VFS Chunked Reading
 
 When rclone reads files from a remote it reads them in chunks. This
@@ -276,7 +308,7 @@ read of the modification time takes a transaction.
     --no-checksum Don't compare checksums on up/download.
     --no-modtime Don't read/write the modification time (can speed things up).
     --no-seek Don't allow seeking in files.
-    --read-only Mount read-only.
+    --read-only Only allow read-only access.
 
 Sometimes rclone is delivered reads or writes out of order. Rather
 than seeking rclone will wait a short time for the in sequence read or
@@ -288,7 +320,7 @@ on disk cache file.
 When using VFS write caching (`--vfs-cache-mode` with value writes or full),
 the global flag `--transfers` can be set to adjust the number of parallel uploads of
-modified files from cache (the related global flag `--checkers` have no effect on mount).
+modified files from the cache (the related global flag `--checkers` has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4) @@ -305,28 +337,35 @@ It is not allowed for two files in the same directory to differ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. -The `--vfs-case-insensitive` mount flag controls how rclone handles these -two cases. If its value is "false", rclone passes file names to the mounted -file system as-is. If the flag is "true" (or appears without a value on +The `--vfs-case-insensitive` VFS flag controls how rclone handles these +two cases. If its value is "false", rclone passes file names to the remote +as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. If an argument refers +different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is -controlled by an underlying mounted file system. +controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the target) -may differ from case sensitivity of a file system mounted by rclone (the source). +may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. 
If the flag is provided without a value, then it is "true". +## VFS Disk Options + +This flag allows you to manually set the statistics about the filing system. +It can be useful when those statistics cannot be read correctly automatically. + + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + ## Alternate report of used bytes Some backends, most notably S3, do not report the amount of bytes used. @@ -444,7 +483,7 @@ rclone serve sftp remote:path [flags] --no-seek Don't allow seeking in files --pass string Password for authentication --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) - --read-only Mount read-only + --read-only Only allow read-only access --stdio Run an sftp server on run stdin/stdout --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) @@ -454,6 +493,8 @@ rclone serve sftp remote:path [flags] --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) diff --git a/docs/content/commands/rclone_serve_webdav.md 
b/docs/content/commands/rclone_serve_webdav.md index a240e67ab..5209719f4 100644 --- a/docs/content/commands/rclone_serve_webdav.md +++ b/docs/content/commands/rclone_serve_webdav.md @@ -1,23 +1,21 @@ --- title: "rclone serve webdav" -description: "Serve remote:path over webdav." +description: "Serve remote:path over WebDAV." slug: rclone_serve_webdav url: /commands/rclone_serve_webdav/ # autogenerated - DO NOT EDIT, instead edit the source code in cmd/serve/webdav/ and as part of making a release run "make commanddocs" --- # rclone serve webdav -Serve remote:path over webdav. +Serve remote:path over WebDAV. ## Synopsis +Run a basic WebDAV server to serve a remote over HTTP via the +WebDAV protocol. This can be viewed with a WebDAV client, through a web +browser, or you can make a remote of type WebDAV to read and write it. -rclone serve webdav implements a basic webdav server to serve the -remote over HTTP via the webdav protocol. This can be viewed with a -webdav client, through a web browser, or you can make a remote of -type webdav to read and write it. - -## Webdav options +## WebDAV options ### --etag-hash @@ -26,38 +24,37 @@ based on the ModTime and Size of the object. If this flag is set to "auto" then rclone will choose the first supported hash on the backend or you can use a named hash such as -"MD5" or "SHA-1". - -Use "rclone hashsum" to see the full list. +"MD5" or "SHA-1". Use the [hashsum](/commands/rclone_hashsum/) command +to see the full list. ## Server options -Use --addr to specify which IP address and port the server should -listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all -IPs. By default it only listens on localhost. You can use port +Use `--addr` to specify which IP address and port the server should +listen on, e.g. `--addr 1.2.3.4:8000` or `--addr :8080` to +listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port. 
-If you set --addr to listen on a public or LAN accessible IP address +If you set `--addr` to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info. ---server-read-timeout and --server-write-timeout can be used to +`--server-read-timeout` and `--server-write-timeout` can be used to control the timeouts on the server. Note that this is the total time for a transfer. ---max-header-bytes controls the maximum number of bytes the server will +`--max-header-bytes` controls the maximum number of bytes the server will accept in the HTTP header. ---baseurl controls the URL prefix that rclone serves from. By default -rclone will serve from the root. If you used --baseurl "/rclone" then +`--baseurl` controls the URL prefix that rclone serves from. By default +rclone will serve from the root. If you used `--baseurl "/rclone"` then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically -inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", ---baseurl "/rclone" and --baseurl "/rclone/" are all treated +inserts leading and trailing "/" on `--baseurl`, so `--baseurl "rclone"`, +`--baseurl "/rclone"` and `--baseurl "/rclone/"` are all treated identically. ---template allows a user to specify a custom markup template for http -and webdav serve functions. The server exports the following markup +`--template` allows a user to specify a custom markup template for HTTP +and WebDAV serve functions. The server exports the following markup to be used within the template to server pages: | Parameter | Description | @@ -84,9 +81,9 @@ to be used within the template to server pages: By default this will serve files without needing a login. You can either use an htpasswd file which can take lots of users, or -set a single username and password with the --user and --pass flags. 
+set a single username and password with the `--user` and `--pass` flags. -Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is +Use `--htpasswd /path/to/htpasswd` to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended. @@ -98,18 +95,18 @@ To create an htpasswd file: The password file can be updated while rclone is running. -Use --realm to set the authentication realm. +Use `--realm` to set the authentication realm. ### SSL/TLS -By default this will serve over http. If you want you can serve over -https. You will need to supply the --cert and --key flags. If you -wish to do client side certificate validation then you will need to -supply --client-ca also. +By default this will serve over HTTP. If you want you can serve over +HTTPS. You will need to supply the `--cert` and `--key` flags. +If you wish to do client side certificate validation then you will need to +supply `--client-ca` also. ---cert should be either a PEM encoded certificate or a concatenation -of that with the CA certificate. --key should be the PEM encoded -private key and --client-ca should be the PEM encoded client +`--cert` should be either a PEM encoded certificate or a concatenation +of that with the CA certificate. `--key` should be the PEM encoded +private key and `--client-ca` should be the PEM encoded client certificate authority certificate. ## VFS - Virtual File System @@ -130,7 +127,7 @@ about files and directories (but not the data) in memory. Using the `--dir-cache-time` flag, you can control how long a directory should be considered up to date and not refreshed from the -backend. Changes made through the mount will appear immediately or +backend. Changes made through the VFS will appear immediately or invalidate the cache. --dir-cache-time duration Time to cache directory entries for (default 5m0s) @@ -287,6 +284,38 @@ FAT/exFAT do not. 
Rclone will perform very badly if the cache directory is on a
 filesystem which doesn't support sparse files and it will log an
 ERROR message if one is detected.
 
+### Fingerprinting
+
+Various parts of the VFS use fingerprinting to see if a local file
+copy has changed relative to a remote file. Fingerprints are made
+from:
+
+- size
+- modification time
+- hash
+
+where available on an object.
+
+On some backends some of these attributes are slow to read (they take
+an extra API call per object, or extra work per object).
+
+For example `hash` is slow with the `local` and `sftp` backends as
+they have to read the entire file and hash it, and `modtime` is slow
+with the `s3`, `swift`, `ftp` and `qingstor` backends because they
+need to do an extra API call to fetch it.
+
+If you use the `--vfs-fast-fingerprint` flag then rclone will not
+include the slow operations in the fingerprint. This makes the
+fingerprinting less accurate but much faster and will improve the
+opening time of cached files.
+
+If you are running a vfs cache over `local`, `s3` or `swift` backends
+then using this flag is recommended.
+
+Note that if you change the value of this flag, the fingerprints of
+the files in the cache may be invalidated and the files will need to
+be downloaded again.
+
 ## VFS Chunked Reading
 
 When rclone reads files from a remote it reads them in chunks. This
@@ -327,7 +356,7 @@ read of the modification time takes a transaction.
     --no-checksum Don't compare checksums on up/download.
     --no-modtime Don't read/write the modification time (can speed things up).
     --no-seek Don't allow seeking in files.
-    --read-only Mount read-only.
+    --read-only Only allow read-only access.
 
 Sometimes rclone is delivered reads or writes out of order. Rather
 than seeking rclone will wait a short time for the in sequence read or
@@ -339,7 +368,7 @@ on disk cache file.
When using VFS write caching (`--vfs-cache-mode` with value writes or full), the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from cache (the related global flag `--checkers` have no effect on mount). +modified files from the cache (the related global flag `--checkers` has no effect on the VFS). --transfers int Number of file transfers to run in parallel (default 4) @@ -356,28 +385,35 @@ It is not allowed for two files in the same directory to differ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. -The `--vfs-case-insensitive` mount flag controls how rclone handles these -two cases. If its value is "false", rclone passes file names to the mounted -file system as-is. If the flag is "true" (or appears without a value on +The `--vfs-case-insensitive` VFS flag controls how rclone handles these +two cases. If its value is "false", rclone passes file names to the remote +as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below. The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. If an argument refers +different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is -controlled by an underlying mounted file system. +controlled by the underlying remote. 
Note that case sensitivity of the operating system running rclone (the target) -may differ from case sensitivity of a file system mounted by rclone (the source). +may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +## VFS Disk Options + +This flag allows you to manually set the statistics about the filing system. +It can be useful when those statistics cannot be read correctly automatically. + + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + ## Alternate report of used bytes Some backends, most notably S3, do not report the amount of bytes used. @@ -500,7 +536,7 @@ rclone serve webdav remote:path [flags] --no-seek Don't allow seeking in files --pass string Password for authentication --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) - --read-only Mount read-only + --read-only Only allow read-only access --realm string Realm for authentication (default "rclone") --server-read-timeout duration Timeout for server reading data (default 1h0m0s) --server-write-timeout duration Timeout for server writing data (default 1h0m0s) @@ -513,6 +549,8 @@ rclone serve webdav remote:path [flags] --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints 
for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) diff --git a/docs/content/commands/rclone_sha1sum.md b/docs/content/commands/rclone_sha1sum.md index 522b371e5..a61b15e45 100644 --- a/docs/content/commands/rclone_sha1sum.md +++ b/docs/content/commands/rclone_sha1sum.md @@ -20,6 +20,10 @@ not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling SHA-1 for any remote. +For other algorithms, see the [hashsum](/commands/rclone_hashsum/) +command. Running `rclone sha1sum remote:path` is equivalent +to running `rclone hashsum SHA1 remote:path`. + This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hypen will be treated literaly, diff --git a/docs/content/commands/rclone_size.md b/docs/content/commands/rclone_size.md index ce8e4552d..7f75fe981 100644 --- a/docs/content/commands/rclone_size.md +++ b/docs/content/commands/rclone_size.md @@ -9,6 +9,28 @@ url: /commands/rclone_size/ Prints the total size and number of objects in remote:path. +## Synopsis + + +Counts objects in the path and calculates the total size. Prints the +result to standard output. + +By default the output is in human-readable format, but shows values in +both human-readable format as well as the raw numbers (global option +`--human-readable` is not considered). Use option `--json` +to format output as JSON instead. + +Recurses by default, use `--max-depth 1` to stop the +recursion. 
+ +Some backends do not always provide file sizes, see for example +[Google Photos](/googlephotos/#size) and +[Google Drive](/drive/#limitations-of-google-docs). +Rclone will then show a notice in the log indicating how many such +files were encountered, and count them in as empty files in the output +of the size command. + + ``` rclone size remote:path [flags] ``` diff --git a/docs/content/commands/rclone_sync.md b/docs/content/commands/rclone_sync.md index 5596240e1..0f7da84a2 100644 --- a/docs/content/commands/rclone_sync.md +++ b/docs/content/commands/rclone_sync.md @@ -16,7 +16,9 @@ Sync the source to the destination, changing the destination only. Doesn't transfer files that are identical on source and destination, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files -if necessary (except duplicate objects, see below). +if necessary (except duplicate objects, see below). If you don't +want to delete files from destination, use the +[copy](/commands/rclone_copy/) command instead. **Important**: Since this can cause data loss, test first with the `--dry-run` or the `--interactive`/`-i` flag. @@ -30,7 +32,7 @@ those providers that support it) are also not yet handled. It is always the contents of the directory that is synced, not the directory itself. So when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See -extended explanation in the `copy` command above if unsure. +extended explanation in the [copy](/commands/rclone_copy/) command if unsure. If dest:path doesn't exist, it is created and the source:path contents go there. diff --git a/docs/content/commands/rclone_test.md b/docs/content/commands/rclone_test.md index 2217f6879..d04ccb803 100644 --- a/docs/content/commands/rclone_test.md +++ b/docs/content/commands/rclone_test.md @@ -37,6 +37,7 @@ See the [global flags page](/flags/) for global options not listed here. 
 * [rclone test changenotify](/commands/rclone_test_changenotify/) - Log any change notify requests for the remote passed in.
 * [rclone test histogram](/commands/rclone_test_histogram/) - Makes a histogram of file name characters.
 * [rclone test info](/commands/rclone_test_info/) - Discovers file name or other limitations for paths.
+* [rclone test makefile](/commands/rclone_test_makefile/) - Make files with random contents of the size given
 * [rclone test makefiles](/commands/rclone_test_makefiles/) - Make a random file hierarchy in a directory
 * [rclone test memory](/commands/rclone_test_memory/) - Load all the objects at remote:path into memory and report memory stats.
diff --git a/docs/content/commands/rclone_test_makefile.md b/docs/content/commands/rclone_test_makefile.md
new file mode 100644
index 000000000..5acddb5c1
--- /dev/null
+++ b/docs/content/commands/rclone_test_makefile.md
@@ -0,0 +1,33 @@
+---
+title: "rclone test makefile"
+description: "Make files with random contents of the size given"
+slug: rclone_test_makefile
+url: /commands/rclone_test_makefile/
+# autogenerated - DO NOT EDIT, instead edit the source code in cmd/test/makefile/ and as part of making a release run "make commanddocs"
+---
+# rclone test makefile
+
+Make files with random contents of the size given
+
+```
+rclone test makefile <size> [<file>]+ [flags]
+```
+
+## Options
+
+```
+      --ascii      Fill files with random ASCII printable bytes only
+      --chargen    Fill files with an ASCII chargen pattern
+  -h, --help       help for makefile
+      --pattern    Fill files with a periodic pattern
+      --seed int   Seed for the random number generator (0 for random) (default 1)
+      --sparse     Make the files sparse (appear to be filled with ASCII 0x00)
+      --zero       Fill files with ASCII 0x00
+```
+
+See the [global flags page](/flags/) for global options not listed here.
+
+## SEE ALSO
+
+* [rclone test](/commands/rclone_test/) - Run a test command
+

diff --git a/docs/content/commands/rclone_test_makefiles.md b/docs/content/commands/rclone_test_makefiles.md
index f0816d14e..ad8e3f14b 100644
--- a/docs/content/commands/rclone_test_makefiles.md
+++ b/docs/content/commands/rclone_test_makefiles.md
@@ -16,6 +16,8 @@ rclone test makefiles [flags]

## Options

```
+      --ascii                      Fill files with random ASCII printable bytes only
+      --chargen                    Fill files with an ASCII chargen pattern
      --files int                  Number of files to create (default 1000)
      --files-per-directory int    Average number of files per directory (default 10)
  -h, --help                       help for makefiles
@@ -23,7 +25,10 @@ rclone test makefiles [flags]
      --max-name-length int        Maximum size of file names (default 12)
      --min-file-size SizeSuffix   Minimum size of file to create
      --min-name-length int        Minimum size of file names (default 4)
+      --pattern                    Fill files with a periodic pattern
      --seed int                   Seed for the random number generator (0 for random) (default 1)
+      --sparse                     Make the files sparse (appear to be filled with ASCII 0x00)
+      --zero                       Fill files with ASCII 0x00
```

See the [global flags page](/flags/) for global options not listed here.

diff --git a/docs/content/commands/rclone_tree.md b/docs/content/commands/rclone_tree.md
index 5de357227..1995a108f 100644
--- a/docs/content/commands/rclone_tree.md
+++ b/docs/content/commands/rclone_tree.md
@@ -29,12 +29,16 @@ For example

1 directories, 5 files

You can use any of the filtering options with the tree command (e.g.
---include and --exclude). You can also use --fast-list.
+`--include` and `--exclude`). You can also use `--fast-list`.

The tree command has many options for controlling the listing which
-are compatible with the tree command. Note that not all of them have
+are compatible with the tree command, for example you can include file
+sizes with `--size`. Note that not all of them have
short options as they conflict with rclone's short options.
+For a more interactive navigation of the remote see the +[ncdu](/commands/rclone_ncdu/) command. + ``` rclone tree remote:path [flags] diff --git a/docs/content/compress.md b/docs/content/compress.md index f0ff203ab..9a2d6b974 100644 --- a/docs/content/compress.md +++ b/docs/content/compress.md @@ -90,7 +90,7 @@ size of the uncompressed file. The file names should not be changed by anything {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/compress/compress.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to compress (Compress a remote). +Here are the Standard options specific to compress (Compress a remote). #### --compress-remote @@ -119,7 +119,7 @@ Properties: ### Advanced options -Here are the advanced options specific to compress (Compress a remote). +Here are the Advanced options specific to compress (Compress a remote). #### --compress-level @@ -156,4 +156,10 @@ Properties: - Type: SizeSuffix - Default: 20Mi +### Metadata + +Any metadata supported by the underlying remote is read and written. + +See the [metadata](/docs/#metadata) docs for more info. + {{< rem autogenerated options stop >}} diff --git a/docs/content/crypt.md b/docs/content/crypt.md index 99b9f629c..92390bda3 100644 --- a/docs/content/crypt.md +++ b/docs/content/crypt.md @@ -419,7 +419,7 @@ check the checksums properly. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/crypt/crypt.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to crypt (Encrypt/Decrypt a remote). +Here are the Standard options specific to crypt (Encrypt/Decrypt a remote). #### --crypt-remote @@ -504,7 +504,7 @@ Properties: ### Advanced options -Here are the advanced options specific to crypt (Encrypt/Decrypt a remote). +Here are the Advanced options specific to crypt (Encrypt/Decrypt a remote). 
#### --crypt-server-side-across-configs @@ -584,6 +584,12 @@ Properties: - Encode using base32768. Suitable if your remote counts UTF-16 or - Unicode codepoint instead of UTF-8 byte length. (Eg. Onedrive) +### Metadata + +Any metadata supported by the underlying remote is read and written. + +See the [metadata](/docs/#metadata) docs for more info. + ## Backend commands Here are the commands specific to the crypt backend. @@ -594,7 +600,7 @@ Run them with The help below will explain what arguments each command takes. -See [the "rclone backend" command](/commands/rclone_backend/) for more +See the [backend](/commands/rclone_backend/) command for more info on how to pass options and arguments. These can be run on a running backend using the rc command diff --git a/docs/content/drive.md b/docs/content/drive.md index 08d1bb833..a4fdf8ccb 100644 --- a/docs/content/drive.md +++ b/docs/content/drive.md @@ -548,7 +548,7 @@ Google Documents. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/drive/drive.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to drive (Google Drive). +Here are the Standard options specific to drive (Google Drive). #### --drive-client-id @@ -603,22 +603,6 @@ Properties: - Allows read-only access to file metadata but - does not allow any access to read or download file content. -#### --drive-root-folder-id - -ID of the root folder. -Leave blank normally. - -Fill in to access "Computers" folders (see docs), or for rclone to use -a non root folder as its starting point. - - -Properties: - -- Config: root_folder_id -- Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID -- Type: string -- Required: false - #### --drive-service-account-file Service Account Credentials JSON file path. @@ -648,7 +632,7 @@ Properties: ### Advanced options -Here are the advanced options specific to drive (Google Drive). +Here are the Advanced options specific to drive (Google Drive). 
#### --drive-token
@@ -687,6 +671,22 @@ Properties:
- Type: string
- Required: false

+#### --drive-root-folder-id
+
+ID of the root folder.
+Leave blank normally.
+
+Fill in to access "Computers" folders (see docs), or for rclone to use
+a non root folder as its starting point.
+
+
+Properties:
+
+- Config: root_folder_id
+- Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID
+- Type: string
+- Required: false
+
#### --drive-service-account-credentials

Service Account Credentials JSON blob.
@@ -1167,6 +1167,34 @@ Properties:
- Type: bool
- Default: false

+#### --drive-resource-key
+
+Resource key for accessing a link-shared file.
+
+If you need to access files shared with a link like this
+
+    https://drive.google.com/drive/folders/XXX?resourcekey=YYY&usp=sharing
+
+Then you will need to use the first part "XXX" as the "root_folder_id"
+and the second part "YYY" as the "resource_key", otherwise you will get
+404 not found errors when trying to access the directory.
+
+See: https://developers.google.com/drive/api/guides/resource-keys
+
+This resource key requirement only applies to a subset of old files.
+
+Note also that opening the folder once in the web interface (with the
+user you've authenticated rclone with) seems to be enough so that the
+resource key is not needed.
+
+
+Properties:
+
+- Config: resource_key
+- Env Var: RCLONE_DRIVE_RESOURCE_KEY
+- Type: string
+- Required: false
+
#### --drive-encoding

The encoding for the backend.
@@ -1190,7 +1218,7 @@ Run them with

The help below will explain what arguments each command takes.

-See [the "rclone backend" command](/commands/rclone_backend/) for more
+See the [backend](/commands/rclone_backend/) command for more
info on how to pass options and arguments.

These can be run on a running backend using the rc command
@@ -1292,7 +1320,7 @@ This will return a JSON list of objects like this

With the -o config parameter it will output the list in a format
suitable for adding to a config file to make aliases for all the
-drives found.
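The `--drive-resource-key` text above maps the two parts of a share link onto `root_folder_id` and `resource_key`. That split can be sketched in Python; `split_share_link` is a hypothetical helper for illustration, since rclone itself takes the two values as config options:

```python
from urllib.parse import urlparse, parse_qs

def split_share_link(url):
    """Split a Drive share link into (root_folder_id, resource_key).
    The folder id is the last path segment (the "XXX" part); the resource
    key is the `resourcekey` query parameter (the "YYY" part)."""
    parts = urlparse(url)
    folder_id = parts.path.rstrip("/").rsplit("/", 1)[-1]
    resource_key = parse_qs(parts.query).get("resourcekey", [""])[0]
    return folder_id, resource_key
```

For a link without a `resourcekey` parameter the helper returns an empty key, matching the case where only `root_folder_id` needs to be set.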
+drives found and a combined drive.

    [My Drive]
    type = alias
@@ -1302,10 +1330,15 @@ drives found.
    type = alias
    remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:

-Adding this to the rclone config file will cause those team drives to
-be accessible with the aliases shown. This may require manual editing
-of the names.
+    [AllDrives]
+    type = combine
+    upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"

+Adding this to the rclone config file will cause those team drives to
+be accessible with the aliases shown. Any illegal characters will be
+substituted with "_" and duplicate names will have numbers suffixed.
+It will also add a remote called AllDrives which shows all the shared
+drives combined into one directory tree.

### untrash
@@ -1362,6 +1395,18 @@ attempted if possible.

Use the -i flag to see what would be copied before copying.

+### exportformats
+
+Dump the export formats for debug purposes
+
+    rclone backend exportformats remote: [options] [<arguments>+]
+
+### importformats
+
+Dump the import formats for debug purposes
+
+    rclone backend importformats remote: [options] [<arguments>+]
+
{{< rem autogenerated options stop >}}

## Limitations

diff --git a/docs/content/dropbox.md b/docs/content/dropbox.md
index 690a74c0f..62b351c51 100644
--- a/docs/content/dropbox.md
+++ b/docs/content/dropbox.md
@@ -182,7 +182,7 @@ finishes up the last batch using this mode.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/dropbox/dropbox.go then run make backenddocs" >}}
### Standard options

-Here are the standard options specific to dropbox (Dropbox).
+Here are the Standard options specific to dropbox (Dropbox).

#### --dropbox-client-id
@@ -212,7 +212,7 @@ Properties:

### Advanced options

-Here are the advanced options specific to dropbox (Dropbox).
+Here are the Advanced options specific to dropbox (Dropbox).
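The `drives -o config` output described above sanitises shared-drive names before using them as config section names. A Python sketch of that behaviour follows; it is illustrative only, and the exact set of illegal characters and the numbering scheme used here are assumptions, not rclone's rules:

```python
import re

def alias_names(drive_names):
    """Turn shared-drive names into config section names: substitute
    characters assumed illegal in a section name with "_" and give
    duplicate names a numeric suffix."""
    counts = {}
    result = []
    for name in drive_names:
        safe = re.sub(r"[^\w .-]", "_", name)  # assumed notion of "illegal"
        counts[safe] = counts.get(safe, 0) + 1
        result.append(safe if counts[safe] == 1 else f"{safe}-{counts[safe]}")
    return result
```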
#### --dropbox-token diff --git a/docs/content/fichier.md b/docs/content/fichier.md index 38ea450d9..0e2259b69 100644 --- a/docs/content/fichier.md +++ b/docs/content/fichier.md @@ -116,7 +116,7 @@ as they can't be used in JSON strings. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/fichier/fichier.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to fichier (1Fichier). +Here are the Standard options specific to fichier (1Fichier). #### --fichier-api-key @@ -131,7 +131,7 @@ Properties: ### Advanced options -Here are the advanced options specific to fichier (1Fichier). +Here are the Advanced options specific to fichier (1Fichier). #### --fichier-shared-folder diff --git a/docs/content/filefabric.md b/docs/content/filefabric.md index 66c1e4732..225573ba5 100644 --- a/docs/content/filefabric.md +++ b/docs/content/filefabric.md @@ -154,7 +154,7 @@ The ID for "S3 Storage" would be `120673761`. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/filefabric/filefabric.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to filefabric (Enterprise File Fabric). +Here are the Standard options specific to filefabric (Enterprise File Fabric). #### --filefabric-url @@ -213,7 +213,7 @@ Properties: ### Advanced options -Here are the advanced options specific to filefabric (Enterprise File Fabric). +Here are the Advanced options specific to filefabric (Enterprise File Fabric). #### --filefabric-token diff --git a/docs/content/flags.md b/docs/content/flags.md index bfc367e63..cd9801851 100644 --- a/docs/content/flags.md +++ b/docs/content/flags.md @@ -38,6 +38,7 @@ These flags are available for every command. 
--delete-during When synchronizing, delete files during transfer --delete-excluded Delete files on dest excluded from sync --disable string Disable a comma separated list of features (use --disable help to see a list) + --disable-http-keep-alives Disable HTTP keep-alives and use each connection once. --disable-http2 Disable HTTP/2 in the global transport -n, --dry-run Do a trial run with no permanent changes --dscp string Set DSCP value to connections, value or name, e.g. CS1, LE, DF, AF21 @@ -86,6 +87,8 @@ These flags are available for every command. --max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000) --max-transfer SizeSuffix Maximum size of data to transfer (default off) --memprofile string Write memory profile to file + -M, --metadata If set, preserve metadata when copying objects + --metadata-set stringArray Add metadata key=value when uploading --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) --modify-window duration Max time diff to be considered the same (default 1ns) @@ -157,7 +160,7 @@ These flags are available for every command. --use-json-log Use json log format --use-mmap Use mmap allocator (see docs) --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string (default "rclone/v1.58.0") + --user-agent string Set the user-agent to a specified string (default "rclone/v1.59.0") -v, --verbose count Print lots more stuff (repeat for more) ``` @@ -212,6 +215,7 @@ and may be set in the config file. 
--b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) + --b2-version-at Time Show file versions as they were at the specified time (default off) --b2-versions Include old versions in directory listings --box-access-token string Box App Primary Access Token --box-auth-url string Auth server URL @@ -251,6 +255,7 @@ and may be set in the config file. --chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks --chunker-hash-type string Choose how chunker handles hash sums (default "md5") --chunker-remote string Remote to chunk/unchunk + --combine-upstreams SpaceSepList Upstreams for combining --compress-level int GZIP compression level (-2 to 9) (default -1) --compress-mode string Compression mode (default "gzip") --compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi) @@ -283,6 +288,7 @@ and may be set in the config file. --drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000) --drive-pacer-burst int Number of API calls to allow without sleeping (default 100) --drive-pacer-min-sleep Duration Minimum time to sleep between API calls (default 100ms) + --drive-resource-key string Resource key for accessing a link-shared file --drive-root-folder-id string ID of the root folder --drive-scope string Scope that rclone should use when requesting access from drive --drive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different drive configs @@ -337,8 +343,8 @@ and may be set in the config file. 
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited --ftp-disable-epsv Disable using EPSV even if server advertises support --ftp-disable-mlsd Disable using MLSD even if server advertises support - --ftp-disable-utf8 Disable using UTF-8 even if server advertises support --ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS) + --ftp-disable-utf8 Disable using UTF-8 even if server advertises support --ftp-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot) --ftp-explicit-tls Use Explicit FTPS (FTP over TLS) --ftp-host string FTP host to connect to @@ -357,8 +363,10 @@ and may be set in the config file. --gcs-bucket-policy-only Access checks should use bucket-level IAM policies --gcs-client-id string OAuth Client Id --gcs-client-secret string OAuth Client Secret + --gcs-decompress If set this will decompress gzip encoded objects --gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot) --gcs-location string Location for the newly created buckets + --gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it --gcs-object-acl string Access Control List for new objects --gcs-project-number string Project number --gcs-service-account-file string Service Account Credentials JSON file path @@ -384,10 +392,24 @@ and may be set in the config file. 
--hdfs-namenode string Hadoop name node and port --hdfs-service-principal-name string Kerberos service principal name for the namenode --hdfs-username string Hadoop user name + --hidrive-auth-url string Auth server URL + --hidrive-chunk-size SizeSuffix Chunksize for chunked uploads (default 48Mi) + --hidrive-client-id string OAuth Client Id + --hidrive-client-secret string OAuth Client Secret + --hidrive-disable-fetching-member-count Do not fetch number of objects in directories unless it is absolutely necessary + --hidrive-encoding MultiEncoder The encoding for the backend (default Slash,Dot) + --hidrive-endpoint string Endpoint for the service (default "https://api.hidrive.strato.com/2.1") + --hidrive-root-prefix string The root/parent folder for all paths (default "/") + --hidrive-scope-access string Access permissions that rclone should use when requesting access from HiDrive (default "rw") + --hidrive-scope-role string User-level that rclone should use when requesting access from HiDrive (default "user") + --hidrive-token string OAuth Access Token as a JSON blob + --hidrive-token-url string Token server url + --hidrive-upload-concurrency int Concurrency for chunked uploads (default 4) + --hidrive-upload-cutoff SizeSuffix Cutoff/Threshold for chunked uploads (default 96Mi) --http-headers CommaSepList Set HTTP headers for all transactions --http-no-head Don't use HEAD requests --http-no-slash Set this if the site doesn't end directories with / - --http-url string URL of http host to connect to + --http-url string URL of HTTP host to connect to --hubic-auth-url string Auth server URL --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi) --hubic-client-id string OAuth Client Id @@ -396,6 +418,13 @@ and may be set in the config file. 
--hubic-no-chunk Don't chunk files during streaming upload --hubic-token string OAuth Access Token as a JSON blob --hubic-token-url string Token server url + --internetarchive-access-key-id string IAS3 Access Key + --internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true) + --internetarchive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot) + --internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org") + --internetarchive-front-endpoint string Host of InternetArchive Frontend (default "https://archive.org") + --internetarchive-secret-access-key string IAS3 Secret Key (password) + --internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s) --jottacloud-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi) @@ -417,7 +446,7 @@ and may be set in the config file. --local-no-preallocate Disable preallocation of disk space for transferred files --local-no-set-modtime Disable setting modtime --local-no-sparse Disable sparse files for multi-thread downloads - --local-nounc string Disable UNC (long path names) conversion on Windows + --local-nounc Disable UNC (long path names) conversion on Windows --local-unicode-normalization Apply unicode NFC normalization to paths and filenames --local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated) --mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true) @@ -438,11 +467,11 @@ and may be set in the config file. 
--netstorage-protocol string Select between HTTP or HTTPS protocol (default "https") --netstorage-secret string Set the NetStorage account secret/G2O key for authentication (obscured) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only) + --onedrive-access-scopes SpaceSepList Set scopes to be requested by rclone (default Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access) --onedrive-auth-url string Auth server URL --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi) --onedrive-client-id string OAuth Client Id --onedrive-client-secret string OAuth Client Secret - --onedrive-disable-site-permission Disable the request for Sites.Read.All permission --onedrive-drive-id string The ID of the drive to use --onedrive-drive-type string The type of the drive (personal | business | documentLibrary) --onedrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot) @@ -466,9 +495,11 @@ and may be set in the config file. 
--pcloud-client-secret string OAuth Client Secret --pcloud-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --pcloud-hostname string Hostname to connect to (default "api.pcloud.com") + --pcloud-password string Your pcloud password (obscured) --pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point (default "d0") --pcloud-token string OAuth Access Token as a JSON blob --pcloud-token-url string Token server url + --pcloud-username string Your pcloud username --premiumizeme-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot) --putio-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --qingstor-access-key-id string QingStor Access Key ID @@ -521,6 +552,7 @@ and may be set in the config file. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint --s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset) + --s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads --s3-v2-auth If true use v2 authentication --seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled) --seafile-create-library Should rclone create a library if it doesn't exist @@ -531,6 +563,8 @@ and may be set in the config file. 
--seafile-url string URL of seafile host to connect to --seafile-user string User name (usually email address) --sftp-ask-password Allow asking for SFTP password when needed + --sftp-chunk-size SizeSuffix Upload and download chunk size (default 32Ki) + --sftp-concurrency int The maximum number of outstanding requests for one file (default 64) --sftp-disable-concurrent-reads If set don't use concurrent reads --sftp-disable-concurrent-writes If set don't use concurrent writes --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available @@ -543,12 +577,14 @@ and may be set in the config file. --sftp-known-hosts-file string Optional path to known_hosts file --sftp-md5sum-command string The command used to read md5 hashes --sftp-pass string SSH password, leave blank to use ssh-agent (obscured) - --sftp-path-override string Override path used by SSH connection + --sftp-path-override string Override path used by SSH shell commands --sftp-port int SSH port number (default 22) --sftp-pubkey-file string Optional path to public key file --sftp-server-command string Specifies the path or command to run a sftp server on the remote host + --sftp-set-env SpaceSepList Environment variables to pass to sftp and commands --sftp-set-modtime Set the modified time on the remote if set (default true) --sftp-sha1sum-command string The command used to read sha1 hashes + --sftp-shell-type string The type of SSH shell on remote server, if any --sftp-skip-links Set to skip any symlinks and any other non regular files --sftp-subsystem string Specifies the SSH2 subsystem on the remote host (default "sftp") --sftp-use-fstat If set use fstat instead of stat @@ -605,6 +641,7 @@ and may be set in the config file. 
--union-action-policy string Policy to choose upstream on ACTION category (default "epall") --union-cache-time int Cache time of usage and free space (in seconds) (default 120) --union-create-policy string Policy to choose upstream on CREATE category (default "epmfs") + --union-min-free-space SizeSuffix Minimum viable free space for lfs/eplfs policies (default 1Gi) --union-search-policy string Policy to choose upstream on SEARCH category (default "ff") --union-upstreams string List of space separated upstreams --uptobox-access-token string Your access token @@ -616,7 +653,7 @@ and may be set in the config file. --webdav-pass string Password (obscured) --webdav-url string URL of http host to connect to --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using + --webdav-vendor string Name of the WebDAV site/service/software you are using --yandex-auth-url string Auth server URL --yandex-client-id string OAuth Client Id --yandex-client-secret string OAuth Client Secret diff --git a/docs/content/ftp.md b/docs/content/ftp.md index f2d4a4229..617ca38d7 100644 --- a/docs/content/ftp.md +++ b/docs/content/ftp.md @@ -138,7 +138,7 @@ Just hit a selection number when prompted. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/ftp/ftp.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to ftp (FTP Connection). +Here are the Standard options specific to ftp (FTP). #### --ftp-host @@ -221,7 +221,7 @@ Properties: ### Advanced options -Here are the advanced options specific to ftp (FTP Connection). +Here are the Advanced options specific to ftp (FTP). #### --ftp-concurrency diff --git a/docs/content/googlecloudstorage.md b/docs/content/googlecloudstorage.md index 3c1b6e2bc..61b751d15 100644 --- a/docs/content/googlecloudstorage.md +++ b/docs/content/googlecloudstorage.md @@ -273,7 +273,7 @@ as they can't be used in JSON strings. 
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/googlecloudstorage/googlecloudstorage.go then run make backenddocs" >}}
### Standard options

-Here are the standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
+Here are the Standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).

#### --gcs-client-id
@@ -548,7 +548,7 @@ Properties:

### Advanced options

-Here are the advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
+Here are the Advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).

#### --gcs-token
@@ -587,6 +587,40 @@ Properties:
- Type: string
- Required: false

+#### --gcs-no-check-bucket
+
+If set, don't attempt to check the bucket exists or create it.
+
+This can be useful when trying to minimise the number of transactions
+rclone does if you know the bucket exists already.
+
+
+Properties:
+
+- Config: no_check_bucket
+- Env Var: RCLONE_GCS_NO_CHECK_BUCKET
+- Type: bool
+- Default: false
+
+#### --gcs-decompress
+
+If set this will decompress gzip encoded objects.
+
+It is possible to upload objects to GCS with "Content-Encoding: gzip"
+set. Normally rclone will download these files as compressed objects.
+
+If this flag is set then rclone will decompress these files with
+"Content-Encoding: gzip" as they are received. This means that rclone
+can't check the size and hash but the file contents will be decompressed.
+
+
+Properties:
+
+- Config: decompress
+- Env Var: RCLONE_GCS_DECOMPRESS
+- Type: bool
+- Default: false
+
#### --gcs-encoding

The encoding for the backend.
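The `--gcs-decompress` text above says objects stored with `Content-Encoding: gzip` are decompressed as they are received. The effect amounts to the following Python sketch (an illustration, not rclone's implementation; `object_contents` is a hypothetical name):

```python
import gzip

def object_contents(body, content_encoding, decompress):
    """Return downloaded object contents.  With decompression enabled and
    Content-Encoding: gzip, gunzip on receipt -- after which the stored
    size and hash no longer describe the returned data, which is why
    rclone can't check them in this mode."""
    if decompress and content_encoding == "gzip":
        return gzip.decompress(body)
    return body
```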
diff --git a/docs/content/googlephotos.md b/docs/content/googlephotos.md index 66ecabad4..da46f0156 100644 --- a/docs/content/googlephotos.md +++ b/docs/content/googlephotos.md @@ -224,7 +224,7 @@ This is similar to the Sharing tab in the Google Photos web interface. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/googlephotos/googlephotos.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to google photos (Google Photos). +Here are the Standard options specific to google photos (Google Photos). #### --gphotos-client-id @@ -268,7 +268,7 @@ Properties: ### Advanced options -Here are the advanced options specific to google photos (Google Photos). +Here are the Advanced options specific to google photos (Google Photos). #### --gphotos-token diff --git a/docs/content/hasher.md b/docs/content/hasher.md index 459dbe320..c92392af7 100644 --- a/docs/content/hasher.md +++ b/docs/content/hasher.md @@ -172,7 +172,7 @@ or by full re-read/re-write of the files. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/hasher/hasher.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to hasher (Better checksums for other remotes). +Here are the Standard options specific to hasher (Better checksums for other remotes). #### --hasher-remote @@ -209,7 +209,7 @@ Properties: ### Advanced options -Here are the advanced options specific to hasher (Better checksums for other remotes). +Here are the Advanced options specific to hasher (Better checksums for other remotes). #### --hasher-auto-size @@ -222,6 +222,12 @@ Properties: - Type: SizeSuffix - Default: 0 +### Metadata + +Any metadata supported by the underlying remote is read and written. + +See the [metadata](/docs/#metadata) docs for more info. + ## Backend commands Here are the commands specific to the hasher backend. 
@@ -232,7 +238,7 @@ Run them with The help below will explain what arguments each command takes. -See [the "rclone backend" command](/commands/rclone_backend/) for more +See the [backend](/commands/rclone_backend/) command for more info on how to pass options and arguments. These can be run on a running backend using the rc command diff --git a/docs/content/hdfs.md b/docs/content/hdfs.md index 5cf77fab5..d5c1d69e5 100644 --- a/docs/content/hdfs.md +++ b/docs/content/hdfs.md @@ -151,7 +151,7 @@ Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8). {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/hdfs/hdfs.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to hdfs (Hadoop distributed file system). +Here are the Standard options specific to hdfs (Hadoop distributed file system). #### --hdfs-namenode @@ -182,7 +182,7 @@ Properties: ### Advanced options -Here are the advanced options specific to hdfs (Hadoop distributed file system). +Here are the Advanced options specific to hdfs (Hadoop distributed file system). #### --hdfs-service-principal-name diff --git a/docs/content/hidrive.md b/docs/content/hidrive.md index 2d667a9e6..68375d1a2 100644 --- a/docs/content/hidrive.md +++ b/docs/content/hidrive.md @@ -193,7 +193,7 @@ See the below section about configuration options for more details. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/hidrive/hidrive.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to hidrive (HiDrive). +Here are the Standard options specific to hidrive (HiDrive). #### --hidrive-client-id @@ -239,7 +239,7 @@ Properties: ### Advanced options -Here are the advanced options specific to hidrive (HiDrive). +Here are the Advanced options specific to hidrive (HiDrive). 
#### --hidrive-token @@ -346,25 +346,6 @@ Properties: - Type: bool - Default: false -#### --hidrive-disable-unicode-normalization - -Do not apply Unicode "Normalization Form C" to remote paths. - -In Unicode there are multiple valid representations for the same abstract character. -They (should) result in the same visual appearance, but are represented by different byte-sequences. -This is known as canonical equivalence. - -In HiDrive paths are always represented as byte-sequences. -This means that two paths that are canonically equivalent (and therefore look the same) are treated as two distinct paths. -As this behaviour may be undesired, by default rclone will apply unicode normalization to paths it will access. - -Properties: - -- Config: disable_unicode_normalization -- Env Var: RCLONE_HIDRIVE_DISABLE_UNICODE_NORMALIZATION -- Type: bool -- Default: false - #### --hidrive-chunk-size Chunksize for chunked uploads. diff --git a/docs/content/http.md b/docs/content/http.md index 10ef35098..d98052c59 100644 --- a/docs/content/http.md +++ b/docs/content/http.md @@ -126,11 +126,11 @@ or: {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/http/http.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to http (http Connection). +Here are the Standard options specific to http (HTTP). #### --http-url -URL of http host to connect to. +URL of HTTP host to connect to. E.g. "https://example.com", or "https://user:pass@example.com" to use a username and password. @@ -143,7 +143,7 @@ Properties: ### Advanced options -Here are the advanced options specific to http (http Connection). +Here are the Advanced options specific to http (HTTP). #### --http-headers diff --git a/docs/content/hubic.md b/docs/content/hubic.md index 017e901d4..3000bc6af 100644 --- a/docs/content/hubic.md +++ b/docs/content/hubic.md @@ -109,7 +109,7 @@ are the same. 
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/hubic/hubic.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to hubic (Hubic). +Here are the Standard options specific to hubic (Hubic). #### --hubic-client-id @@ -139,7 +139,7 @@ Properties: ### Advanced options -Here are the advanced options specific to hubic (Hubic). +Here are the Advanced options specific to hubic (Hubic). #### --hubic-token diff --git a/docs/content/internetarchive.md b/docs/content/internetarchive.md index 622db4d60..1bdb05962 100644 --- a/docs/content/internetarchive.md +++ b/docs/content/internetarchive.md @@ -146,7 +146,7 @@ y/e/d> y {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/internetarchive/internetarchive.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to internetarchive (Internet Archive). +Here are the Standard options specific to internetarchive (Internet Archive). #### --internetarchive-access-key-id @@ -177,7 +177,7 @@ Properties: ### Advanced options -Here are the advanced options specific to internetarchive (Internet Archive). +Here are the Advanced options specific to internetarchive (Internet Archive). #### --internetarchive-endpoint @@ -246,4 +246,32 @@ Properties: - Type: MultiEncoder - Default: Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot +### Metadata + +Metadata fields provided by Internet Archive. +If there are multiple values for a key, only the first one is returned. +This is a limitation of Rclone, which supports only one value per key. + +The owner is able to add custom keys. The metadata feature grabs all keys, including custom ones. + +Here are the possible system metadata items for the internetarchive backend.
+ +| Name | Help | Type | Example | Read Only | +|------|------|------|---------|-----------| +| crc32 | CRC32 calculated by Internet Archive | string | 01234567 | N | +| format | Name of format identified by Internet Archive | string | Comma-Separated Values | N | +| md5 | MD5 hash calculated by Internet Archive | string | 01234567012345670123456701234567 | N | +| mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N | +| name | Full file path, without the bucket part | filename | backend/internetarchive/internetarchive.go | N | +| old_version | Whether the file was replaced and moved by keep-old-version flag | boolean | true | N | +| rclone-ia-mtime | Time of last modification, managed by Internet Archive | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N | +| rclone-mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N | +| rclone-update-track | Random value used by Rclone for tracking changes inside Internet Archive | string | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa | N | +| sha1 | SHA1 hash calculated by Internet Archive | string | 0123456701234567012345670123456701234567 | N | +| size | File size in bytes | decimal number | 123456 | N | +| source | The source of the file | string | original | N | +| viruscheck | The last time viruscheck process was run for the file (?) | unixtime | 1654191352 | N | + +See the [metadata](/docs/#metadata) docs for more info. + {{< rem autogenerated options stop >}} diff --git a/docs/content/jottacloud.md b/docs/content/jottacloud.md index ef7ea5586..7d815e8c4 100644 --- a/docs/content/jottacloud.md +++ b/docs/content/jottacloud.md @@ -266,7 +266,7 @@ and the current usage. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/jottacloud/jottacloud.go then run make backenddocs" >}} ### Advanced options -Here are the advanced options specific to jottacloud (Jottacloud). 
+Here are the Advanced options specific to jottacloud (Jottacloud). #### --jottacloud-md5-memory-limit diff --git a/docs/content/koofr.md b/docs/content/koofr.md index d1bd22976..2025d64a3 100644 --- a/docs/content/koofr.md +++ b/docs/content/koofr.md @@ -113,7 +113,7 @@ as they can't be used in XML strings. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/koofr/koofr.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers). +Here are the Standard options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers). #### --koofr-provider @@ -200,7 +200,7 @@ Properties: ### Advanced options -Here are the advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers). +Here are the Advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers). #### --koofr-mountid diff --git a/docs/content/local.md b/docs/content/local.md index c17a22c1a..f123aa74b 100644 --- a/docs/content/local.md +++ b/docs/content/local.md @@ -327,7 +327,7 @@ where it isn't supported (e.g. Windows) it will be ignored. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/local/local.go then run make backenddocs" >}} ### Advanced options -Here are the advanced options specific to local (Local Disk). +Here are the Advanced options specific to local (Local Disk). #### --local-nounc @@ -337,8 +337,8 @@ Properties: - Config: nounc - Env Var: RCLONE_LOCAL_NOUNC -- Type: string -- Required: false +- Type: bool +- Default: false - Examples: - "true" - Disables long file names. @@ -586,7 +586,6 @@ Here are the possible system metadata items for the local backend. 
| rdev | Device ID (if special file) | hexadecimal | 1abc | N | | uid | User ID of owner | decimal number | 500 | N | - See the [metadata](/docs/#metadata) docs for more info. ## Backend commands @@ -599,7 +598,7 @@ Run them with The help below will explain what arguments each command takes. -See [the "rclone backend" command](/commands/rclone_backend/) for more +See the [backend](/commands/rclone_backend/) command for more info on how to pass options and arguments. These can be run on a running backend using the rc command diff --git a/docs/content/mailru.md b/docs/content/mailru.md index 04a165a5c..96bd5aabb 100644 --- a/docs/content/mailru.md +++ b/docs/content/mailru.md @@ -156,7 +156,7 @@ as they can't be used in JSON strings. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/mailru/mailru.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to mailru (Mail.ru Cloud). +Here are the Standard options specific to mailru (Mail.ru Cloud). #### --mailru-user @@ -209,7 +209,7 @@ Properties: ### Advanced options -Here are the advanced options specific to mailru (Mail.ru Cloud). +Here are the Advanced options specific to mailru (Mail.ru Cloud). #### --mailru-speedup-file-patterns diff --git a/docs/content/mega.md b/docs/content/mega.md index 27882e342..cd26c7010 100644 --- a/docs/content/mega.md +++ b/docs/content/mega.md @@ -192,7 +192,7 @@ have got the remote blocked for a while. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/mega/mega.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to mega (Mega). +Here are the Standard options specific to mega (Mega). #### --mega-user @@ -220,7 +220,7 @@ Properties: ### Advanced options -Here are the advanced options specific to mega (Mega). +Here are the Advanced options specific to mega (Mega). 
#### --mega-debug diff --git a/docs/content/netstorage.md b/docs/content/netstorage.md index 428fcc361..0c1bef5bd 100644 --- a/docs/content/netstorage.md +++ b/docs/content/netstorage.md @@ -177,7 +177,7 @@ NetStorage remote supports the purge feature by using the "quick-delete" NetStor {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/netstorage/netstorage.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to netstorage (Akamai NetStorage). +Here are the Standard options specific to netstorage (Akamai NetStorage). #### --netstorage-host @@ -220,7 +220,7 @@ Properties: ### Advanced options -Here are the advanced options specific to netstorage (Akamai NetStorage). +Here are the Advanced options specific to netstorage (Akamai NetStorage). #### --netstorage-protocol @@ -251,7 +251,7 @@ Run them with The help below will explain what arguments each command takes. -See [the "rclone backend" command](/commands/rclone_backend/) for more +See the [backend](/commands/rclone_backend/) command for more info on how to pass options and arguments. These can be run on a running backend using the rc command @@ -277,10 +277,4 @@ the object that will be the target of the symlink (for example, /links/mylink). Include the file extension for the object, if applicable. `rclone backend symlink ` -## Support - -If you have any questions or issues, please contact [Akamai Technical Support -via Control Center or by -phone](https://control.akamai.com/apps/support-ui/#/contact-support). - {{< rem autogenerated options stop >}} diff --git a/docs/content/onedrive.md b/docs/content/onedrive.md index 36f3c89fb..b31415787 100644 --- a/docs/content/onedrive.md +++ b/docs/content/onedrive.md @@ -217,7 +217,7 @@ the OneDrive website. 
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/onedrive/onedrive.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to onedrive (Microsoft OneDrive). +Here are the Standard options specific to onedrive (Microsoft OneDrive). #### --onedrive-client-id @@ -267,7 +267,7 @@ Properties: ### Advanced options -Here are the advanced options specific to onedrive (Microsoft OneDrive). +Here are the Advanced options specific to onedrive (Microsoft OneDrive). #### --onedrive-token @@ -359,6 +359,28 @@ Properties: - Type: string - Required: false +#### --onedrive-access-scopes + +Set scopes to be requested by rclone. + +Choose or manually enter a custom space separated list with all scopes, that rclone should request. + + +Properties: + +- Config: access_scopes +- Env Var: RCLONE_ONEDRIVE_ACCESS_SCOPES +- Type: SpaceSepList +- Default: Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access +- Examples: + - "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access" + - Read and write access to all resources + - "Files.Read Files.Read.All Sites.Read.All offline_access" + - Read only access to all resources + - "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All offline_access" + - Read and write access to all resources, without the ability to browse SharePoint sites. + - Same as if disable_site_permission was set to true + #### --onedrive-disable-site-permission Disable the request for Sites.Read.All permission. diff --git a/docs/content/opendrive.md b/docs/content/opendrive.md index a39072d3f..772bc051f 100644 --- a/docs/content/opendrive.md +++ b/docs/content/opendrive.md @@ -102,7 +102,7 @@ as they can't be used in JSON strings. 
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/opendrive/opendrive.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to opendrive (OpenDrive). +Here are the Standard options specific to opendrive (OpenDrive). #### --opendrive-username @@ -130,7 +130,7 @@ Properties: ### Advanced options -Here are the advanced options specific to opendrive (OpenDrive). +Here are the Advanced options specific to opendrive (OpenDrive). #### --opendrive-encoding diff --git a/docs/content/pcloud.md b/docs/content/pcloud.md index eb11905dd..6e0c4458a 100644 --- a/docs/content/pcloud.md +++ b/docs/content/pcloud.md @@ -144,7 +144,7 @@ the `root_folder_id` in the config. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/pcloud/pcloud.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to pcloud (Pcloud). +Here are the Standard options specific to pcloud (Pcloud). #### --pcloud-client-id @@ -174,7 +174,7 @@ Properties: ### Advanced options -Here are the advanced options specific to pcloud (Pcloud). +Here are the Advanced options specific to pcloud (Pcloud). #### --pcloud-token diff --git a/docs/content/premiumizeme.md b/docs/content/premiumizeme.md index ed71c09ea..e8039764d 100644 --- a/docs/content/premiumizeme.md +++ b/docs/content/premiumizeme.md @@ -104,7 +104,7 @@ as they can't be used in JSON strings. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/premiumizeme/premiumizeme.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to premiumizeme (premiumize.me). +Here are the Standard options specific to premiumizeme (premiumize.me). #### --premiumizeme-api-key @@ -122,7 +122,7 @@ Properties: ### Advanced options -Here are the advanced options specific to premiumizeme (premiumize.me). 
+Here are the Advanced options specific to premiumizeme (premiumize.me). #### --premiumizeme-encoding diff --git a/docs/content/putio.md b/docs/content/putio.md index f80a1f3bd..d98da528e 100644 --- a/docs/content/putio.md +++ b/docs/content/putio.md @@ -111,7 +111,7 @@ as they can't be used in JSON strings. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/putio/putio.go then run make backenddocs" >}} ### Advanced options -Here are the advanced options specific to putio (Put.io). +Here are the Advanced options specific to putio (Put.io). #### --putio-encoding diff --git a/docs/content/qingstor.md b/docs/content/qingstor.md index 2cffe23b1..971e59bc6 100644 --- a/docs/content/qingstor.md +++ b/docs/content/qingstor.md @@ -144,7 +144,7 @@ as they can't be used in JSON strings. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/qingstor/qingstor.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to qingstor (QingCloud Object Storage). +Here are the Standard options specific to qingstor (QingCloud Object Storage). #### --qingstor-env-auth @@ -228,7 +228,7 @@ Properties: ### Advanced options -Here are the advanced options specific to qingstor (QingCloud Object Storage). +Here are the Advanced options specific to qingstor (QingCloud Object Storage). #### --qingstor-connection-retries diff --git a/docs/content/rc.md b/docs/content/rc.md index 8ea6b77d6..fc6316e51 100644 --- a/docs/content/rc.md +++ b/docs/content/rc.md @@ -544,6 +544,7 @@ This takes the following parameters: - state - state to restart with - used with continue - result - result to restart with - used with continue + See the [config create](/commands/rclone_config_create/) command for more information on the above. 
**Authentication is required for this call.** @@ -595,6 +596,7 @@ This takes the following parameters: - name - name of remote - parameters - a map of \{ "key": "value" \} pairs + See the [config password](/commands/rclone_config_password/) command for more information on the above. **Authentication is required for this call.** @@ -623,6 +625,7 @@ This takes the following parameters: - state - state to restart with - used with continue - result - result to restart with - used with continue + See the [config update](/commands/rclone_config_update/) command for more information on the above. **Authentication is required for this call.** @@ -1069,7 +1072,7 @@ This takes the following parameters: The result is as returned from rclone about --json -See the [about](/commands/rclone_size/) command for more information on the above. +See the [about](/commands/rclone_about/) command for more information on the above. **Authentication is required for this call.** @@ -1101,7 +1104,7 @@ This takes the following parameters: - fs - a remote name string e.g. "drive:" - remote - a path within that remote e.g. "dir" - url - string, URL to read from -- autoFilename - boolean, set to true to retrieve destination file name from url + - autoFilename - boolean, set to true to retrieve destination file name from url See the [copyurl](/commands/rclone_copyurl/) command for more information on the above. 
@@ -1138,46 +1141,103 @@ This returns info about the remote passed in; ``` { - // optional features and whether they are available or not - "Features": { - "About": true, - "BucketBased": false, - "CanHaveEmptyDirectories": true, - "CaseInsensitive": false, - "ChangeNotify": false, - "CleanUp": false, - "Copy": false, - "DirCacheFlush": false, - "DirMove": true, - "DuplicateFiles": false, - "GetTier": false, - "ListR": false, - "MergeDirs": false, - "Move": true, - "OpenWriterAt": true, - "PublicLink": false, - "Purge": true, - "PutStream": true, - "PutUnchecked": false, - "ReadMimeType": false, - "ServerSideAcrossConfigs": false, - "SetTier": false, - "SetWrapper": false, - "UnWrap": false, - "WrapFs": false, - "WriteMimeType": false - }, - // Names of hashes available - "Hashes": [ - "MD5", - "SHA-1", - "DropboxHash", - "QuickXorHash" - ], - "Name": "local", // Name as created - "Precision": 1, // Precision of timestamps in ns - "Root": "/", // Path as created - "String": "Local file system at /" // how the remote will appear in logs + // optional features and whether they are available or not + "Features": { + "About": true, + "BucketBased": false, + "BucketBasedRootOK": false, + "CanHaveEmptyDirectories": true, + "CaseInsensitive": false, + "ChangeNotify": false, + "CleanUp": false, + "Command": true, + "Copy": false, + "DirCacheFlush": false, + "DirMove": true, + "Disconnect": false, + "DuplicateFiles": false, + "GetTier": false, + "IsLocal": true, + "ListR": false, + "MergeDirs": false, + "MetadataInfo": true, + "Move": true, + "OpenWriterAt": true, + "PublicLink": false, + "Purge": true, + "PutStream": true, + "PutUnchecked": false, + "ReadMetadata": true, + "ReadMimeType": false, + "ServerSideAcrossConfigs": false, + "SetTier": false, + "SetWrapper": false, + "Shutdown": false, + "SlowHash": true, + "SlowModTime": false, + "UnWrap": false, + "UserInfo": false, + "UserMetadata": true, + "WrapFs": false, + "WriteMetadata": true, + "WriteMimeType": false + }, 
+ // Names of hashes available + "Hashes": [ + "md5", + "sha1", + "whirlpool", + "crc32", + "sha256", + "dropbox", + "mailru", + "quickxor" + ], + "Name": "local", // Name as created + "Precision": 1, // Precision of timestamps in ns + "Root": "/", // Path as created + "String": "Local file system at /", // how the remote will appear in logs + // Information about the system metadata for this backend + "MetadataInfo": { + "System": { + "atime": { + "Help": "Time of last access", + "Type": "RFC 3339", + "Example": "2006-01-02T15:04:05.999999999Z07:00" + }, + "btime": { + "Help": "Time of file birth (creation)", + "Type": "RFC 3339", + "Example": "2006-01-02T15:04:05.999999999Z07:00" + }, + "gid": { + "Help": "Group ID of owner", + "Type": "decimal number", + "Example": "500" + }, + "mode": { + "Help": "File type and mode", + "Type": "octal, unix style", + "Example": "0100664" + }, + "mtime": { + "Help": "Time of last modification", + "Type": "RFC 3339", + "Example": "2006-01-02T15:04:05.999999999Z07:00" + }, + "rdev": { + "Help": "Device ID (if special file)", + "Type": "hexadecimal", + "Example": "1abc" + }, + "uid": { + "Help": "User ID of owner", + "Type": "decimal number", + "Example": "500" + } + }, + "Help": "Textual help string\n" + } } ``` @@ -1200,6 +1260,7 @@ This takes the following parameters: - noMimeType - If set don't show mime types - dirsOnly - If set only show directories - filesOnly - If set only show files + - metadata - If set return metadata of objects also - hashTypes - array of strings of hash types to show if showHash set Returns: @@ -1207,7 +1268,7 @@ Returns: - list - This is an array of objects as described in the lsjson command -See the [lsjson](/commands/rclone_lsjson/) for more information on the above and examples. +See the [lsjson](/commands/rclone_lsjson/) command for more information on the above and examples. 
**Authentication is required for this call.** @@ -1294,7 +1355,6 @@ Returns: - count - number of files - bytes - number of bytes in those files -- sizeless - number of files with unknown size, included in count but not accounted for in bytes See the [size](/commands/rclone_size/) command for more information on the above. @@ -1316,7 +1376,7 @@ The result is Note that if you are only interested in files then it is much more efficient to set the filesOnly flag in the options. -See the [lsjson](/commands/rclone_lsjson/) for more information on the above and examples. +See the [lsjson](/commands/rclone_lsjson/) command for more information on the above and examples. **Authentication is required for this call.** @@ -1542,6 +1602,7 @@ This takes the following parameters: - dstFs - a remote name string e.g. "drive:dst" for the destination - createEmptySrcDirs - create empty src directories on destination if set + See the [copy](/commands/rclone_copy/) command for more information on the above. **Authentication is required for this call.** @@ -1555,6 +1616,7 @@ This takes the following parameters: - createEmptySrcDirs - create empty src directories on destination if set - deleteEmptySrcDirs - delete empty src directories if set + See the [move](/commands/rclone_move/) command for more information on the above. 
**Authentication is required for this call.** diff --git a/docs/content/s3.md b/docs/content/s3.md index ddf9fd23d..e68b2f52a 100644 --- a/docs/content/s3.md +++ b/docs/content/s3.md @@ -13,7 +13,7 @@ The S3 backend can be used with a number of different providers: {{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}} {{< provider name="China Mobile Ecloud Elastic Object Storage (EOS)" home="https://ecloud.10086.cn/home/product-introduction/eos/" config="/s3/#china-mobile-ecloud-eos" >}} {{< provider name="Cloudflare R2" home="https://blog.cloudflare.com/r2-open-beta/" config="/s3/#cloudflare-r2" >}} -{{< provider name="Arvan Cloud Object Storage (AOS)" home="https://www.arvancloud.com/en/products/cloud-storage" config="/s3/#arvan-cloud-object-storage-aos" >}} +{{< provider name="Arvan Cloud Object Storage (AOS)" home="https://www.arvancloud.com/en/products/cloud-storage" config="/s3/#arvan-cloud" >}} {{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}} {{< provider name="Dreamhost" home="https://www.dreamhost.com/cloud/storage/" config="/s3/#dreamhost" >}} {{< provider name="Huawei OBS" home="https://www.huaweicloud.com/intl/en-us/product/obs.html" config="/s3/#huawei-obs" >}} @@ -571,7 +571,7 @@ A simple solution is to set the `--s3-upload-cutoff 0` and force all the files t {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS). 
+Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi). #### --s3-provider @@ -592,6 +592,8 @@ Properties: - Ceph Object Storage - "ChinaMobile" - China Mobile Ecloud Elastic Object Storage (EOS) + - "Cloudflare" + - Cloudflare R2 Storage - "ArvanCloud" - Arvan Cloud Object Storage (AOS) - "DigitalOcean" @@ -828,6 +830,67 @@ Properties: - Amsterdam, The Netherlands - "fr-par" - Paris, France + - "pl-waw" + - Warsaw, Poland + +#### --s3-region + +Region to connect to - the location where your bucket will be created and your data stored. Needs to be the same as your endpoint. + + +Properties: + +- Config: region +- Env Var: RCLONE_S3_REGION +- Provider: HuaweiOBS +- Type: string +- Required: false +- Examples: + - "af-south-1" + - AF-Johannesburg + - "ap-southeast-2" + - AP-Bangkok + - "ap-southeast-3" + - AP-Singapore + - "cn-east-3" + - CN East-Shanghai1 + - "cn-east-2" + - CN East-Shanghai2 + - "cn-north-1" + - CN North-Beijing1 + - "cn-north-4" + - CN North-Beijing4 + - "cn-south-1" + - CN South-Guangzhou + - "ap-southeast-1" + - CN-Hong Kong + - "sa-argentina-1" + - LA-Buenos Aires1 + - "sa-peru-1" + - LA-Lima1 + - "na-mexico-1" + - LA-Mexico City1 + - "sa-chile-1" + - LA-Santiago2 + - "sa-brazil-1" + - LA-Sao Paulo1 + - "ru-northwest-2" + - RU-Moscow2 + +#### --s3-region + +Region to connect to. + +Properties: + +- Config: region +- Env Var: RCLONE_S3_REGION +- Provider: Cloudflare +- Type: string +- Required: false +- Examples: + - "auto" + - R2 buckets are automatically distributed across Cloudflare's data centers for low latency.
#### --s3-region @@ -839,7 +902,7 @@ Properties: - Config: region - Env Var: RCLONE_S3_REGION -- Provider: !AWS,Alibaba,ChinaMobile,ArvanCloud,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive +- Provider: !AWS,Alibaba,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive - Type: string - Required: false - Examples: @@ -868,6 +931,8 @@ Properties: Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API. +Properties: + - Config: endpoint - Env Var: RCLONE_S3_ENDPOINT - Provider: ChinaMobile @@ -925,7 +990,7 @@ Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API. - Gansu China (Lanzhou) - "eos-shanxi-1.cmecloud.cn" - Shanxi China (Taiyuan) - - eos-liaoning-1.cmecloud.cn" + - "eos-liaoning-1.cmecloud.cn" - Liaoning China (Shenyang) - "eos-hebei-1.cmecloud.cn" - Hebei China (Shijiazhuang) @@ -940,6 +1005,8 @@ Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API. Endpoint for Arvan Cloud Object Storage (AOS) API. +Properties: + - Config: endpoint - Env Var: RCLONE_S3_ENDPOINT - Provider: ArvanCloud @@ -952,50 +1019,6 @@ Endpoint for Arvan Cloud Object Storage (AOS) API. - "s3.ir-tbz-sh1.arvanstorage.com" - Tabriz Iran (Shahriar) -#### --s3-endpoint - -Endpoint for Huawei Cloud Object Storage Service (OBS) API. 
- -- Config: endpoint -- Env Var: RCLONE_S3_ENDPOINT -- Provider: HuaweiOBS -- Type: string -- Required: false -- Examples: - - "obs.af-south-1.myhuaweicloud.com" - - AF-Johannesburg Endpoint - - "obs.ap-southeast-2.myhuaweicloud.com" - - AP-Bangkok Endpoint - - "obs.ap-southeast-3.myhuaweicloud.com" - - AP-Singapore Endpoint - - "obs.cn-east-3.myhuaweicloud.com" - - CN East-Shanghai1 Endpoint - - "obs.cn-east-2.myhuaweicloud.com" - - CN East-Shanghai2 Endpoint - - "obs.cn-north-1.myhuaweicloud.com" - - CN North-Beijing1 Endpoint - - "obs.cn-north-4.myhuaweicloud.com" - - CN North-Beijing4 Endpoint - - "obs.cn-south-1.myhuaweicloud.com" - - CN South-Guangzhou Endpoint - - "obs.ap-southeast-1.myhuaweicloud.com" - - CN-Hong Kong Endpoint - - "obs.sa-argentina-1.myhuaweicloud.com" - - LA-Buenos Aires1 Endpoint - - "obs.sa-peru-1.myhuaweicloud.com" - - LA-Lima1 Endpoint - - "obs.na-mexico-1.myhuaweicloud.com" - - LA-Mexico City1 Endpoint - - "obs.sa-chile-1.myhuaweicloud.com" - - LA-Santiago2 Endpoint - - "obs.sa-brazil-1.myhuaweicloud.com" - - LA-Sao Paulo1 Endpoint - - "obs.ru-northwest-2.myhuaweicloud.com" - - RU-Moscow2 Endpoint - - - - #### --s3-endpoint Endpoint for IBM COS S3 API. @@ -1200,6 +1223,49 @@ Properties: #### --s3-endpoint +Endpoint for OBS API. 
+ +Properties: + +- Config: endpoint +- Env Var: RCLONE_S3_ENDPOINT +- Provider: HuaweiOBS +- Type: string +- Required: false +- Examples: + - "obs.af-south-1.myhuaweicloud.com" + - AF-Johannesburg + - "obs.ap-southeast-2.myhuaweicloud.com" + - AP-Bangkok + - "obs.ap-southeast-3.myhuaweicloud.com" + - AP-Singapore + - "obs.cn-east-3.myhuaweicloud.com" + - CN East-Shanghai1 + - "obs.cn-east-2.myhuaweicloud.com" + - CN East-Shanghai2 + - "obs.cn-north-1.myhuaweicloud.com" + - CN North-Beijing1 + - "obs.cn-north-4.myhuaweicloud.com" + - CN North-Beijing4 + - "obs.cn-south-1.myhuaweicloud.com" + - CN South-Guangzhou + - "obs.ap-southeast-1.myhuaweicloud.com" + - CN-Hong Kong + - "obs.sa-argentina-1.myhuaweicloud.com" + - LA-Buenos Aires1 + - "obs.sa-peru-1.myhuaweicloud.com" + - LA-Lima1 + - "obs.na-mexico-1.myhuaweicloud.com" + - LA-Mexico City1 + - "obs.sa-chile-1.myhuaweicloud.com" + - LA-Santiago2 + - "obs.sa-brazil-1.myhuaweicloud.com" + - LA-Sao Paulo1 + - "obs.ru-northwest-2.myhuaweicloud.com" + - RU-Moscow2 + +#### --s3-endpoint + Endpoint for Scaleway Object Storage. Properties: @@ -1214,6 +1280,8 @@ Properties: - Amsterdam Endpoint - "s3.fr-par.scw.cloud" - Paris Endpoint + - "s3.pl-waw.scw.cloud" + - Warsaw Endpoint #### --s3-endpoint @@ -1365,7 +1433,7 @@ Properties: - Config: endpoint - Env Var: RCLONE_S3_ENDPOINT -- Provider: !AWS,IBMCOS,IDrive,TencentCOS,Alibaba,ChinaMobile,ArvanCloud,Scaleway,StackPath,Storj,RackCorp,HuaweiOBS +- Provider: !AWS,IBMCOS,IDrive,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,ArvanCloud,Scaleway,StackPath,Storj,RackCorp - Type: string - Required: false - Examples: @@ -1465,6 +1533,100 @@ Properties: #### --s3-location-constraint +Location constraint - must match endpoint. + +Used when creating buckets only. 
+ +Properties: + +- Config: location_constraint +- Env Var: RCLONE_S3_LOCATION_CONSTRAINT +- Provider: ChinaMobile +- Type: string +- Required: false +- Examples: + - "wuxi1" + - East China (Suzhou) + - "jinan1" + - East China (Jinan) + - "ningbo1" + - East China (Hangzhou) + - "shanghai1" + - East China (Shanghai-1) + - "zhengzhou1" + - Central China (Zhengzhou) + - "hunan1" + - Central China (Changsha-1) + - "zhuzhou1" + - Central China (Changsha-2) + - "guangzhou1" + - South China (Guangzhou-2) + - "dongguan1" + - South China (Guangzhou-3) + - "beijing1" + - North China (Beijing-1) + - "beijing2" + - North China (Beijing-2) + - "beijing4" + - North China (Beijing-3) + - "huhehaote1" + - North China (Huhehaote) + - "chengdu1" + - Southwest China (Chengdu) + - "chongqing1" + - Southwest China (Chongqing) + - "guiyang1" + - Southwest China (Guiyang) + - "xian1" + - Northwest China (Xian) + - "yunnan" + - Yunnan China (Kunming) + - "yunnan2" + - Yunnan China (Kunming-2) + - "tianjin1" + - Tianjin China (Tianjin) + - "jilin1" + - Jilin China (Changchun) + - "hubei1" + - Hubei China (Xiangyan) + - "jiangxi1" + - Jiangxi China (Nanchang) + - "gansu1" + - Gansu China (Lanzhou) + - "shanxi1" + - Shanxi China (Taiyuan) + - "liaoning1" + - Liaoning China (Shenyang) + - "hebei1" + - Hebei China (Shijiazhuang) + - "fujian1" + - Fujian China (Xiamen) + - "guangxi1" + - Guangxi China (Nanning) + - "anhui1" + - Anhui China (Huainan) + +#### --s3-location-constraint + +Location constraint - must match endpoint. + +Used when creating buckets only. + +Properties: + +- Config: location_constraint +- Env Var: RCLONE_S3_LOCATION_CONSTRAINT +- Provider: ArvanCloud +- Type: string +- Required: false +- Examples: + - "ir-thr-at1" + - Tehran Iran (Asiatech) + - "ir-tbz-sh1" + - Tabriz Iran (Shahriar) + +#### --s3-location-constraint + +Location constraint - must match endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list, hit enter.
@@ -1604,7 +1766,7 @@ Properties: - Config: location_constraint - Env Var: RCLONE_S3_LOCATION_CONSTRAINT -- Provider: !AWS,IBMCOS,IDrive,Alibaba,ChinaMobile,ArvanCloud,RackCorp,Scaleway,StackPath,Storj,TencentCOS,HuaweiOBS +- Provider: !AWS,IBMCOS,IDrive,Alibaba,HuaweiOBS,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,StackPath,Storj,TencentCOS - Type: string - Required: false @@ -1623,7 +1785,7 @@ Properties: - Config: acl - Env Var: RCLONE_S3_ACL -- Provider: !Storj +- Provider: !Storj,Cloudflare - Type: string - Required: false - Examples: @@ -1676,7 +1838,7 @@ Properties: - Config: server_side_encryption - Env Var: RCLONE_S3_SERVER_SIDE_ENCRYPTION -- Provider: AWS,Ceph,ChinaMobile,ArvanCloud,Minio +- Provider: AWS,Ceph,ChinaMobile,Minio - Type: string - Required: false - Examples: @@ -1760,6 +1922,8 @@ Properties: The storage class to use when storing new objects in ChinaMobile. +Properties: + - Config: storage_class - Env Var: RCLONE_S3_STORAGE_CLASS - Provider: ChinaMobile @@ -1779,6 +1943,8 @@ The storage class to use when storing new objects in ChinaMobile. The storage class to use when storing new objects in ArvanCloud. +Properties: + - Config: storage_class - Env Var: RCLONE_S3_STORAGE_CLASS - Provider: ArvanCloud @@ -1832,7 +1998,7 @@ Properties: ### Advanced options -Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS). +Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi). 
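As the `Env Var:` lines throughout this section indicate, each config key maps to an environment variable named `RCLONE_S3_<UPPERCASE_KEY>`, which can override the config file for a single run. A minimal sketch (the values are placeholders):

```shell
# Illustrative only: override provider options for one invocation
# without editing rclone.conf. Each config key maps to
# RCLONE_S3_<UPPERCASE_KEY>.
export RCLONE_S3_ACL=private
export RCLONE_S3_STORAGE_CLASS=STANDARD
echo "acl=$RCLONE_S3_ACL storage_class=$RCLONE_S3_STORAGE_CLASS"
# rclone copy /local/dir remote:bucket   # would pick both values up
```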
#### --s3-bucket-acl

@@ -1884,7 +2050,7 @@ Properties:

- Config: sse_customer_algorithm
- Env Var: RCLONE_S3_SSE_CUSTOMER_ALGORITHM
-- Provider: AWS,Ceph,ChinaMobile,ArvanCloud,Minio
+- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
@@ -1901,7 +2067,7 @@ Properties:

- Config: sse_customer_key
- Env Var: RCLONE_S3_SSE_CUSTOMER_KEY
-- Provider: AWS,Ceph,ChinaMobile,ArvanCloud,Minio
+- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
@@ -1919,7 +2085,7 @@ Properties:

- Config: sse_customer_key_md5
- Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_MD5
-- Provider: AWS,Ceph,ChinaMobile,ArvanCloud,Minio
+- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
@@ -1964,6 +2130,13 @@ most 10,000 chunks, this means that by default the maximum size of a
file you can stream upload is 48 GiB. If you wish to stream upload
larger files then you will need to increase chunk_size.

+Increasing the chunk size decreases the accuracy of the progress
+statistics displayed with the "-P" flag. Rclone treats a chunk as sent
+when it has been buffered by the AWS SDK, when in fact it may still be
+uploading. A bigger chunk size means a bigger AWS SDK buffer and
+progress reporting that deviates further from the truth.
+
+
Properties:

- Config: chunk_size
@@ -2369,6 +2542,26 @@ Properties:
- Type: Tristate
- Default: unset

+#### --s3-use-presigned-request
+
+Whether to use a presigned request or PutObject for single part uploads.
+
+If this is false, rclone will use PutObject from the AWS SDK to upload
+an object.
+
+Versions of rclone < 1.59 use presigned requests to upload a single
+part object, and setting this flag to true will re-enable that
+functionality. This shouldn't be necessary except in exceptional
+circumstances or for testing.
+
+
+Properties:
+
+- Config: use_presigned_request
+- Env Var: RCLONE_S3_USE_PRESIGNED_REQUEST
+- Type: bool
+- Default: false
+
### Metadata

User metadata is stored as x-amz-meta- keys.
S3 metadata keys are case insensitive and are always returned in lower case. @@ -2386,7 +2579,6 @@ Here are the possible system metadata items for the s3 backend. | mtime | Time of last modification, read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N | | tier | Tier of the object | string | GLACIER | **Y** | - See the [metadata](/docs/#metadata) docs for more info. ## Backend commands @@ -2399,7 +2591,7 @@ Run them with The help below will explain what arguments each command takes. -See [the "rclone backend" command](/commands/rclone_backend/) for more +See the [backend](/commands/rclone_backend/) command for more info on how to pass options and arguments. These can be run on a running backend using the rc command @@ -3991,7 +4183,7 @@ d) Delete this remote y/e/d> y ``` -### ArvanCloud +### ArvanCloud {#arvan-cloud} [ArvanCloud](https://www.arvancloud.com/en/products/cloud-storage) ArvanCloud Object Storage goes beyond the limited traditional file storage. It gives you access to backup and archived files and allows sharing. diff --git a/docs/content/seafile.md b/docs/content/seafile.md index 407de6f41..132a405a1 100644 --- a/docs/content/seafile.md +++ b/docs/content/seafile.md @@ -266,7 +266,7 @@ Versions between 6.0 and 6.3 haven't been tested and might not work properly. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/seafile/seafile.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to seafile (seafile). +Here are the Standard options specific to seafile (seafile). #### --seafile-url @@ -358,7 +358,7 @@ Properties: ### Advanced options -Here are the advanced options specific to seafile (seafile). +Here are the Advanced options specific to seafile (seafile). 
#### --seafile-create-library diff --git a/docs/content/sftp.md b/docs/content/sftp.md index ed5eec9de..8f3066a93 100644 --- a/docs/content/sftp.md +++ b/docs/content/sftp.md @@ -388,7 +388,7 @@ with a Windows OpenSSH server, rclone will use a built-in shell command {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/sftp/sftp.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to sftp (SSH/SFTP Connection). +Here are the Standard options specific to sftp (SSH/SFTP). #### --sftp-host @@ -514,7 +514,7 @@ Properties: #### --sftp-use-insecure-cipher -Enable the use of insecure ciphers and key exchange methods. +Enable the use of insecure ciphers and key exchange methods. This enables the use of the following insecure ciphers and key exchange methods: @@ -554,7 +554,7 @@ Properties: ### Advanced options -Here are the advanced options specific to sftp (SSH/SFTP Connection). +Here are the Advanced options specific to sftp (SSH/SFTP). #### --sftp-known-hosts-file @@ -592,16 +592,16 @@ Properties: #### --sftp-path-override -Override path used by SSH connection. +Override path used by SSH shell commands. This allows checksum calculation when SFTP and SSH paths are different. This issue affects among others Synology NAS boxes. -Shared folders can be found in directories representing volumes +E.g. if shared folders can be found in directories representing volumes: rclone sync /home/local/directory remote:/directory --sftp-path-override /volume2/directory -Home directory can be found in a shared folder called "home" +E.g. if home directory can be found in a shared folder called "home": rclone sync /home/local/directory remote:/home/directory --sftp-path-override /volume1/homes/USER/directory @@ -623,6 +623,28 @@ Properties: - Type: bool - Default: true +#### --sftp-shell-type + +The type of SSH shell on remote server, if any. + +Leave blank for autodetect. 
+ +Properties: + +- Config: shell_type +- Env Var: RCLONE_SFTP_SHELL_TYPE +- Type: string +- Required: false +- Examples: + - "none" + - No shell access + - "unix" + - Unix shell + - "powershell" + - PowerShell + - "cmd" + - Windows Command Prompt + #### --sftp-md5sum-command The command used to read md5 hashes. @@ -763,6 +785,75 @@ Properties: - Type: Duration - Default: 1m0s +#### --sftp-chunk-size + +Upload and download chunk size. + +This controls the maximum packet size used in the SFTP protocol. The +RFC limits this to 32768 bytes (32k), however a lot of servers +support larger sizes and setting it larger will increase transfer +speed dramatically on high latency links. + +Only use a setting higher than 32k if you always connect to the same +server or after sufficiently broad testing. + +For example using the value of 252k with OpenSSH works well with its +maximum packet size of 256k. + +If you get the error "failed to send packet header: EOF" when copying +a large file, try lowering this number. + + +Properties: + +- Config: chunk_size +- Env Var: RCLONE_SFTP_CHUNK_SIZE +- Type: SizeSuffix +- Default: 32Ki + +#### --sftp-concurrency + +The maximum number of outstanding requests for one file + +This controls the maximum number of outstanding requests for one file. +Increasing it will increase throughput on high latency links at the +cost of using more memory. + + +Properties: + +- Config: concurrency +- Env Var: RCLONE_SFTP_CONCURRENCY +- Type: int +- Default: 64 + +#### --sftp-set-env + +Environment variables to pass to sftp and commands + +Set environment variables in the form: + + VAR=value + +to be passed to the sftp client and to any commands run (eg md5sum). 
+ +Pass multiple variables space separated, eg + + VAR1=value VAR2=value + +and pass variables with spaces in in quotes, eg + + "VAR3=value with space" "VAR4=value with space" VAR5=nospacehere + + + +Properties: + +- Config: set_env +- Env Var: RCLONE_SFTP_SET_ENV +- Type: SpaceSepList +- Default: + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/sharefile.md b/docs/content/sharefile.md index c33651b90..19cdc9397 100644 --- a/docs/content/sharefile.md +++ b/docs/content/sharefile.md @@ -150,7 +150,7 @@ as they can't be used in JSON strings. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/sharefile/sharefile.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to sharefile (Citrix Sharefile). +Here are the Standard options specific to sharefile (Citrix Sharefile). #### --sharefile-root-folder-id @@ -179,7 +179,7 @@ Properties: ### Advanced options -Here are the advanced options specific to sharefile (Citrix Sharefile). +Here are the Advanced options specific to sharefile (Citrix Sharefile). #### --sharefile-upload-cutoff diff --git a/docs/content/sia.md b/docs/content/sia.md index 0d3d92031..9cdb24b25 100644 --- a/docs/content/sia.md +++ b/docs/content/sia.md @@ -132,7 +132,7 @@ rclone copy /home/source mySia:backup {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/sia/sia.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to sia (Sia Decentralized Cloud). +Here are the Standard options specific to sia (Sia Decentralized Cloud). #### --sia-api-url @@ -165,7 +165,7 @@ Properties: ### Advanced options -Here are the advanced options specific to sia (Sia Decentralized Cloud). +Here are the Advanced options specific to sia (Sia Decentralized Cloud). 
#### --sia-user-agent diff --git a/docs/content/storj.md b/docs/content/storj.md index 836718e38..6a8e41623 100644 --- a/docs/content/storj.md +++ b/docs/content/storj.md @@ -215,7 +215,7 @@ y/e/d> y {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/storj/storj.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to storj (Storj Decentralized Cloud Storage). +Here are the Standard options specific to storj (Storj Decentralized Cloud Storage). #### --storj-provider diff --git a/docs/content/sugarsync.md b/docs/content/sugarsync.md index 8e6655723..58c163b32 100644 --- a/docs/content/sugarsync.md +++ b/docs/content/sugarsync.md @@ -123,7 +123,7 @@ deleted straight away. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/sugarsync/sugarsync.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to sugarsync (Sugarsync). +Here are the Standard options specific to sugarsync (Sugarsync). #### --sugarsync-app-id @@ -178,7 +178,7 @@ Properties: ### Advanced options -Here are the advanced options specific to sugarsync (Sugarsync). +Here are the Advanced options specific to sugarsync (Sugarsync). #### --sugarsync-refresh-token diff --git a/docs/content/swift.md b/docs/content/swift.md index 870179ca0..70cde652f 100644 --- a/docs/content/swift.md +++ b/docs/content/swift.md @@ -245,7 +245,7 @@ as they can't be used in JSON strings. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/swift/swift.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)). +Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)). 
#### --swift-env-auth @@ -485,7 +485,7 @@ Properties: ### Advanced options -Here are the advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)). +Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)). #### --swift-leave-parts-on-error diff --git a/docs/content/union.md b/docs/content/union.md index be0f81007..d7f37fa71 100644 --- a/docs/content/union.md +++ b/docs/content/union.md @@ -174,7 +174,7 @@ The policies definition are inspired by [trapexit/mergerfs](https://github.com/t {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/union/union.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to union (Union merges the contents of several upstream fs). +Here are the Standard options specific to union (Union merges the contents of several upstream fs). #### --union-upstreams @@ -235,4 +235,28 @@ Properties: - Type: int - Default: 120 +### Advanced options + +Here are the Advanced options specific to union (Union merges the contents of several upstream fs). + +#### --union-min-free-space + +Minimum viable free space for lfs/eplfs policies. + +If a remote has less than this much free space then it won't be +considered for use in lfs or eplfs policies. + +Properties: + +- Config: min_free_space +- Env Var: RCLONE_UNION_MIN_FREE_SPACE +- Type: SizeSuffix +- Default: 1Gi + +### Metadata + +Any metadata supported by the underlying remote is read and written. + +See the [metadata](/docs/#metadata) docs for more info. + {{< rem autogenerated options stop >}} diff --git a/docs/content/uptobox.md b/docs/content/uptobox.md index 626c96ffd..83f63916b 100644 --- a/docs/content/uptobox.md +++ b/docs/content/uptobox.md @@ -101,7 +101,7 @@ as they can't be used in XML strings. 
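The union `min_free_space` option above only has an effect together with a free-space-aware policy. A hypothetical rclone.conf sketch (remote names are placeholders):

```ini
# Hypothetical union of two upstreams. With create_policy = lfs
# (least free space), an upstream reporting less than min_free_space
# of free space is not considered for new writes.
[pool]
type = union
upstreams = disk1: disk2:
create_policy = lfs
min_free_space = 10Gi
```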
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/uptobox/uptobox.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to uptobox (Uptobox). +Here are the Standard options specific to uptobox (Uptobox). #### --uptobox-access-token @@ -118,7 +118,7 @@ Properties: ### Advanced options -Here are the advanced options specific to uptobox (Uptobox). +Here are the Advanced options specific to uptobox (Uptobox). #### --uptobox-encoding diff --git a/docs/content/webdav.md b/docs/content/webdav.md index 766a5cc8c..9780941eb 100644 --- a/docs/content/webdav.md +++ b/docs/content/webdav.md @@ -110,7 +110,7 @@ with them. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/webdav/webdav.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to webdav (Webdav). +Here are the Standard options specific to webdav (WebDAV). #### --webdav-url @@ -127,7 +127,7 @@ Properties: #### --webdav-vendor -Name of the Webdav site/service/software you are using. +Name of the WebDAV site/service/software you are using. Properties: @@ -186,7 +186,7 @@ Properties: ### Advanced options -Here are the advanced options specific to webdav (Webdav). +Here are the Advanced options specific to webdav (WebDAV). #### --webdav-bearer-token-command diff --git a/docs/content/yandex.md b/docs/content/yandex.md index d61093ff2..37176a40a 100644 --- a/docs/content/yandex.md +++ b/docs/content/yandex.md @@ -116,7 +116,7 @@ as they can't be used in JSON strings. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/yandex/yandex.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to yandex (Yandex Disk). +Here are the Standard options specific to yandex (Yandex Disk). 
#### --yandex-client-id @@ -146,7 +146,7 @@ Properties: ### Advanced options -Here are the advanced options specific to yandex (Yandex Disk). +Here are the Advanced options specific to yandex (Yandex Disk). #### --yandex-token diff --git a/docs/content/zoho.md b/docs/content/zoho.md index ba155dfed..187eb5ff1 100644 --- a/docs/content/zoho.md +++ b/docs/content/zoho.md @@ -127,7 +127,7 @@ from filenames during upload. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/zoho/zoho.go then run make backenddocs" >}} ### Standard options -Here are the standard options specific to zoho (Zoho). +Here are the Standard options specific to zoho (Zoho). #### --zoho-client-id @@ -176,12 +176,16 @@ Properties: - Europe - "in" - India + - "jp" + - Japan + - "com.cn" + - China - "com.au" - Australia ### Advanced options -Here are the advanced options specific to zoho (Zoho). +Here are the Advanced options specific to zoho (Zoho). #### --zoho-token diff --git a/rclone.1 b/rclone.1 index 693b003af..6b3c7fd37 100644 --- a/rclone.1 +++ b/rclone.1 @@ -1,7 +1,7 @@ .\"t .\" Automatically generated by Pandoc 2.9.2.1 .\" -.TH "rclone" "1" "Mar 18, 2022" "User Manual" "" +.TH "rclone" "1" "Jul 09, 2022" "User Manual" "" .hy .SH Rclone syncs your files to cloud storage .PP @@ -133,7 +133,7 @@ a network disk .IP \[bu] 2 Serve (https://rclone.org/commands/rclone_serve/) local or remote files over -HTTP (https://rclone.org/commands/rclone_serve_http/)/WebDav (https://rclone.org/commands/rclone_serve_webdav/)/FTP (https://rclone.org/commands/rclone_serve_ftp/)/SFTP (https://rclone.org/commands/rclone_serve_sftp/)/dlna (https://rclone.org/commands/rclone_serve_dlna/) +HTTP (https://rclone.org/commands/rclone_serve_http/)/WebDav (https://rclone.org/commands/rclone_serve_webdav/)/FTP (https://rclone.org/commands/rclone_serve_ftp/)/SFTP (https://rclone.org/commands/rclone_serve_sftp/)/DLNA (https://rclone.org/commands/rclone_serve_dlna/) .IP \[bu] 2 Experimental 
Web based GUI (https://rclone.org/gui/) .SS Supported providers @@ -157,10 +157,16 @@ Box .IP \[bu] 2 Ceph .IP \[bu] 2 +China Mobile Ecloud Elastic Object Storage (EOS) +.IP \[bu] 2 +Arvan Cloud Object Storage (AOS) +.IP \[bu] 2 Citrix ShareFile .IP \[bu] 2 C14 .IP \[bu] 2 +Cloudflare R2 +.IP \[bu] 2 DigitalOcean Spaces .IP \[bu] 2 Digi Storage @@ -181,14 +187,22 @@ Google Photos .IP \[bu] 2 HDFS .IP \[bu] 2 +Hetzner Storage Box +.IP \[bu] 2 +HiDrive +.IP \[bu] 2 HTTP .IP \[bu] 2 Hubic .IP \[bu] 2 +Internet Archive +.IP \[bu] 2 Jottacloud .IP \[bu] 2 IBM COS S3 .IP \[bu] 2 +IDrive e2 +.IP \[bu] 2 Koofr .IP \[bu] 2 Mail.ru Cloud @@ -260,8 +274,26 @@ Yandex Disk Zoho WorkDrive .IP \[bu] 2 The local filesystem +.SS Virtual providers .PP -Links +These backends adapt or modify other storage providers: +.IP \[bu] 2 +Alias: Rename existing remotes +.IP \[bu] 2 +Cache: Cache remotes (DEPRECATED) +.IP \[bu] 2 +Chunker: Split large files +.IP \[bu] 2 +Combine: Combine multiple remotes into a directory tree +.IP \[bu] 2 +Compress: Compress files +.IP \[bu] 2 +Crypt: Encrypt files +.IP \[bu] 2 +Hasher: Hash files +.IP \[bu] 2 +Union: Join multiple remotes to work together +.SS Links .IP \[bu] 2 Home page (https://rclone.org/) .IP \[bu] 2 @@ -300,7 +332,7 @@ To install rclone on Linux/macOS/BSD systems, run: .IP .nf \f[C] -curl https://rclone.org/install.sh | sudo bash +sudo -v ; curl https://rclone.org/install.sh | sudo bash \f[R] .fi .PP @@ -308,7 +340,7 @@ For beta installation, run: .IP .nf \f[C] -curl https://rclone.org/install.sh | sudo bash -s beta +sudo -v ; curl https://rclone.org/install.sh | sudo bash -s beta \f[R] .fi .PP @@ -425,7 +457,7 @@ pop-up will appear saying: .IP .nf \f[C] -\[lq]rclone\[rq] cannot be opened because the developer cannot be verified. +\[dq]rclone\[dq] cannot be opened because the developer cannot be verified. macOS cannot verify that this app is free from malware. 
\f[R]
.fi
@@ -538,48 +570,114 @@ kill %1
.fi
.SS Install from source
.PP
-Make sure you have at least Go (https://golang.org/) go1.15 installed.
-Download go (https://golang.org/dl/) if necessary.
-The latest release is recommended.
-Then
+Make sure you have git and Go (https://golang.org/) installed.
+Go version 1.16 or newer is required; the latest release is recommended.
+You can get it from your package manager, or download it from
+golang.org/dl (https://golang.org/dl/).
+Then you can run the following:
.IP
.nf
\f[C]
git clone https://github.com/rclone/rclone.git
cd rclone
go build
-# If on macOS and mount is wanted, instead run: make GOTAGS=cmount
-\&./rclone version
\f[R]
.fi
.PP
-This will leave you a checked out version of rclone you can modify and
-send pull requests with.
-If you use \f[C]make\f[R] instead of \f[C]go build\f[R] then the rclone
-build will have the correct version information in it.
+This will check out the rclone source in subfolder rclone, which you can
+later modify and send pull requests with.
+Then it will build the rclone executable in the same folder.
+As an initial check you can now run \f[C]./rclone version\f[R]
+(\f[C].\[rs]rclone version\f[R] on Windows).
.PP
-You can also build the latest stable rclone with:
+Note that on macOS and Windows the
+mount (https://rclone.org/commands/rclone_mount/) command will not be
+available unless you specify additional build tag \f[C]cmount\f[R].
+.IP
+.nf
+\f[C]
+go build -tags cmount
+\f[R]
+.fi
+.PP
+This assumes you have a GCC compatible C compiler (GCC or Clang) in your
+PATH, as it uses cgo (https://pkg.go.dev/cmd/cgo).
+But on Windows, the cgofuse (https://github.com/winfsp/cgofuse) library
+that the cmount implementation is based on also supports building
+without cgo (https://github.com/golang/go/wiki/WindowsDLLs), i.e.
+by setting environment variable CGO_ENABLED to value 0 (static linking).
+This is how the official Windows release of rclone is being built,
+starting with version 1.59.
+It is still possible to build with cgo on Windows as well, by using the
+MinGW port of GCC, e.g.
+by installing it in a MSYS2 (https://www.msys2.org) distribution (make
+sure you install it in the classic mingw64 subsystem, the ucrt64 version
+is not compatible).
+.PP
+Additionally, on Windows, you must install the third party utility
+WinFsp (http://www.secfs.net/winfsp/), with the \[dq]Developer\[dq]
+feature selected.
+If building with cgo, you must also set environment variable CPATH
+pointing to the fuse include directory within the WinFsp installation
+(normally
+\f[C]C:\[rs]Program Files (x86)\[rs]WinFsp\[rs]inc\[rs]fuse\f[R]).
+.PP
+You may also add arguments \f[C]-ldflags -s\f[R] (with or without
+\f[C]-tags cmount\f[R]), to omit symbol table and debug information,
+making the executable file smaller, and \f[C]-trimpath\f[R] to remove
+references to local file system paths.
+This is how the official rclone releases are built.
+.IP
+.nf
+\f[C]
+go build -trimpath -ldflags -s -tags cmount
+\f[R]
+.fi
+.PP
+Instead of executing the \f[C]go build\f[R] command directly, you can
+run it via the Makefile, which also sets version information and copies
+the resulting rclone executable into your GOPATH bin folder
+(\f[C]$(go env GOPATH)/bin\f[R], which corresponds to
+\f[C]\[ti]/go/bin/rclone\f[R] by default).
+.IP
+.nf
+\f[C]
+make
+\f[R]
+.fi
+.PP
+To include mount command on macOS and Windows with Makefile build:
+.IP
+.nf
+\f[C]
+make GOTAGS=cmount
+\f[R]
+.fi
+.PP
+As an alternative you can download the source, build and install rclone
+in one operation, as a regular Go package.
+The source will be stored in the Go module cache, and the resulting
+executable will be in your GOPATH bin folder
+(\f[C]$(go env GOPATH)/bin\f[R], which corresponds to
+\f[C]\[ti]/go/bin/rclone\f[R] by default).
+.PP +With Go version 1.17 or newer: +.IP +.nf +\f[C] +go install github.com/rclone/rclone\[at]latest +\f[R] +.fi +.PP +With Go versions older than 1.17 (do \f[B]not\f[R] use the \f[C]-u\f[R] +flag, it causes Go to try to update the dependencies that rclone uses +and sometimes these don\[aq]t work with the current version): .IP .nf \f[C] go get github.com/rclone/rclone \f[R] .fi -.PP -or the latest version (equivalent to the beta) with -.IP -.nf -\f[C] -go get github.com/rclone/rclone\[at]master -\f[R] -.fi -.PP -These will build the binary in \f[C]$(go env GOPATH)/bin\f[R] -(\f[C]\[ti]/go/bin/rclone\f[R] by default) after downloading the source -to the go module cache. -Note - do \f[B]not\f[R] use the \f[C]-u\f[R] flag here. -This causes go to try to update the dependencies that rclone uses and -sometimes these don\[aq]t work with the current version of rclone. .SS Installation with Ansible .PP This can be done with Stefan Weichinger\[aq]s ansible @@ -730,7 +828,7 @@ that executes your rclone command, as an alternative to scheduled task configured to run at startup. .SS Mount command built-in service integration .PP -For mount commands, Rclone has a built-in Windows service integration +For mount commands, rclone has a built-in Windows service integration via the third-party WinFsp library it uses. 
Registering as a regular Windows service is easy, as you just have to
execute the built-in PowerShell command \f[C]New-Service\f[R] (requires
@@ -844,6 +942,8 @@ Citrix ShareFile (https://rclone.org/sharefile/)
.IP \[bu] 2
Compress (https://rclone.org/compress/)
.IP \[bu] 2
+Combine (https://rclone.org/combine/)
+.IP \[bu] 2
Crypt (https://rclone.org/crypt/) - to encrypt other remotes
.IP \[bu] 2
DigitalOcean Spaces (https://rclone.org/s3/#digitalocean-spaces)
@@ -867,10 +967,14 @@ remotes
.IP \[bu] 2
HDFS (https://rclone.org/hdfs/)
.IP \[bu] 2
+HiDrive (https://rclone.org/hidrive/)
+.IP \[bu] 2
HTTP (https://rclone.org/http/)
.IP \[bu] 2
Hubic (https://rclone.org/hubic/)
.IP \[bu] 2
+Internet Archive (https://rclone.org/internetarchive/)
+.IP \[bu] 2
Jottacloud (https://rclone.org/jottacloud/)
.IP \[bu] 2
Koofr (https://rclone.org/koofr/)
@@ -1032,10 +1136,17 @@ Copy the source to the destination.
Does not transfer files that are identical on source and destination,
testing by size and modification time or MD5SUM.
Doesn\[aq]t delete files from the destination.
+If you want to also delete files from destination, to make it match
+source, use the sync (https://rclone.org/commands/rclone_sync/) command
+instead.
.PP
Note that it is always the contents of the directory that is synced, not
-the directory so when source:path is a directory, it\[aq]s the contents
-of source:path that are copied, not the directory name and contents.
+the directory itself.
+So when source:path is a directory, it\[aq]s the contents of source:path
+that are copied, not the directory name and contents.
+.PP
+To copy single files, use the
+copyto (https://rclone.org/commands/rclone_copyto/) command instead.
.PP
If dest:path doesn\[aq]t exist, it is created and the source:path
contents go there.
@@ -1133,6 +1244,8 @@ Doesn\[aq]t transfer files that are identical on source and
destination, testing by size and modification time or MD5SUM.
Destination is updated to match source, including deleting files if necessary (except duplicate objects, see below). +If you don\[aq]t want to delete files from destination, use the +copy (https://rclone.org/commands/rclone_copy/) command instead. .PP \f[B]Important\f[R]: Since this can cause data loss, test first with the \f[C]--dry-run\f[R] or the \f[C]--interactive\f[R]/\f[C]-i\f[R] flag. @@ -1149,9 +1262,11 @@ Duplicate objects (files with the same name, on those providers that support it) are also not yet handled. .PP It is always the contents of the directory that is synced, not the -directory so when source:path is a directory, it\[aq]s the contents of -source:path that are copied, not the directory name and contents. -See extended explanation in the \f[C]copy\f[R] command above if unsure. +directory itself. +So when source:path is a directory, it\[aq]s the contents of source:path +that are copied, not the directory name and contents. +See extended explanation in the +copy (https://rclone.org/commands/rclone_copy/) command if unsure. .PP If dest:path doesn\[aq]t exist, it is created and the source:path contents go there. @@ -1195,6 +1310,9 @@ Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap and the remote does not support a server-side directory move operation. .PP +To move single files, use the +moveto (https://rclone.org/commands/rclone_moveto/) command instead. +.PP If no filters are in use and if possible this will server-side move \f[C]source:path\f[R] into \f[C]dest:path\f[R]. After this \f[C]source:path\f[R] will no longer exist. @@ -1206,7 +1324,7 @@ If possible a server-side move will be used, otherwise it will copy it original (if no errors on copy) in \f[C]source:path\f[R]. .PP If you want to delete empty source directories after move, use the ---delete-empty-src-dirs flag. +\f[C]--delete-empty-src-dirs\f[R] flag. 
.PP See the --no-traverse (https://rclone.org/docs/#no-traverse) option for controlling whether rclone lists the destination directory or not. @@ -1246,18 +1364,20 @@ Remove the files in path. .SS Synopsis .PP Remove the files in path. -Unlike \f[C]purge\f[R] it obeys include/exclude filters so can be used -to selectively delete files. +Unlike purge (https://rclone.org/commands/rclone_purge/) it obeys +include/exclude filters so can be used to selectively delete files. .PP \f[C]rclone delete\f[R] only deletes files but leaves the directory structure alone. If you want to delete a directory and all of its contents use the -\f[C]purge\f[R] command. +purge (https://rclone.org/commands/rclone_purge/) command. .PP If you supply the \f[C]--rmdirs\f[R] flag, it will remove all empty directories along with it. -You can also use the separate command \f[C]rmdir\f[R] or -\f[C]rmdirs\f[R] to delete empty directories only. +You can also use the separate command +rmdir (https://rclone.org/commands/rclone_rmdir/) or +rmdirs (https://rclone.org/commands/rclone_rmdirs/) to delete empty +directories only. .PP For example, to delete all files bigger than 100 MiB, you may first want to check what would be deleted (use either): @@ -1311,10 +1431,11 @@ Remove the path and all of its contents. Remove the path and all of its contents. Note that this does not obey include/exclude filters - everything will be removed. -Use the \f[C]delete\f[R] command if you want to selectively delete -files. -To delete empty directories only, use command \f[C]rmdir\f[R] or -\f[C]rmdirs\f[R]. +Use the delete (https://rclone.org/commands/rclone_delete/) command if +you want to selectively delete files. +To delete empty directories only, use command +rmdir (https://rclone.org/commands/rclone_rmdir/) or +rmdirs (https://rclone.org/commands/rclone_rmdirs/). .PP \f[B]Important\f[R]: Since this can cause data loss, test first with the \f[C]--dry-run\f[R] or the \f[C]--interactive\f[R]/\f[C]-i\f[R] flag. 
@@ -1369,10 +1490,12 @@ Remove the empty directory at path.
This removes empty directory given by path.
Will not remove the path if it has any objects in it, not even empty
subdirectories.
-Use command \f[C]rmdirs\f[R] (or \f[C]delete\f[R] with option
+Use command rmdirs (https://rclone.org/commands/rclone_rmdirs/) (or
+delete (https://rclone.org/commands/rclone_delete/) with option
\f[C]--rmdirs\f[R]) to do that.
.PP
-To delete a path and any objects in it, use \f[C]purge\f[R] command.
+To delete a path and any objects in it, use
+purge (https://rclone.org/commands/rclone_purge/) command.
.IP
.nf
\f[C]
@@ -1403,6 +1526,10 @@ It compares sizes and hashes (MD5 or SHA1) and logs a report of files
that don\[aq]t match.
It doesn\[aq]t alter the source or destination.
.PP
+For the crypt (https://rclone.org/crypt/) remote there is a dedicated
+command, cryptcheck (https://rclone.org/commands/rclone_cryptcheck/),
+that is able to check the checksums of the crypted files.
+.PP
If you supply the \f[C]--size-only\f[R] flag, it will only compare the
sizes not the hashes as well.
Use this for a quick check.
@@ -1554,7 +1681,7 @@ List all directories/containers/buckets in the path.
.PP
Lists the directories in the source path to standard output.
Does not recurse by default.
-Use the -R flag to recurse.
+Use the \f[C]-R\f[R] flag to recurse.
.PP
This command lists the total size of the directory (if known, -1 if
not), the modification time (if known, the current time if not), the
@@ -1580,8 +1707,8 @@ $ rclone lsd drive:test
\f[R]
.fi
.PP
-If you just want the directory names use \[dq]rclone lsf
---dirs-only\[dq].
+If you just want the directory names use
+\f[C]rclone lsf --dirs-only\f[R].
.PP
Any of the filtering options can be applied to this command.
.PP
@@ -1714,6 +1841,11 @@ If MD5 is not supported by the remote, no hash will be returned.
With the download flag, the file will be downloaded from the remote and
hashed locally enabling MD5 for any remote.
.PP +For other algorithms, see the +hashsum (https://rclone.org/commands/rclone_hashsum/) command. +Running \f[C]rclone md5sum remote:path\f[R] is equivalent to running +\f[C]rclone hashsum MD5 remote:path\f[R]. +.PP This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a @@ -1755,6 +1887,11 @@ If SHA-1 is not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling SHA-1 for any remote. .PP +For other algorithms, see the +hashsum (https://rclone.org/commands/rclone_hashsum/) command. +Running \f[C]rclone sha1sum remote:path\f[R] is equivalent to running +\f[C]rclone hashsum SHA1 remote:path\f[R]. +.PP This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a @@ -1789,6 +1926,24 @@ commands, flags and backends. .SH rclone size .PP Prints the total size and number of objects in remote:path. +.SS Synopsis +.PP +Counts objects in the path and calculates the total size. +Prints the result to standard output. +.PP +By default the output is in human-readable format, but shows values in +both human-readable format as well as the raw numbers (global option +\f[C]--human-readable\f[R] is not considered). +Use option \f[C]--json\f[R] to format output as JSON instead. +.PP +Recurses by default, use \f[C]--max-depth 1\f[R] to stop the recursion. +.PP +Some backends do not always provide file sizes, see for example Google +Photos (https://rclone.org/googlephotos/#size) and Google +Drive (https://rclone.org/drive/#limitations-of-google-docs).
+Rclone will then show a notice in the log indicating how many such files +were encountered, and count them in as empty files in the output of the +size command. .IP .nf \f[C] @@ -1926,9 +2081,9 @@ Google Drive, Opendrive) that can have duplicate file names. It can be run on wrapping backends (e.g. crypt) if they wrap a backend which supports duplicate file names. .PP -However if --by-hash is passed in then dedupe will find files with -duplicate hashes instead which will work on any backend which supports -at least one hash. +However if \f[C]--by-hash\f[R] is passed in then dedupe will find files +with duplicate hashes instead which will work on any backend which +supports at least one hash. This can be used to find files with duplicate content. This is known as deduping by hash. .PP @@ -2473,7 +2628,7 @@ rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. .SH rclone completion .PP -generate the autocompletion script for the specified shell +Generate the autocompletion script for the specified shell .SS Synopsis .PP Generate the autocompletion script for rclone for the specified shell. @@ -2495,23 +2650,23 @@ rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. 
.IP \[bu] 2 rclone completion -bash (https://rclone.org/commands/rclone_completion_bash/) - generate +bash (https://rclone.org/commands/rclone_completion_bash/) - Generate the autocompletion script for bash .IP \[bu] 2 rclone completion -fish (https://rclone.org/commands/rclone_completion_fish/) - generate +fish (https://rclone.org/commands/rclone_completion_fish/) - Generate the autocompletion script for fish .IP \[bu] 2 rclone completion powershell (https://rclone.org/commands/rclone_completion_powershell/) - -generate the autocompletion script for powershell +Generate the autocompletion script for powershell .IP \[bu] 2 rclone completion -zsh (https://rclone.org/commands/rclone_completion_zsh/) - generate the +zsh (https://rclone.org/commands/rclone_completion_zsh/) - Generate the autocompletion script for zsh .SH rclone completion bash .PP -generate the autocompletion script for bash +Generate the autocompletion script for bash .SS Synopsis .PP Generate the autocompletion script for the bash shell. @@ -2520,12 +2675,29 @@ This script depends on the \[aq]bash-completion\[aq] package. If it is not installed already, you can install it via your OS\[aq]s package manager. .PP -To load completions in your current shell session: $ source <(rclone -completion bash) +To load completions in your current shell session: +.IP +.nf +\f[C] +source <(rclone completion bash) +\f[R] +.fi .PP -To load completions for every new session, execute once: Linux: $ rclone -completion bash > /etc/bash_completion.d/rclone MacOS: $ rclone -completion bash > /usr/local/etc/bash_completion.d/rclone +To load completions for every new session, execute once: +.SS Linux: +.IP +.nf +\f[C] +rclone completion bash > /etc/bash_completion.d/rclone +\f[R] +.fi +.SS macOS: +.IP +.nf +\f[C] +rclone completion bash > /usr/local/etc/bash_completion.d/rclone +\f[R] +.fi .PP You will need to start a new shell for this setup to take effect. .IP @@ -2548,19 +2720,29 @@ not listed here. 
.SS SEE ALSO .IP \[bu] 2 rclone completion (https://rclone.org/commands/rclone_completion/) - -generate the autocompletion script for the specified shell +Generate the autocompletion script for the specified shell .SH rclone completion fish .PP -generate the autocompletion script for fish +Generate the autocompletion script for fish .SS Synopsis .PP Generate the autocompletion script for the fish shell. .PP -To load completions in your current shell session: $ rclone completion -fish | source +To load completions in your current shell session: +.IP +.nf +\f[C] +rclone completion fish | source +\f[R] +.fi .PP -To load completions for every new session, execute once: $ rclone -completion fish > \[ti]/.config/fish/completions/rclone.fish +To load completions for every new session, execute once: +.IP +.nf +\f[C] +rclone completion fish > \[ti]/.config/fish/completions/rclone.fish +\f[R] +.fi .PP You will need to start a new shell for this setup to take effect. .IP @@ -2583,16 +2765,21 @@ not listed here. .SS SEE ALSO .IP \[bu] 2 rclone completion (https://rclone.org/commands/rclone_completion/) - -generate the autocompletion script for the specified shell +Generate the autocompletion script for the specified shell .SH rclone completion powershell .PP -generate the autocompletion script for powershell +Generate the autocompletion script for powershell .SS Synopsis .PP Generate the autocompletion script for powershell. .PP -To load completions in your current shell session: PS C:> rclone -completion powershell | Out-String | Invoke-Expression +To load completions in your current shell session: +.IP +.nf +\f[C] +rclone completion powershell | Out-String | Invoke-Expression +\f[R] +.fi .PP To load completions for every new session, add the output of the above command to your powershell profile. @@ -2616,10 +2803,10 @@ not listed here. 
.SS SEE ALSO .IP \[bu] 2 rclone completion (https://rclone.org/commands/rclone_completion/) - -generate the autocompletion script for the specified shell +Generate the autocompletion script for the specified shell .SH rclone completion zsh .PP -generate the autocompletion script for zsh +Generate the autocompletion script for zsh .SS Synopsis .PP Generate the autocompletion script for the zsh shell. @@ -2627,12 +2814,28 @@ Generate the autocompletion script for the zsh shell. If shell completion is not already enabled in your environment you will need to enable it. You can execute the following once: +.IP +.nf +\f[C] +echo \[dq]autoload -U compinit; compinit\[dq] >> \[ti]/.zshrc +\f[R] +.fi .PP -$ echo \[dq]autoload -U compinit; compinit\[dq] >> \[ti]/.zshrc -.PP -To load completions for every new session, execute once: # Linux: $ -rclone completion zsh > \[dq]${fpath[1]}/_rclone\[dq] # macOS: $ rclone -completion zsh > /usr/local/share/zsh/site-functions/_rclone +To load completions for every new session, execute once: +.SS Linux: +.IP +.nf +\f[C] +rclone completion zsh > \[dq]${fpath[1]}/_rclone\[dq] +\f[R] +.fi +.SS macOS: +.IP +.nf +\f[C] +rclone completion zsh > /usr/local/share/zsh/site-functions/_rclone +\f[R] +.fi .PP You will need to start a new shell for this setup to take effect. .IP @@ -2655,7 +2858,7 @@ not listed here. .SS SEE ALSO .IP \[bu] 2 rclone completion (https://rclone.org/commands/rclone_completion/) - -generate the autocompletion script for the specified shell +Generate the autocompletion script for the specified shell .SH rclone config create .PP Create a new remote with name, type and options. @@ -3314,7 +3517,8 @@ directory named dest:path. .PP This can be used to upload single files to other than their current name. -If the source is a directory then it acts exactly like the copy command. +If the source is a directory then it acts exactly like the +copy (https://rclone.org/commands/rclone_copy/) command. 
.PP So .IP @@ -3373,9 +3577,12 @@ Copy url content to dest. Download a URL\[aq]s content and copy it to the destination without saving it in temporary storage. .PP -Setting \f[C]--auto-filename\f[R] will cause the file name to be -retrieved from the URL (after any redirections) and used in the -destination path. +Setting \f[C]--auto-filename\f[R] will attempt to automatically +determine the filename from the URL (after any redirections) and use it +in the destination path. +With \f[C]--header-filename\f[R] in addition, if a specific +filename is set in HTTP headers, it will be used instead of the name +from the URL. With \f[C]--print-filename\f[R] in addition, the resulting file name will be printed. .PP @@ -3394,11 +3601,12 @@ rclone copyurl https://example.com dest:path [flags] .IP .nf \f[C] - -a, --auto-filename Get the file name from the URL and use it for destination file path - -h, --help help for copyurl - --no-clobber Prevent overwriting file with same name - -p, --print-filename Print the resulting name from --auto-filename - --stdout Write the output to stdout rather than a file + -a, --auto-filename Get the file name from the URL and use it for destination file path + --header-filename Get the file name from the Content-Disposition header + -h, --help help for copyurl + --no-clobber Prevent overwriting file with same name + -p, --print-filename Print the resulting name from --auto-filename + --stdout Write the output to stdout rather than a file \f[R] .fi .PP @@ -3413,8 +3621,10 @@ commands, flags and backends. Cryptcheck checks the integrity of a crypted remote. .SS Synopsis .PP -rclone cryptcheck checks a remote against a crypted remote. -This is the equivalent of running rclone check, but able to check the +rclone cryptcheck checks a remote against a +crypted (https://rclone.org/crypt/) remote. +This is the equivalent of running rclone +check (https://rclone.org/commands/rclone_check/), but able to check the checksums of the crypted remote.
.PP For it to work the underlying remote of the crypted remote must support @@ -3513,7 +3723,8 @@ rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items. .PP -If you supply the --reverse flag, it will return encrypted file names. +If you supply the \f[C]--reverse\f[R] flag, it will return encrypted +file names. .PP Use it like this .IP @@ -3526,8 +3737,9 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2 .fi .PP Another way to accomplish this is by using the -\f[C]rclone backend encode\f[R] (or \f[C]decode\f[R])command. -See the documentation on the \f[C]crypt\f[R] overlay for more info. +\f[C]rclone backend encode\f[R] (or \f[C]decode\f[R]) command. +See the documentation on the crypt (https://rclone.org/crypt/) overlay +for more info. .IP .nf \f[C] @@ -3584,7 +3796,7 @@ Output completion script for a given shell. .SS Synopsis .PP Generates a shell completion script for rclone. -Run with --help to list the supported shells. +Run with \f[C]--help\f[R] to list the supported shells. .SS Options .IP .nf @@ -3804,6 +4016,10 @@ If the hash is not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling any hash for any remote. .PP +For the MD5 and SHA1 algorithms there are also dedicated commands, +md5sum (https://rclone.org/commands/rclone_md5sum/) and +sha1sum (https://rclone.org/commands/rclone_sha1sum/). +.PP This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a @@ -3821,6 +4037,7 @@ Supported hashes are: * crc32 * sha256 * dropbox + * hidrive * mailru * quickxor \f[R] .fi @@ -3920,7 +4137,7 @@ List all the remotes in the config file. .PP rclone listremotes lists all the available remotes from the config file.
.PP -When uses with the -l flag it lists the types too. +When used with the \f[C]--long\f[R] flag it lists the types too. .IP .nf \f[C] @@ -3966,7 +4183,7 @@ fubuwic \f[R] .fi .PP -Use the --format option to control what gets listed. +Use the \f[C]--format\f[R] option to control what gets listed. By default this is just the path, but you can use these parameters to control the output: .IP @@ -3981,12 +4198,13 @@ o - Original ID of underlying object m - MimeType of object if known e - encrypted name T - tier of storage if known, e.g. \[dq]Hot\[dq] or \[dq]Cool\[dq] +M - Metadata of object in JSON blob format, eg {\[dq]key\[dq]:\[dq]value\[dq]} \f[R] .fi .PP So if you wanted the path, size and modification time, you would use ---format \[dq]pst\[dq], or maybe --format \[dq]tsp\[dq] to put the path -last. +\f[C]--format \[dq]pst\[dq]\f[R], or maybe +\f[C]--format \[dq]tsp\[dq]\f[R] to put the path last. .PP Eg .IP @@ -4002,7 +4220,7 @@ $ rclone lsf --format \[dq]tsp\[dq] swift:bucket .fi .PP If you specify \[dq]h\[dq] in the format you will get the MD5 hash by -default, use the \[dq]--hash\[dq] flag to change which hash you want. +default, use the \f[C]--hash\f[R] flag to change which hash you want. Note that this can be returned as an empty string if it isn\[aq]t available on the object (and for directories), \[dq]ERROR\[dq] if there was an error reading it from the object and \[dq]UNSUPPORTED\[dq] if @@ -4032,7 +4250,7 @@ cd65ac234e6fea5925974a51cdd865cc canole (Though \[dq]rclone md5sum .\[dq] is an easier way of typing this.) .PP By default the separator is \[dq];\[dq] this can be changed with the ---separator flag. +\f[C]--separator\f[R] flag. Note that separators aren\[aq]t escaped in the path so putting it last is a good strategy. .PP @@ -4063,8 +4281,9 @@ test.sh,449 \f[R] .fi .PP -Note that the --absolute parameter is useful for making lists of files -to pass to an rclone copy with the --files-from-raw flag. 
+Note that the \f[C]--absolute\f[R] parameter is useful for making lists +of files to pass to an rclone copy with the \f[C]--files-from-raw\f[R] +flag. .PP For example, to find all the files modified within one day and copy those only (without traversing the whole directory structure): @@ -4141,46 +4360,62 @@ List directories and objects in the path in JSON format. List directories and objects in the path in JSON format. .PP The output is an array of Items, where each Item looks like this +.IP +.nf +\f[C] +{ + \[dq]Hashes\[dq] : { + \[dq]SHA-1\[dq] : \[dq]f572d396fae9206628714fb2ce00f72e94f2258f\[dq], + \[dq]MD5\[dq] : \[dq]b1946ac92492d2347c6235b4d2611184\[dq], + \[dq]DropboxHash\[dq] : \[dq]ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc\[dq] + }, + \[dq]ID\[dq]: \[dq]y2djkhiujf83u33\[dq], + \[dq]OrigID\[dq]: \[dq]UYOJVTUW00Q1RzTDA\[dq], + \[dq]IsBucket\[dq] : false, + \[dq]IsDir\[dq] : false, + \[dq]MimeType\[dq] : \[dq]application/octet-stream\[dq], + \[dq]ModTime\[dq] : \[dq]2017-05-31T16:15:57.034468261+01:00\[dq], + \[dq]Name\[dq] : \[dq]file.txt\[dq], + \[dq]Encrypted\[dq] : \[dq]v0qpsdq8anpci8n929v3uu9338\[dq], + \[dq]EncryptedPath\[dq] : \[dq]kja9098349023498/v0qpsdq8anpci8n929v3uu9338\[dq], + \[dq]Path\[dq] : \[dq]full/path/goes/here/file.txt\[dq], + \[dq]Size\[dq] : 6, + \[dq]Tier\[dq] : \[dq]hot\[dq], +} +\f[R] +.fi .PP -{ \[dq]Hashes\[dq] : { \[dq]SHA-1\[dq] : -\[dq]f572d396fae9206628714fb2ce00f72e94f2258f\[dq], \[dq]MD5\[dq] : -\[dq]b1946ac92492d2347c6235b4d2611184\[dq], \[dq]DropboxHash\[dq] : -\[dq]ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc\[dq] -}, \[dq]ID\[dq]: \[dq]y2djkhiujf83u33\[dq], \[dq]OrigID\[dq]: -\[dq]UYOJVTUW00Q1RzTDA\[dq], \[dq]IsBucket\[dq] : false, \[dq]IsDir\[dq] -: false, \[dq]MimeType\[dq] : \[dq]application/octet-stream\[dq], -\[dq]ModTime\[dq] : \[dq]2017-05-31T16:15:57.034468261+01:00\[dq], -\[dq]Name\[dq] : \[dq]file.txt\[dq], \[dq]Encrypted\[dq] : -\[dq]v0qpsdq8anpci8n929v3uu9338\[dq], 
\[dq]EncryptedPath\[dq] : -\[dq]kja9098349023498/v0qpsdq8anpci8n929v3uu9338\[dq], \[dq]Path\[dq] : -\[dq]full/path/goes/here/file.txt\[dq], \[dq]Size\[dq] : 6, -\[dq]Tier\[dq] : \[dq]hot\[dq], } +If \f[C]--hash\f[R] is not specified the Hashes property won\[aq]t be +emitted. +The types of hash can be specified with the \f[C]--hash-type\f[R] +parameter (which may be repeated). +If \f[C]--hash-type\f[R] is set then it implies \f[C]--hash\f[R]. .PP -If --hash is not specified the Hashes property won\[aq]t be emitted. -The types of hash can be specified with the --hash-type parameter (which -may be repeated). -If --hash-type is set then it implies --hash. -.PP -If --no-modtime is specified then ModTime will be blank. +If \f[C]--no-modtime\f[R] is specified then ModTime will be blank. This can speed things up on remotes where reading the ModTime takes an extra request (e.g. s3, swift). .PP -If --no-mimetype is specified then MimeType will be blank. +If \f[C]--no-mimetype\f[R] is specified then MimeType will be blank. This can speed things up on remotes where reading the MimeType takes an extra request (e.g. s3, swift). .PP -If --encrypted is not specified the Encrypted won\[aq]t be emitted. +If \f[C]--encrypted\f[R] is not specified the Encrypted won\[aq]t be +emitted. .PP -If --dirs-only is not specified files in addition to directories are -returned +If \f[C]--dirs-only\f[R] is not specified files in addition to +directories are returned. .PP -If --files-only is not specified directories in addition to the files -will be returned. +If \f[C]--files-only\f[R] is not specified directories in addition to +the files will be returned. .PP -if --stat is set then a single JSON blob will be returned about the item -pointed to. +If \f[C]--metadata\f[R] is set then an additional Metadata key will be +returned. +This will have metadata in rclone standard format as a JSON object. +.PP +If \f[C]--stat\f[R] is set then a single JSON blob will be returned +about the item pointed to.
This will return an error if the item isn\[aq]t found. However on bucket based backends (like s3, gcs, b2, azureblob etc) if the item isn\[aq]t found it will return an empty directory as it @@ -4192,7 +4427,8 @@ listed. If \[dq]remote:path\[dq] contains the file \[dq]subfolder/file.txt\[dq], the Path for \[dq]file.txt\[dq] will be \[dq]subfolder/file.txt\[dq], not \[dq]remote:path/subfolder/file.txt\[dq]. -When used without --recursive the Path will always be the same as Name. +When used without \f[C]--recursive\f[R] the Path will always be the same +as Name. .PP If the directory is a bucket in a bucket-based backend, then \[dq]IsBucket\[dq] will be set to true. @@ -4249,7 +4485,7 @@ rclone lsjson remote:path [flags] .nf \f[C] --dirs-only Show only directories in the listing - -M, --encrypted Show the encrypted names + --encrypted Show the encrypted names --files-only Show only files in the listing --hash Include hashes in the output (may take longer) --hash-type stringArray Show only this hash type (may be repeated) @@ -4357,11 +4593,11 @@ feature at all, then 1 PiB is set as both the total and the free size. To run rclone mount on Windows, you will need to download and install WinFsp (http://www.secfs.net/winfsp/). .PP -WinFsp (https://github.com/billziss-gh/winfsp) is an open-source Windows -File System Proxy which makes it easy to write user space file systems -for Windows. +WinFsp (https://github.com/winfsp/winfsp) is an open-source Windows File +System Proxy which makes it easy to write user space file systems for +Windows. It provides a FUSE emulation layer which rclone uses in combination with -cgofuse (https://github.com/billziss-gh/cgofuse). +cgofuse (https://github.com/winfsp/cgofuse). Both of these packages are by Bill Zissimopoulos who was very helpful during the implementation of rclone mount for Windows. .SS Mounting modes on windows @@ -4564,7 +4800,7 @@ to start processes as the SYSTEM account.
Another alternative is to run the mount command from a Windows Scheduled Task, or a Windows Service, configured to run as the SYSTEM account. A third alternative is to use the WinFsp.Launcher -infrastructure (https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture)). +infrastructure (https://github.com/winfsp/winfsp/wiki/WinFsp-Service-Architecture). Note that when running rclone as another user, it will not use the configuration file from your profile unless you tell it to with the \f[C]--config\f[R] (https://rclone.org/docs/#config-config-file) option. @@ -4781,7 +5017,7 @@ files and directories (but not the data) in memory. Using the \f[C]--dir-cache-time\f[R] flag, you can control how long a directory should be considered up to date and not refreshed from the backend. -Changes made through the mount will appear immediately or invalidate the +Changes made through the VFS will appear immediately or invalidate the cache. .IP .nf @@ -4972,6 +5208,40 @@ In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn\[aq]t support sparse files and it will log an ERROR message if one is detected. +.SS Fingerprinting +.PP +Various parts of the VFS use fingerprinting to see if a local file copy +has changed relative to a remote file. +Fingerprints are made from: +.IP \[bu] 2 +size +.IP \[bu] 2 +modification time +.IP \[bu] 2 +hash +.PP +where available on an object. +.PP +On some backends some of these attributes are slow to read (they take an +extra API call per object, or extra work per object). +.PP +For example \f[C]hash\f[R] is slow with the \f[C]local\f[R] and +\f[C]sftp\f[R] backends as they have to read the entire file and hash +it, and \f[C]modtime\f[R] is slow with the \f[C]s3\f[R], +\f[C]swift\f[R], \f[C]ftp\f[R] and \f[C]qingstor\f[R] backends because +they need to do an extra API call to fetch it.
+.PP +If you use the \f[C]--vfs-fast-fingerprint\f[R] flag then rclone will +not include the slow operations in the fingerprint. +This makes the fingerprinting less accurate but much faster and will +improve the opening time of cached files. +.PP +If you are running a vfs cache over \f[C]local\f[R], \f[C]s3\f[R] or +\f[C]swift\f[R] backends then using this flag is recommended. +.PP +Note that if you change the value of this flag, the fingerprints of the +files in the cache may be invalidated and the files will need to be +downloaded again. .SS VFS Chunked Reading .PP When rclone reads files from a remote it reads them in chunks. @@ -5023,7 +5293,7 @@ transaction. --no-checksum Don\[aq]t compare checksums on up/download. --no-modtime Don\[aq]t read/write the modification time (can speed things up). --no-seek Don\[aq]t allow seeking in files. ---read-only Mount read-only. +--read-only Only allow read-only access. \f[R] .fi .PP @@ -5041,8 +5311,8 @@ These flags only come into effect when not using an on disk cache file. .PP When using VFS write caching (\f[C]--vfs-cache-mode\f[R] with value writes or full), the global flag \f[C]--transfers\f[R] can be set to -adjust the number of parallel uploads of modified files from cache (the -related global flag \f[C]--checkers\f[R] have no effect on mount). +adjust the number of parallel uploads of modified files from the cache +(the related global flag \f[C]--checkers\f[R] has no effect on the VFS). .IP .nf \f[C] @@ -5065,15 +5335,15 @@ Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. .PP -The \f[C]--vfs-case-insensitive\f[R] mount flag controls how rclone +The \f[C]--vfs-case-insensitive\f[R] VFS flag controls how rclone handles these two cases. -If its value is \[dq]false\[dq], rclone passes file names to the mounted -file system as-is. 
-If the flag is \[dq]true\[dq] (or appears without a value on command +If its value is \[dq]false\[dq], rclone passes file names to the remote +as-is. +If the flag is \[dq]true\[dq] (or appears without a value on the command line), rclone may perform a \[dq]fixup\[dq] as explained below. .PP The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. +different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a @@ -5081,10 +5351,10 @@ name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by -an underlying mounted file system. +the underlying remote. .PP Note that case sensitivity of the operating system running rclone (the -target) may differ from case sensitivity of a file system mounted by +target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether \[dq]fixup\[dq] is performed to satisfy the target. @@ -5093,6 +5363,18 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: \[dq]true\[dq] on Windows and macOS, \[dq]false\[dq] otherwise. If the flag is provided without a value, then it is \[dq]true\[dq]. +.SS VFS Disk Options +.PP +This flag allows you to manually set the statistics about the filing +system. +It can be useful when those statistics cannot be read correctly +automatically. +.IP +.nf +\f[C] +--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +\f[R] +.fi .SS Alternate report of used bytes .PP Some backends, most notably S3, do not report the amount of bytes used. 
@@ -5144,7 +5426,7 @@ rclone mount remote:path /path/to/mountpoint [flags] --noapplexattr Ignore all \[dq]com.apple.*\[dq] extended attributes (supported on OSX only) -o, --option stringArray Option for libfuse/WinFsp (repeat if required) --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) - --read-only Mount read-only + --read-only Only allow read-only access --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s) @@ -5152,6 +5434,8 @@ rclone mount remote:path /path/to/mountpoint [flags] --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off) @@ -5180,7 +5464,8 @@ directory named dest:path. .PP This can be used to rename files or upload single files to other than their existing name. -If the source is a directory then it acts exactly like the move command. +If the source is a directory then it acts exactly like the +move (https://rclone.org/commands/rclone_move/) command. 
.PP So .IP @@ -5249,7 +5534,9 @@ builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along. .PP -Here are the keys - press \[aq]?\[aq] to toggle the help on and off +You can interact with the user interface using key presses, press +\[aq]?\[aq] to toggle the help on and off. +The supported keys are: .IP .nf \f[C] @@ -5262,11 +5549,33 @@ Here are the keys - press \[aq]?\[aq] to toggle the help on and off u toggle human-readable format n,s,C,A sort by name,size,count,average size d delete file/directory + v select file/directory + V enter visual select mode + D delete selected files/directories y copy current path to clipboard Y display current path - \[ha]L refresh screen + \[ha]L refresh screen (fix screen corruption) ? to toggle help on and off - q/ESC/c-C to quit + q/ESC/\[ha]c to quit +\f[R] +.fi +.PP +Listed files/directories may be prefixed by a one-character flag, some +of them combined with a description in brackes at end of line. +These flags have the following meaning: +.IP +.nf +\f[C] +e means this is an empty directory, i.e. contains no files (but + may contain empty subdirectories) +\[ti] means this is a directory where some of the files (possibly in + subdirectories) have unknown size, and therefore the directory + size may be underestimated (and average size inaccurate, as it + is average of the files with known sizes). +\&. means an error occurred while reading a subdirectory, and + therefore the directory size may be underestimated (and average + size inaccurate) +! means an error occurred while reading this directory \f[R] .fi .PP @@ -5274,9 +5583,14 @@ This an homage to the ncdu tool (https://dev.yorhel.nl/ncdu) but for rclone remotes. It is missing lots of features at the moment but is useful as it stands. .PP -Note that it might take some time to delete big files/folders. +Note that it might take some time to delete big files/directories. 
The UI won\[aq]t respond in the meantime since the deletion is done synchronously. .PP +For a non-interactive listing of the remote, see the +tree (https://rclone.org/commands/rclone_tree/) command. +To just get the total size of the remote you can also use the +size (https://rclone.org/commands/rclone_size/) command. .IP .nf \f[C] @@ -5317,8 +5631,12 @@ This command can also accept a password through STDIN instead of an argument by passing a hyphen as an argument. This will use the first line of STDIN as the password not including the trailing newline. -.PP +.IP +.nf +\f[C] echo \[dq]secretpassword\[dq] | rclone obscure - +\f[R] +.fi .PP If there is no data on STDIN to read, rclone obscure will default to obfuscating the hyphen itself. @@ -5352,26 +5670,30 @@ Run a command against a running rclone. .SS Synopsis .PP This runs a command against a running rclone. -Use the --url flag to specify an non default URL to connect on. +Use the \f[C]--url\f[R] flag to specify a non-default URL to connect +on. This can be either a \[dq]:port\[dq] which is taken to mean \[dq]http://localhost:port\[dq] or a \[dq]host:port\[dq] which is taken to mean \[dq]http://host:port\[dq] .PP -A username and password can be passed in with --user and --pass. +A username and password can be passed in with \f[C]--user\f[R] and +\f[C]--pass\f[R]. .PP -Note that --rc-addr, --rc-user, --rc-pass will be read also for --url, ---user, --pass. +Note that \f[C]--rc-addr\f[R], \f[C]--rc-user\f[R], \f[C]--rc-pass\f[R] +will be read also for \f[C]--url\f[R], \f[C]--user\f[R], +\f[C]--pass\f[R]. .PP Arguments should be passed in as parameter=value. .PP The result will be returned as a JSON object by default. .PP -The --json parameter can be used to pass in a JSON blob as an input -instead of key=value arguments. +The \f[C]--json\f[R] parameter can be used to pass in a JSON blob as an +input instead of key=value arguments. This is the only way of passing in more complicated values.
.PP -The -o/--opt option can be used to set a key \[dq]opt\[dq] with key, -value options in the form \[dq]-o key=value\[dq] or \[dq]-o key\[dq]. +The \f[C]-o\f[R]/\f[C]--opt\f[R] option can be used to set a key +\[dq]opt\[dq] with key, value options in the form \f[C]-o key=value\f[R] +or \f[C]-o key\f[R]. It can be repeated as many times as required. This is useful for rc commands which take the \[dq]opt\[dq] parameter which by convention is a dictionary of strings. @@ -5390,8 +5712,8 @@ Will place this in the \[dq]opt\[dq] value \f[R] .fi .PP -The -a/--arg option can be used to set strings in the \[dq]arg\[dq] -value. +The \f[C]-a\f[R]/\f[C]--arg\f[R] option can be used to set strings in +the \[dq]arg\[dq] value. It can be repeated as many times as required. This is useful for rc commands which take the \[dq]arg\[dq] parameter which by convention is a list of strings. @@ -5410,8 +5732,8 @@ Will place this in the \[dq]arg\[dq] value \f[R] .fi .PP -Use --loopback to connect to the rclone instance running \[dq]rclone -rc\[dq]. +Use \f[C]--loopback\f[R] to connect to the rclone instance running +\f[C]rclone rc\f[R]. This is very useful for testing commands without having to run an rclone rc server, e.g.: .IP .nf \f[C] @@ -5421,7 +5743,7 @@ rclone rc --loopback operations/about fs=/ \f[R] .fi .PP -Use \[dq]rclone rc\[dq] to see a list of all possible commands. +Use \f[C]rclone rc\f[R] to see a list of all possible commands. .IP .nf \f[C] @@ -5481,13 +5803,13 @@ please see there. Generally speaking, setting this cutoff too high will decrease your performance. .PP -Use the |--size| flag to preallocate the file in advance at the remote -end and actually stream it, even if remote backend doesn\[aq]t support -streaming. +Use the \f[C]--size\f[R] flag to preallocate the file in advance at the +remote end and actually stream it, even if the remote backend doesn\[aq]t +support streaming. .PP -|--size| should be the exact size of the input stream in bytes.
-If the size of the stream is different in length to the |--size| passed -in then the transfer will likely fail. +\f[C]--size\f[R] should be the exact size of the input stream in bytes. +If the size of the stream is different to the \f[C]--size\f[R] +passed in, then the transfer will likely fail. .PP Note that the upload can also not be retried because the data is not kept around until the upload succeeds. @@ -5559,15 +5881,17 @@ that only contain empty directories), that it finds under the path. The root path itself will also be removed if it is empty, unless you supply the \f[C]--leave-root\f[R] flag. .PP -Use command \f[C]rmdir\f[R] to delete just the empty directory given by -path, not recurse. +Use command rmdir (https://rclone.org/commands/rclone_rmdir/) to delete +just the empty directory given by path, without recursing. .PP This is useful for tidying up remotes that rclone has left a lot of empty directories in. -For example the \f[C]delete\f[R] command will delete files but leave the -directory structure (unless used with option \f[C]--rmdirs\f[R]). +For example the delete (https://rclone.org/commands/rclone_delete/) +command will delete files but leave the directory structure (unless used +with option \f[C]--rmdirs\f[R]). .PP -To delete a path and any objects in it, use \f[C]purge\f[R] command. +To delete a path and any objects in it, use the +purge (https://rclone.org/commands/rclone_purge/) command. .IP .nf \f[C] @@ -5689,9 +6013,8 @@ commands, flags and backends. Serve a remote over a protocol. .SS Synopsis .PP -rclone serve is used to serve a remote over a given protocol. -This command requires the use of a subcommand to specify the protocol, -e.g. +Serve a remote over a given protocol. +Requires the use of a subcommand to specify the protocol, e.g. .IP .nf \f[C] @@ -5740,14 +6063,13 @@ rclone serve sftp (https://rclone.org/commands/rclone_serve_sftp/) - Serve the remote over SFTP.
.IP \[bu] 2 rclone serve webdav (https://rclone.org/commands/rclone_serve_webdav/) - -Serve remote:path over webdav. +Serve remote:path over WebDAV. .SH rclone serve dlna .PP Serve remote:path over DLNA .SS Synopsis .PP -rclone serve dlna is a DLNA media server for media stored in an rclone -remote. +Run a DLNA media server for media stored in an rclone remote. Many devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN and play audio/video from it. VLC is also supported. @@ -5790,7 +6112,7 @@ files and directories (but not the data) in memory. Using the \f[C]--dir-cache-time\f[R] flag, you can control how long a directory should be considered up to date and not refreshed from the backend. -Changes made through the mount will appear immediately or invalidate the +Changes made through the VFS will appear immediately or invalidate the cache. .IP .nf @@ -5981,6 +6303,40 @@ In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn\[aq]t support sparse files and it will log an ERROR message if one is detected. +.SS Fingerprinting +.PP +Various parts of the VFS use fingerprinting to see if a local file copy +has changed relative to a remote file. +Fingerprints are made from: +.IP \[bu] 2 +size +.IP \[bu] 2 +modification time +.IP \[bu] 2 +hash +.PP +where available on an object. +.PP +On some backends some of these attributes are slow to read (they take an +extra API call per object, or extra work per object). +.PP +For example \f[C]hash\f[R] is slow with the \f[C]local\f[R] and +\f[C]sftp\f[R] backends as they have to read the entire file and hash +it, and \f[C]modtime\f[R] is slow with the \f[C]s3\f[R], +\f[C]swift\f[R], \f[C]ftp\f[R] and \f[C]qingstor\f[R] backends because +they need to do an extra API call to fetch it. +.PP +If you use the \f[C]--vfs-fast-fingerprint\f[R] flag then rclone will +not include the slow operations in the fingerprint.
+This makes the fingerprinting less accurate but much faster and will +improve the opening time of cached files. +.PP +If you are running a vfs cache over \f[C]local\f[R], \f[C]s3\f[R] or +\f[C]swift\f[R] backends then using this flag is recommended. +.PP +Note that if you change the value of this flag, the fingerprints of the +files in the cache may be invalidated and the files will need to be +downloaded again. .SS VFS Chunked Reading .PP When rclone reads files from a remote it reads them in chunks. @@ -6032,7 +6388,7 @@ transaction. --no-checksum Don\[aq]t compare checksums on up/download. --no-modtime Don\[aq]t read/write the modification time (can speed things up). --no-seek Don\[aq]t allow seeking in files. ---read-only Mount read-only. +--read-only Only allow read-only access. \f[R] .fi .PP @@ -6050,8 +6406,8 @@ These flags only come into effect when not using an on disk cache file. .PP When using VFS write caching (\f[C]--vfs-cache-mode\f[R] with value writes or full), the global flag \f[C]--transfers\f[R] can be set to -adjust the number of parallel uploads of modified files from cache (the -related global flag \f[C]--checkers\f[R] have no effect on mount). +adjust the number of parallel uploads of modified files from the cache +(the related global flag \f[C]--checkers\f[R] has no effect on the VFS). .IP .nf \f[C] @@ -6074,15 +6430,15 @@ Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. .PP -The \f[C]--vfs-case-insensitive\f[R] mount flag controls how rclone +The \f[C]--vfs-case-insensitive\f[R] VFS flag controls how rclone handles these two cases. -If its value is \[dq]false\[dq], rclone passes file names to the mounted -file system as-is. -If the flag is \[dq]true\[dq] (or appears without a value on command +If its value is \[dq]false\[dq], rclone passes file names to the remote +as-is. 
+If the flag is \[dq]true\[dq] (or appears without a value on the command line), rclone may perform a \[dq]fixup\[dq] as explained below. .PP The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. +different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a @@ -6090,10 +6446,10 @@ name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by -an underlying mounted file system. +the underlying remote. .PP Note that case sensitivity of the operating system running rclone (the -target) may differ from case sensitivity of a file system mounted by +target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether \[dq]fixup\[dq] is performed to satisfy the target. @@ -6102,6 +6458,18 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: \[dq]true\[dq] on Windows and macOS, \[dq]false\[dq] otherwise. If the flag is provided without a value, then it is \[dq]true\[dq]. +.SS VFS Disk Options +.PP +This flag allows you to manually set the statistics about the filing +system. +It can be useful when those statistics cannot be read correctly +automatically. +.IP +.nf +\f[C] +--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +\f[R] +.fi .SS Alternate report of used bytes .PP Some backends, most notably S3, do not report the amount of bytes used. 
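The VFS Disk Options flag documented above can be sketched like this (the remote name and size are illustrative, not from the manual):

```shell
# Report a fixed 256 GiB total size to clients of the served filesystem,
# for backends where the real disk statistics cannot be read automatically.
rclone serve dlna remote:media --vfs-disk-space-total-size 256G
```
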
@@ -6139,7 +6507,7 @@ rclone serve dlna remote:path [flags] --no-modtime Don\[aq]t read/write the modification time (can speed things up) --no-seek Don\[aq]t allow seeking in files --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) - --read-only Mount read-only + --read-only Only allow read-only access --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s) @@ -6147,6 +6515,8 @@ rclone serve dlna remote:path [flags] --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off) @@ -6236,7 +6606,7 @@ files and directories (but not the data) in memory. Using the \f[C]--dir-cache-time\f[R] flag, you can control how long a directory should be considered up to date and not refreshed from the backend. -Changes made through the mount will appear immediately or invalidate the +Changes made through the VFS will appear immediately or invalidate the cache. .IP .nf @@ -6427,6 +6797,40 @@ In particular FAT/exFAT do not. 
Rclone will perform very badly if the cache directory is on a filesystem which doesn\[aq]t support sparse files and it will log an ERROR message if one is detected. +.SS Fingerprinting +.PP +Various parts of the VFS use fingerprinting to see if a local file copy +has changed relative to a remote file. +Fingerprints are made from: +.IP \[bu] 2 +size +.IP \[bu] 2 +modification time +.IP \[bu] 2 +hash +.PP +where available on an object. +.PP +On some backends some of these attributes are slow to read (they take an +extra API call per object, or extra work per object). +.PP +For example \f[C]hash\f[R] is slow with the \f[C]local\f[R] and +\f[C]sftp\f[R] backends as they have to read the entire file and hash +it, and \f[C]modtime\f[R] is slow with the \f[C]s3\f[R], +\f[C]swift\f[R], \f[C]ftp\f[R] and \f[C]qingstor\f[R] backends because +they need to do an extra API call to fetch it. +.PP +If you use the \f[C]--vfs-fast-fingerprint\f[R] flag then rclone will +not include the slow operations in the fingerprint. +This makes the fingerprinting less accurate but much faster and will +improve the opening time of cached files. +.PP +If you are running a vfs cache over \f[C]local\f[R], \f[C]s3\f[R] or +\f[C]swift\f[R] backends then using this flag is recommended. +.PP +Note that if you change the value of this flag, the fingerprints of the +files in the cache may be invalidated and the files will need to be +downloaded again. .SS VFS Chunked Reading .PP When rclone reads files from a remote it reads them in chunks. @@ -6478,7 +6882,7 @@ transaction. --no-checksum Don\[aq]t compare checksums on up/download. --no-modtime Don\[aq]t read/write the modification time (can speed things up). --no-seek Don\[aq]t allow seeking in files. ---read-only Mount read-only. +--read-only Only allow read-only access. \f[R] .fi .PP @@ -6496,8 +6900,8 @@ These flags only come into effect when not using an on disk cache file.
.PP When using VFS write caching (\f[C]--vfs-cache-mode\f[R] with value writes or full), the global flag \f[C]--transfers\f[R] can be set to -adjust the number of parallel uploads of modified files from cache (the -related global flag \f[C]--checkers\f[R] have no effect on mount). +adjust the number of parallel uploads of modified files from the cache +(the related global flag \f[C]--checkers\f[R] has no effect on the VFS). .IP .nf \f[C] @@ -6520,15 +6924,15 @@ Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. .PP -The \f[C]--vfs-case-insensitive\f[R] mount flag controls how rclone +The \f[C]--vfs-case-insensitive\f[R] VFS flag controls how rclone handles these two cases. -If its value is \[dq]false\[dq], rclone passes file names to the mounted -file system as-is. -If the flag is \[dq]true\[dq] (or appears without a value on command +If its value is \[dq]false\[dq], rclone passes file names to the remote +as-is. +If the flag is \[dq]true\[dq] (or appears without a value on the command line), rclone may perform a \[dq]fixup\[dq] as explained below. .PP The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. +different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a @@ -6536,10 +6940,10 @@ name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by -an underlying mounted file system. +the underlying remote. 
.PP Note that case sensitivity of the operating system running rclone (the -target) may differ from case sensitivity of a file system mounted by +target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether \[dq]fixup\[dq] is performed to satisfy the target. @@ -6548,6 +6952,18 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: \[dq]true\[dq] on Windows and macOS, \[dq]false\[dq] otherwise. If the flag is provided without a value, then it is \[dq]true\[dq]. +.SS VFS Disk Options +.PP +This flag allows you to manually set the statistics about the filing +system. +It can be useful when those statistics cannot be read correctly +automatically. +.IP +.nf +\f[C] +--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +\f[R] +.fi .SS Alternate report of used bytes .PP Some backends, most notably S3, do not report the amount of bytes used. 
@@ -6602,7 +7018,7 @@ rclone serve docker [flags] --noapplexattr Ignore all \[dq]com.apple.*\[dq] extended attributes (supported on OSX only) -o, --option stringArray Option for libfuse/WinFsp (repeat if required) --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) - --read-only Mount read-only + --read-only Only allow read-only access --socket-addr string Address or absolute path (default: /run/docker/plugins/rclone.sock) --socket-gid int GID for unix socket (default: current process GID) (default 1000) --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) @@ -6612,6 +7028,8 @@ rclone serve docker [flags] --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off) @@ -6635,10 +7053,9 @@ remote over a protocol. Serve remote:path over FTP. .SS Synopsis .PP -rclone serve ftp implements a basic ftp server to serve the remote over -FTP protocol. -This can be viewed with a ftp client or you can make a remote of type -ftp to read and write it. +Run a basic FTP server to serve a remote over the FTP protocol. +This can be viewed with an FTP client or you can make a remote of type +FTP to read and write it.
.SS Server options .PP Use --addr to specify which IP address and port the server should listen @@ -6674,7 +7091,7 @@ files and directories (but not the data) in memory. Using the \f[C]--dir-cache-time\f[R] flag, you can control how long a directory should be considered up to date and not refreshed from the backend. -Changes made through the mount will appear immediately or invalidate the +Changes made through the VFS will appear immediately or invalidate the cache. .IP .nf @@ -6865,6 +7282,40 @@ In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn\[aq]t support sparse files and it will log an ERROR message if one is detected. +.SS Fingerprinting +.PP +Various parts of the VFS use fingerprinting to see if a local file copy +has changed relative to a remote file. +Fingerprints are made from: +.IP \[bu] 2 +size +.IP \[bu] 2 +modification time +.IP \[bu] 2 +hash +.PP +where available on an object. +.PP +On some backends some of these attributes are slow to read (they take an +extra API call per object, or extra work per object). +.PP +For example \f[C]hash\f[R] is slow with the \f[C]local\f[R] and +\f[C]sftp\f[R] backends as they have to read the entire file and hash +it, and \f[C]modtime\f[R] is slow with the \f[C]s3\f[R], +\f[C]swift\f[R], \f[C]ftp\f[R] and \f[C]qingstor\f[R] backends because +they need to do an extra API call to fetch it. +.PP +If you use the \f[C]--vfs-fast-fingerprint\f[R] flag then rclone will +not include the slow operations in the fingerprint. +This makes the fingerprinting less accurate but much faster and will +improve the opening time of cached files. +.PP +If you are running a vfs cache over \f[C]local\f[R], \f[C]s3\f[R] or +\f[C]swift\f[R] backends then using this flag is recommended. +.PP +Note that if you change the value of this flag, the fingerprints of the +files in the cache may be invalidated and the files will need to be +downloaded again.
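The fingerprinting and caching flags described above can be combined; as a minimal sketch (the remote name is illustrative):

```shell
# Serve over FTP with a full on-disk VFS cache; --vfs-fast-fingerprint skips
# the slow per-object hash/modtime reads when checking cached copies for changes.
rclone serve ftp remote:path --vfs-cache-mode full --vfs-fast-fingerprint
```

Per the note above, toggling `--vfs-fast-fingerprint` later may invalidate existing cache fingerprints and force re-downloads.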
.SS VFS Chunked Reading .PP When rclone reads files from a remote it reads them in chunks. @@ -6916,7 +7367,7 @@ transaction. --no-checksum Don\[aq]t compare checksums on up/download. --no-modtime Don\[aq]t read/write the modification time (can speed things up). --no-seek Don\[aq]t allow seeking in files. ---read-only Mount read-only. +--read-only Only allow read-only access. \f[R] .fi .PP @@ -6934,8 +7385,8 @@ These flags only come into effect when not using an on disk cache file. .PP When using VFS write caching (\f[C]--vfs-cache-mode\f[R] with value writes or full), the global flag \f[C]--transfers\f[R] can be set to -adjust the number of parallel uploads of modified files from cache (the -related global flag \f[C]--checkers\f[R] have no effect on mount). +adjust the number of parallel uploads of modified files from the cache +(the related global flag \f[C]--checkers\f[R] has no effect on the VFS). .IP .nf \f[C] @@ -6958,15 +7409,15 @@ Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. .PP -The \f[C]--vfs-case-insensitive\f[R] mount flag controls how rclone +The \f[C]--vfs-case-insensitive\f[R] VFS flag controls how rclone handles these two cases. -If its value is \[dq]false\[dq], rclone passes file names to the mounted -file system as-is. -If the flag is \[dq]true\[dq] (or appears without a value on command +If its value is \[dq]false\[dq], rclone passes file names to the remote +as-is. +If the flag is \[dq]true\[dq] (or appears without a value on the command line), rclone may perform a \[dq]fixup\[dq] as explained below. .PP The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. +different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. 
However, if a file name with exactly the same name is not found but a @@ -6974,10 +7425,10 @@ name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by -an underlying mounted file system. +the underlying remote. .PP Note that case sensitivity of the operating system running rclone (the -target) may differ from case sensitivity of a file system mounted by +target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether \[dq]fixup\[dq] is performed to satisfy the target. @@ -6986,6 +7437,18 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: \[dq]true\[dq] on Windows and macOS, \[dq]false\[dq] otherwise. If the flag is provided without a value, then it is \[dq]true\[dq]. +.SS VFS Disk Options +.PP +This flag allows you to manually set the statistics about the filing +system. +It can be useful when those statistics cannot be read correctly +automatically. +.IP +.nf +\f[C] +--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +\f[R] +.fi .SS Alternate report of used bytes .PP Some backends, most notably S3, do not report the amount of bytes used. 
@@ -7120,7 +7583,7 @@ rclone serve ftp remote:path [flags] --passive-port string Passive port range to use (default \[dq]30000-32000\[dq]) --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) --public-ip string Public IP address to advertise for passive connections - --read-only Mount read-only + --read-only Only allow read-only access --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --user string User name for authentication (default \[dq]anonymous\[dq]) @@ -7129,6 +7592,8 @@ rclone serve ftp remote:path [flags] --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off) @@ -7150,60 +7615,63 @@ remote over a protocol. Serve the remote over HTTP. .SS Synopsis .PP -rclone serve http implements a basic web server to serve the remote over -HTTP. +Run a basic web server to serve a remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it. .PP You can use the filter flags (e.g. ---include, --exclude) to control what is served. 
+\f[C]--include\f[R], \f[C]--exclude\f[R]) to control what is served. .PP The server will log errors. -Use -v to see access logs. +Use \f[C]-v\f[R] to see access logs. .PP ---bwlimit will be respected for file transfers. -Use --stats to control the stats printing. +\f[C]--bwlimit\f[R] will be respected for file transfers. +Use \f[C]--stats\f[R] to control the stats printing. .SS Server options .PP -Use --addr to specify which IP address and port the server should listen -on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. +Use \f[C]--addr\f[R] to specify which IP address and port the server +should listen on, e.g. \f[C]--addr 1.2.3.4:8000\f[R] or +\f[C]--addr :8080\f[R] to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port. .PP -If you set --addr to listen on a public or LAN accessible IP address -then using Authentication is advised - see the next section for info. +If you set \f[C]--addr\f[R] to listen on a public or LAN accessible IP +address then using Authentication is advised - see the next section for +info. .PP ---server-read-timeout and --server-write-timeout can be used to control -the timeouts on the server. +\f[C]--server-read-timeout\f[R] and \f[C]--server-write-timeout\f[R] can +be used to control the timeouts on the server. Note that this is the total time for a transfer. .PP ---max-header-bytes controls the maximum number of bytes the server will -accept in the HTTP header. +\f[C]--max-header-bytes\f[R] controls the maximum number of bytes the +server will accept in the HTTP header. .PP ---baseurl controls the URL prefix that rclone serves from. +\f[C]--baseurl\f[R] controls the URL prefix that rclone serves from. By default rclone will serve from the root. -If you used --baseurl \[dq]/rclone\[dq] then rclone would serve from a -URL starting with \[dq]/rclone/\[dq].
+If you used \f[C]--baseurl \[dq]/rclone\[dq]\f[R] then rclone would +serve from a URL starting with \[dq]/rclone/\[dq]. This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing \[dq]/\[dq] on ---baseurl, so --baseurl \[dq]rclone\[dq], --baseurl \[dq]/rclone\[dq] -and --baseurl \[dq]/rclone/\[dq] are all treated identically. +\f[C]--baseurl\f[R], so \f[C]--baseurl \[dq]rclone\[dq]\f[R], +\f[C]--baseurl \[dq]/rclone\[dq]\f[R] and +\f[C]--baseurl \[dq]/rclone/\[dq]\f[R] are all treated identically. .SS SSL/TLS .PP By default this will serve over http. If you want you can serve over https. -You will need to supply the --cert and --key flags. +You will need to supply the \f[C]--cert\f[R] and \f[C]--key\f[R] flags. If you wish to do client side certificate validation then you will need -to supply --client-ca also. +to supply \f[C]--client-ca\f[R] also. .PP ---cert should be a either a PEM encoded certificate or a concatenation -of that with the CA certificate. ---key should be the PEM encoded private key and --client-ca should be -the PEM encoded client certificate authority certificate. +\f[C]--cert\f[R] should be either a PEM encoded certificate or a +concatenation of that with the CA certificate. +\f[C]--key\f[R] should be the PEM encoded private key and +\f[C]--client-ca\f[R] should be the PEM encoded client certificate +authority certificate. .SS Template .PP ---template allows a user to specify a custom markup template for http -and webdav serve functions. +\f[C]--template\f[R] allows a user to specify a custom markup template +for HTTP and WebDAV serve functions. The server exports the following markup to be used within the template to server pages: .PP @@ -7303,9 +7771,10 @@ T} By default this will serve files without needing a login. .PP You can either use an htpasswd file which can take lots of users, or set -a single username and password with the --user and --pass flags.
+a single username and password with the \f[C]--user\f[R] and +\f[C]--pass\f[R] flags. .PP -Use --htpasswd /path/to/htpasswd to provide an htpasswd file. +Use \f[C]--htpasswd /path/to/htpasswd\f[R] to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended. @@ -7322,9 +7791,10 @@ htpasswd -B htpasswd anotherUser .PP The password file can be updated while rclone is running. .PP -Use --realm to set the authentication realm. +Use \f[C]--realm\f[R] to set the authentication realm. .PP -Use --salt to change the password hashing salt from the default. +Use \f[C]--salt\f[R] to change the password hashing salt from the +default. .SS VFS - Virtual File System .PP This command uses the VFS layer. @@ -7344,7 +7814,7 @@ files and directories (but not the data) in memory. Using the \f[C]--dir-cache-time\f[R] flag, you can control how long a directory should be considered up to date and not refreshed from the backend. -Changes made through the mount will appear immediately or invalidate the +Changes made through the VFS will appear immediately or invalidate the cache. .IP .nf @@ -7535,6 +8005,40 @@ In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn\[aq]t support sparse files and it will log an ERROR message if one is detected. +.SS Fingerprinting +.PP +Various parts of the VFS use fingerprinting to see if a local file copy +has changed relative to a remote file. +Fingerprints are made from: +.IP \[bu] 2 +size +.IP \[bu] 2 +modification time +.IP \[bu] 2 +hash +.PP +where available on an object. +.PP +On some backends some of these attributes are slow to read (they take an +extra API call per object, or extra work per object). 
+.PP +For example \f[C]hash\f[R] is slow with the \f[C]local\f[R] and +\f[C]sftp\f[R] backends as they have to read the entire file and hash +it, and \f[C]modtime\f[R] is slow with the \f[C]s3\f[R], +\f[C]swift\f[R], \f[C]ftp\f[R] and \f[C]qingstor\f[R] backends because +they need to do an extra API call to fetch it. +.PP +If you use the \f[C]--vfs-fast-fingerprint\f[R] flag then rclone will +not include the slow operations in the fingerprint. +This makes the fingerprinting less accurate but much faster and will +improve the opening time of cached files. +.PP +If you are running a vfs cache over \f[C]local\f[R], \f[C]s3\f[R] or +\f[C]swift\f[R] backends then using this flag is recommended. +.PP +Note that if you change the value of this flag, the fingerprints of the +files in the cache may be invalidated and the files will need to be +downloaded again. .SS VFS Chunked Reading .PP When rclone reads files from a remote it reads them in chunks. @@ -7586,7 +8090,7 @@ transaction. --no-checksum Don\[aq]t compare checksums on up/download. --no-modtime Don\[aq]t read/write the modification time (can speed things up). --no-seek Don\[aq]t allow seeking in files. ---read-only Mount read-only. +--read-only Only allow read-only access. \f[R] .fi .PP @@ -7604,8 +8108,8 @@ These flags only come into effect when not using an on disk cache file. .PP When using VFS write caching (\f[C]--vfs-cache-mode\f[R] with value writes or full), the global flag \f[C]--transfers\f[R] can be set to -adjust the number of parallel uploads of modified files from cache (the -related global flag \f[C]--checkers\f[R] have no effect on mount). +adjust the number of parallel uploads of modified files from the cache +(the related global flag \f[C]--checkers\f[R] has no effect on the VFS). .IP .nf \f[C] @@ -7628,15 +8132,15 @@ Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.
.PP -The \f[C]--vfs-case-insensitive\f[R] mount flag controls how rclone +The \f[C]--vfs-case-insensitive\f[R] VFS flag controls how rclone handles these two cases. -If its value is \[dq]false\[dq], rclone passes file names to the mounted -file system as-is. -If the flag is \[dq]true\[dq] (or appears without a value on command +If its value is \[dq]false\[dq], rclone passes file names to the remote +as-is. +If the flag is \[dq]true\[dq] (or appears without a value on the command line), rclone may perform a \[dq]fixup\[dq] as explained below. .PP The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. +different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a @@ -7644,10 +8148,10 @@ name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by -an underlying mounted file system. +the underlying remote. .PP Note that case sensitivity of the operating system running rclone (the -target) may differ from case sensitivity of a file system mounted by +target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether \[dq]fixup\[dq] is performed to satisfy the target. @@ -7656,6 +8160,18 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: \[dq]true\[dq] on Windows and macOS, \[dq]false\[dq] otherwise. If the flag is provided without a value, then it is \[dq]true\[dq]. +.SS VFS Disk Options +.PP +This flag allows you to manually set the statistics about the filing +system. +It can be useful when those statistics cannot be read correctly +automatically. 
+.IP
+.nf
+\f[C]
+--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
+\f[R]
+.fi
.SS Alternate report of used bytes
.PP
Some backends, most notably S3, do not report the amount of bytes used.
@@ -7698,7 +8214,7 @@ rclone serve http remote:path [flags]
--no-seek Don\[aq]t allow seeking in files
--pass string Password for authentication
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
- --read-only Mount read-only
+ --read-only Only allow read-only access
--realm string Realm for authentication
--salt string Password hashing salt (default \[dq]dlPL2MqE\[dq])
--server-read-timeout duration Timeout for server reading data (default 1h0m0s)
@@ -7712,6 +8228,8 @@ rclone serve http remote:path [flags]
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match
+ --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
+ --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
@@ -7733,7 +8251,8 @@ remote over a protocol.
Serve the remote for restic\[aq]s REST API.
.SS Synopsis
.PP
-rclone serve restic implements restic\[aq]s REST backend API over HTTP.
+Run a basic web server to serve a remote over restic\[aq]s REST backend
+API over HTTP.
This allows restic to use rclone as a data storage mechanism for cloud
providers that restic does not support directly.
.PP
@@ -7743,8 +8262,8 @@ backups.
The server will log errors.
Use -v to see access logs.
.PP
---bwlimit will be respected for file transfers.
-Use --stats to control the stats printing.
+\f[C]--bwlimit\f[R] will be respected for file transfers.
+Use \f[C]--stats\f[R] to control the stats printing.
.SS Setting up rclone for use by restic
.PP
First set up a remote for your chosen cloud
@@ -7767,12 +8286,12 @@ Where you can replace \[dq]backup\[dq] in the above by whatever path in
the remote you wish to use.
.PP
By default this will serve on \[dq]localhost:8080\[dq] you can change
-this with use of the \[dq]--addr\[dq] flag.
+this with use of the \f[C]--addr\f[R] flag.
.PP
You might wish to start this server on boot.
.PP
-Adding --cache-objects=false will cause rclone to stop caching objects
-returned from the List call.
+Adding \f[C]--cache-objects=false\f[R] will cause rclone to stop caching
+objects returned from the List call.
Caching is normally desirable as it speeds up downloading objects, saves
transactions and uses very little memory.
.SS Setting up restic to use rclone
@@ -7824,37 +8343,40 @@ $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
.fi
.SS Private repositories
.PP
-The \[dq]--private-repos\[dq] flag can be used to limit users to
+The \f[C]--private-repos\f[R] flag can be used to limit users to
repositories starting with a path of \f[C]//\f[R].
.SS Server options
.PP
-Use --addr to specify which IP address and port the server should listen
-on, e.g.
---addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs.
+Use \f[C]--addr\f[R] to specify which IP address and port the server
+should listen on, e.g.
+\f[C]--addr 1.2.3.4:8000\f[R] or \f[C]--addr :8080\f[R] to listen to all
+IPs.
By default it only listens on localhost.
You can use port :0 to let the OS choose an available port.
.PP -If you set --addr to listen on a public or LAN accessible IP address -then using Authentication is advised - see the next section for info. +If you set \f[C]--addr\f[R] to listen on a public or LAN accessible IP +address then using Authentication is advised - see the next section for +info. .PP ---server-read-timeout and --server-write-timeout can be used to control -the timeouts on the server. +\f[C]--server-read-timeout\f[R] and \f[C]--server-write-timeout\f[R] can +be used to control the timeouts on the server. Note that this is the total time for a transfer. .PP ---max-header-bytes controls the maximum number of bytes the server will -accept in the HTTP header. +\f[C]--max-header-bytes\f[R] controls the maximum number of bytes the +server will accept in the HTTP header. .PP ---baseurl controls the URL prefix that rclone serves from. +\f[C]--baseurl\f[R] controls the URL prefix that rclone serves from. By default rclone will serve from the root. -If you used --baseurl \[dq]/rclone\[dq] then rclone would serve from a -URL starting with \[dq]/rclone/\[dq]. +If you used \f[C]--baseurl \[dq]/rclone\[dq]\f[R] then rclone would +serve from a URL starting with \[dq]/rclone/\[dq]. This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing \[dq]/\[dq] on ---baseurl, so --baseurl \[dq]rclone\[dq], --baseurl \[dq]/rclone\[dq] -and --baseurl \[dq]/rclone/\[dq] are all treated identically. +\f[C]--baseurl\f[R], so \f[C]--baseurl \[dq]rclone\[dq]\f[R], +\f[C]--baseurl \[dq]/rclone\[dq]\f[R] and +\f[C]--baseurl \[dq]/rclone/\[dq]\f[R] are all treated identically. .PP ---template allows a user to specify a custom markup template for http -and webdav serve functions. +\f[C]--template\f[R] allows a user to specify a custom markup template +for HTTP and WebDAV serve functions. 
The server exports the following markup to be used within the template
to serve pages:
.PP
@@ -7954,9 +8476,10 @@ T}
By default this will serve files without needing a login.
.PP
You can either use an htpasswd file which can take lots of users, or set
-a single username and password with the --user and --pass flags.
+a single username and password with the \f[C]--user\f[R] and
+\f[C]--pass\f[R] flags.
.PP
-Use --htpasswd /path/to/htpasswd to provide an htpasswd file.
+Use \f[C]--htpasswd /path/to/htpasswd\f[R] to provide an htpasswd file.
This is in standard apache format and supports MD5, SHA1 and BCrypt for
basic authentication.
Bcrypt is recommended.
@@ -7973,19 +8496,20 @@ htpasswd -B htpasswd anotherUser
.PP
The password file can be updated while rclone is running.
.PP
-Use --realm to set the authentication realm.
+Use \f[C]--realm\f[R] to set the authentication realm.
.SS SSL/TLS
.PP
-By default this will serve over http.
-If you want you can serve over https.
-You will need to supply the --cert and --key flags.
+By default this will serve over HTTP.
+If you want you can serve over HTTPS.
+You will need to supply the \f[C]--cert\f[R] and \f[C]--key\f[R] flags.
If you wish to do client side certificate validation then you will need
-to supply --client-ca also.
+to supply \f[C]--client-ca\f[R] also.
.PP
---cert should be either a PEM encoded certificate or a concatenation of
-that with the CA certificate.
---key should be the PEM encoded private key and --client-ca should be
-the PEM encoded client certificate authority certificate.
+\f[C]--cert\f[R] should be either a PEM encoded certificate or a
+concatenation of that with the CA certificate.
+\f[C]--key\f[R] should be the PEM encoded private key and
+\f[C]--client-ca\f[R] should be the PEM encoded client certificate
+authority certificate.
.IP
.nf
\f[C]
@@ -8028,24 +8552,24 @@ remote over a protocol.
Serve the remote over SFTP.
.SS Synopsis
.PP
-rclone serve sftp implements an SFTP server to serve the remote over
-SFTP.
+Run an SFTP server to serve a remote over SFTP.
This can be used with an SFTP client or you can make a remote of type
sftp to use with it.
.PP
You can use the filter flags (e.g.
---include, --exclude) to control what is served.
+\f[C]--include\f[R], \f[C]--exclude\f[R]) to control what is served.
.PP
The server will log errors.
-Use -v to see access logs.
+Use \f[C]-v\f[R] to see access logs.
.PP
---bwlimit will be respected for file transfers.
-Use --stats to control the stats printing.
+\f[C]--bwlimit\f[R] will be respected for file transfers.
+Use \f[C]--stats\f[R] to control the stats printing.
.PP
You must provide some means of authentication, either with
---user/--pass, an authorized keys file (specify location with
---authorized-keys - the default is the same as ssh), an --auth-proxy, or
-set the --no-auth flag for no authentication when logging in.
+\f[C]--user\f[R]/\f[C]--pass\f[R], an authorized keys file (specify
+location with \f[C]--authorized-keys\f[R] - the default is the same as
+ssh), an \f[C]--auth-proxy\f[R], or set the \f[C]--no-auth\f[R] flag for
+no authentication when logging in.
.PP
Note that this also implements a small number of shell commands so that
it can provide md5sum/sha1sum/df information for the rclone sftp
@@ -8053,19 +8577,19 @@ backend.
This means that it can support SHA1SUMs, MD5SUMs and the about command
when paired with the rclone sftp backend.
.PP
-If you don\[aq]t supply a host --key then rclone will generate rsa,
-ecdsa and ed25519 variants, and cache them for later use in rclone\[aq]s
-cache directory (see \[dq]rclone help flags cache-dir\[dq]) in the
-\[dq]serve-sftp\[dq] directory.
+If you don\[aq]t supply a host \f[C]--key\f[R] then rclone will generate
+rsa, ecdsa and ed25519 variants, and cache them for later use in
+rclone\[aq]s cache directory (see \f[C]rclone help flags cache-dir\f[R])
+in the \[dq]serve-sftp\[dq] directory.
.PP
By default the server binds to localhost:2022 - if you want it to be
-reachable externally then supply \[dq]--addr :2022\[dq] for example.
+reachable externally then supply \f[C]--addr :2022\f[R] for example.
.PP
-Note that the default of \[dq]--vfs-cache-mode off\[dq] is fine for the
+Note that the default of \f[C]--vfs-cache-mode off\f[R] is fine for the
rclone sftp backend, but it may not be with other SFTP clients.
.PP
-If --stdio is specified, rclone will serve SFTP over stdio, which can be
-used with sshd via \[ti]/.ssh/authorized_keys, for example:
+If \f[C]--stdio\f[R] is specified, rclone will serve SFTP over stdio,
+which can be used with sshd via \[ti]/.ssh/authorized_keys, for example:
.IP
.nf
\f[C]
@@ -8073,8 +8597,8 @@ restrict,command=\[dq]rclone serve sftp --stdio ./photos\[dq] ssh-rsa ...
\f[R]
.fi
.PP
-On the client you need to set \[dq]--transfers 1\[dq] when using
---stdio.
+On the client you need to set \f[C]--transfers 1\f[R] when using
+\f[C]--stdio\f[R].
Otherwise multiple instances of the rclone server are started by OpenSSH
which can lead to \[dq]corrupted on transfer\[dq] errors.
This is the case because the client chooses indiscriminately which
@@ -8083,9 +8607,9 @@ the state of the filing system.
.PP
The \[dq]restrict\[dq] in authorized_keys prevents SHA1SUMs and MD5SUMs
from being used.
-Omitting \[dq]restrict\[dq] and using --sftp-path-override to enable
-checksumming is possible but less secure and you could use the SFTP
-server provided by OpenSSH in this case.
+Omitting \[dq]restrict\[dq] and using \f[C]--sftp-path-override\f[R] to
+enable checksumming is possible but less secure and you could use the
+SFTP server provided by OpenSSH in this case.
.SS VFS - Virtual File System
.PP
This command uses the VFS layer.
@@ -8105,7 +8629,7 @@ files and directories (but not the data) in memory.
Using the \f[C]--dir-cache-time\f[R] flag, you can control how long a
directory should be considered up to date and not refreshed from the
backend.
-Changes made through the mount will appear immediately or invalidate the
+Changes made through the VFS will appear immediately or invalidate the
cache.
.IP
.nf
@@ -8296,6 +8820,40 @@ In particular FAT/exFAT do not.
Rclone will perform very badly if the cache directory is on a filesystem
which doesn\[aq]t support sparse files and it will log an ERROR message
if one is detected.
+.SS Fingerprinting
+.PP
+Various parts of the VFS use fingerprinting to see if a local file copy
+has changed relative to a remote file.
+Fingerprints are made from:
+.IP \[bu] 2
+size
+.IP \[bu] 2
+modification time
+.IP \[bu] 2
+hash
+.PP
+where available on an object.
+.PP
+On some backends some of these attributes are slow to read (they take an
+extra API call per object, or extra work per object).
+.PP
+For example \f[C]hash\f[R] is slow with the \f[C]local\f[R] and
+\f[C]sftp\f[R] backends as they have to read the entire file and hash
+it, and \f[C]modtime\f[R] is slow with the \f[C]s3\f[R],
+\f[C]swift\f[R], \f[C]ftp\f[R] and \f[C]qingstor\f[R] backends because
+they need to do an extra API call to fetch it.
+.PP
+If you use the \f[C]--vfs-fast-fingerprint\f[R] flag then rclone will
+not include the slow operations in the fingerprint.
+This makes the fingerprinting less accurate but much faster and will
+improve the opening time of cached files.
+.PP
+If you are running a vfs cache over \f[C]local\f[R], \f[C]s3\f[R] or
+\f[C]swift\f[R] backends then using this flag is recommended.
+.PP
+Note that if you change the value of this flag, the fingerprints of the
+files in the cache may be invalidated and the files will need to be
+downloaded again.
.SS VFS Chunked Reading .PP When rclone reads files from a remote it reads them in chunks. @@ -8347,7 +8905,7 @@ transaction. --no-checksum Don\[aq]t compare checksums on up/download. --no-modtime Don\[aq]t read/write the modification time (can speed things up). --no-seek Don\[aq]t allow seeking in files. ---read-only Mount read-only. +--read-only Only allow read-only access. \f[R] .fi .PP @@ -8365,8 +8923,8 @@ These flags only come into effect when not using an on disk cache file. .PP When using VFS write caching (\f[C]--vfs-cache-mode\f[R] with value writes or full), the global flag \f[C]--transfers\f[R] can be set to -adjust the number of parallel uploads of modified files from cache (the -related global flag \f[C]--checkers\f[R] have no effect on mount). +adjust the number of parallel uploads of modified files from the cache +(the related global flag \f[C]--checkers\f[R] has no effect on the VFS). .IP .nf \f[C] @@ -8389,15 +8947,15 @@ Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. .PP -The \f[C]--vfs-case-insensitive\f[R] mount flag controls how rclone +The \f[C]--vfs-case-insensitive\f[R] VFS flag controls how rclone handles these two cases. -If its value is \[dq]false\[dq], rclone passes file names to the mounted -file system as-is. -If the flag is \[dq]true\[dq] (or appears without a value on command +If its value is \[dq]false\[dq], rclone passes file names to the remote +as-is. +If the flag is \[dq]true\[dq] (or appears without a value on the command line), rclone may perform a \[dq]fixup\[dq] as explained below. .PP The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. +different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. 
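The chunked reading behaviour described in the VFS Chunked Reading section above (a first chunk of `--vfs-read-chunk-size`, doubling after each chunk until `--vfs-read-chunk-size-limit` is reached) can be sketched as follows; this is an illustrative model of the documented scheme, not rclone's code:

```python
def chunk_sizes(file_size, chunk_size=128 * 1024**2, limit=None):
    """Yield the sizes of successive read chunks.

    Sketch of the documented scheme: each chunk doubles the previous
    one until the limit is reached; limit=None models 'off'
    (unlimited doubling). The last chunk is trimmed to the file size.
    """
    offset = 0
    current = chunk_size
    while offset < file_size:
        size = min(current, file_size - offset)
        yield size
        offset += size
        doubled = current * 2
        current = doubled if limit is None else min(doubled, limit)
```

For example, a 10-byte file read with a 1-byte initial chunk and a 4-byte limit is fetched in chunks of 1, 2, 4 and 3 bytes.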
However, if a file name with exactly the same name is not found but a @@ -8405,10 +8963,10 @@ name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by -an underlying mounted file system. +the underlying remote. .PP Note that case sensitivity of the operating system running rclone (the -target) may differ from case sensitivity of a file system mounted by +target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether \[dq]fixup\[dq] is performed to satisfy the target. @@ -8417,6 +8975,18 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: \[dq]true\[dq] on Windows and macOS, \[dq]false\[dq] otherwise. If the flag is provided without a value, then it is \[dq]true\[dq]. +.SS VFS Disk Options +.PP +This flag allows you to manually set the statistics about the filing +system. +It can be useful when those statistics cannot be read correctly +automatically. +.IP +.nf +\f[C] +--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +\f[R] +.fi .SS Alternate report of used bytes .PP Some backends, most notably S3, do not report the amount of bytes used. 
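The case-insensitivity "fixup" described above follows a simple rule: an exact name match is used as-is, otherwise a name differing only by case is substituted, and a genuinely new name is left untouched. A Python sketch of that rule (not rclone's actual code):

```python
def fixup_name(requested, existing):
    """Resolve a requested file name against names that already exist.

    Sketch of the documented "fixup": an exact match wins; otherwise
    a name differing only by case is substituted; if nothing matches,
    the requested name stands (e.g. a new file is created with
    exactly that name).
    """
    if requested in existing:
        return requested
    lowered = {name.lower(): name for name in existing}
    return lowered.get(requested.lower(), requested)
```

This mirrors why the fixup only ever applies to existing files: a brand-new name falls through both lookups unchanged.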
@@ -8550,7 +9120,7 @@ rclone serve sftp remote:path [flags] --no-seek Don\[aq]t allow seeking in files --pass string Password for authentication --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) - --read-only Mount read-only + --read-only Only allow read-only access --stdio Run an sftp server on run stdin/stdout --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) @@ -8560,6 +9130,8 @@ rclone serve sftp remote:path [flags] --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off) @@ -8578,14 +9150,14 @@ rclone serve (https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. .SH rclone serve webdav .PP -Serve remote:path over webdav. +Serve remote:path over WebDAV. .SS Synopsis .PP -rclone serve webdav implements a basic webdav server to serve the remote -over HTTP via the webdav protocol. -This can be viewed with a webdav client, through a web browser, or you -can make a remote of type webdav to read and write it. 
-.SS Webdav options +Run a basic WebDAV server to serve a remote over HTTP via the WebDAV +protocol. +This can be viewed with a WebDAV client, through a web browser, or you +can make a remote of type WebDAV to read and write it. +.SS WebDAV options .SS --etag-hash .PP This controls the ETag header. @@ -8595,37 +9167,40 @@ object. If this flag is set to \[dq]auto\[dq] then rclone will choose the first supported hash on the backend or you can use a named hash such as \[dq]MD5\[dq] or \[dq]SHA-1\[dq]. -.PP -Use \[dq]rclone hashsum\[dq] to see the full list. +Use the hashsum (https://rclone.org/commands/rclone_hashsum/) command to +see the full list. .SS Server options .PP -Use --addr to specify which IP address and port the server should listen -on, e.g. ---addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. +Use \f[C]--addr\f[R] to specify which IP address and port the server +should listen on, e.g. +\f[C]--addr 1.2.3.4:8000\f[R] or \f[C]--addr :8080\f[R] to listen to all +IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port. .PP -If you set --addr to listen on a public or LAN accessible IP address -then using Authentication is advised - see the next section for info. +If you set \f[C]--addr\f[R] to listen on a public or LAN accessible IP +address then using Authentication is advised - see the next section for +info. .PP ---server-read-timeout and --server-write-timeout can be used to control -the timeouts on the server. +\f[C]--server-read-timeout\f[R] and \f[C]--server-write-timeout\f[R] can +be used to control the timeouts on the server. Note that this is the total time for a transfer. .PP ---max-header-bytes controls the maximum number of bytes the server will -accept in the HTTP header. +\f[C]--max-header-bytes\f[R] controls the maximum number of bytes the +server will accept in the HTTP header. .PP ---baseurl controls the URL prefix that rclone serves from. 
+\f[C]--baseurl\f[R] controls the URL prefix that rclone serves from.
By default rclone will serve from the root.
-If you used --baseurl \[dq]/rclone\[dq] then rclone would serve from a
-URL starting with \[dq]/rclone/\[dq].
+If you used \f[C]--baseurl \[dq]/rclone\[dq]\f[R] then rclone would
+serve from a URL starting with \[dq]/rclone/\[dq].
This is useful if you wish to proxy rclone serve.
Rclone automatically inserts leading and trailing \[dq]/\[dq] on
---baseurl, so --baseurl \[dq]rclone\[dq], --baseurl \[dq]/rclone\[dq]
-and --baseurl \[dq]/rclone/\[dq] are all treated identically.
+\f[C]--baseurl\f[R], so \f[C]--baseurl \[dq]rclone\[dq]\f[R],
+\f[C]--baseurl \[dq]/rclone\[dq]\f[R] and
+\f[C]--baseurl \[dq]/rclone/\[dq]\f[R] are all treated identically.
.PP
---template allows a user to specify a custom markup template for http
-and webdav serve functions.
+\f[C]--template\f[R] allows a user to specify a custom markup template
+for HTTP and WebDAV serve functions.
The server exports the following markup to be used within the template
to serve pages:
.PP
@@ -8725,9 +9300,10 @@ T}
By default this will serve files without needing a login.
.PP
You can either use an htpasswd file which can take lots of users, or set
-a single username and password with the --user and --pass flags.
+a single username and password with the \f[C]--user\f[R] and
+\f[C]--pass\f[R] flags.
.PP
-Use --htpasswd /path/to/htpasswd to provide an htpasswd file.
+Use \f[C]--htpasswd /path/to/htpasswd\f[R] to provide an htpasswd file.
This is in standard apache format and supports MD5, SHA1 and BCrypt for
basic authentication.
Bcrypt is recommended.
@@ -8744,19 +9320,20 @@ htpasswd -B htpasswd anotherUser
.PP
The password file can be updated while rclone is running.
.PP
-Use --realm to set the authentication realm.
+Use \f[C]--realm\f[R] to set the authentication realm.
.SS SSL/TLS
.PP
-By default this will serve over http.
-If you want you can serve over https.
-You will need to supply the --cert and --key flags. +By default this will serve over HTTP. +If you want you can serve over HTTPS. +You will need to supply the \f[C]--cert\f[R] and \f[C]--key\f[R] flags. If you wish to do client side certificate validation then you will need -to supply --client-ca also. +to supply \f[C]--client-ca\f[R] also. .PP ---cert should be either a PEM encoded certificate or a concatenation of -that with the CA certificate. ---key should be the PEM encoded private key and --client-ca should be -the PEM encoded client certificate authority certificate. +\f[C]--cert\f[R] should be either a PEM encoded certificate or a +concatenation of that with the CA certificate. +\f[C]--key\f[R] should be the PEM encoded private key and +\f[C]--client-ca\f[R] should be the PEM encoded client certificate +authority certificate. .SS VFS - Virtual File System .PP This command uses the VFS layer. @@ -8776,7 +9353,7 @@ files and directories (but not the data) in memory. Using the \f[C]--dir-cache-time\f[R] flag, you can control how long a directory should be considered up to date and not refreshed from the backend. -Changes made through the mount will appear immediately or invalidate the +Changes made through the VFS will appear immediately or invalidate the cache. .IP .nf @@ -8967,6 +9544,40 @@ In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn\[aq]t support sparse files and it will log an ERROR message if one is detected. +.SS Fingerprinting +.PP +Various parts of the VFS use fingerprinting to see if a local file copy +has changed relative to a remote file. +Fingerprints are made from: +.IP \[bu] 2 +size +.IP \[bu] 2 +modification time +.IP \[bu] 2 +hash +.PP +where available on an object. +.PP +On some backends some of these attributes are slow to read (they take an +extra API call per object, or extra work per object). 
+.PP
+For example \f[C]hash\f[R] is slow with the \f[C]local\f[R] and
+\f[C]sftp\f[R] backends as they have to read the entire file and hash
+it, and \f[C]modtime\f[R] is slow with the \f[C]s3\f[R],
+\f[C]swift\f[R], \f[C]ftp\f[R] and \f[C]qingstor\f[R] backends because
+they need to do an extra API call to fetch it.
+.PP
+If you use the \f[C]--vfs-fast-fingerprint\f[R] flag then rclone will
+not include the slow operations in the fingerprint.
+This makes the fingerprinting less accurate but much faster and will
+improve the opening time of cached files.
+.PP
+If you are running a vfs cache over \f[C]local\f[R], \f[C]s3\f[R] or
+\f[C]swift\f[R] backends then using this flag is recommended.
+.PP
+Note that if you change the value of this flag, the fingerprints of the
+files in the cache may be invalidated and the files will need to be
+downloaded again.
.SS VFS Chunked Reading
.PP
When rclone reads files from a remote it reads them in chunks.
@@ -9018,7 +9629,7 @@ transaction.
--no-checksum Don\[aq]t compare checksums on up/download.
--no-modtime Don\[aq]t read/write the modification time (can speed things up).
--no-seek Don\[aq]t allow seeking in files.
---read-only Mount read-only.
+--read-only Only allow read-only access.
\f[R]
.fi
.PP
@@ -9036,8 +9647,8 @@ These flags only come into effect when not using an on disk cache file.
.PP
When using VFS write caching (\f[C]--vfs-cache-mode\f[R] with value
writes or full), the global flag \f[C]--transfers\f[R] can be set to
-adjust the number of parallel uploads of modified files from cache (the
-related global flag \f[C]--checkers\f[R] have no effect on mount).
+adjust the number of parallel uploads of modified files from the cache
+(the related global flag \f[C]--checkers\f[R] has no effect on the VFS).
.IP
.nf
\f[C]
@@ -9060,15 +9671,15 @@ Usually file systems on macOS are case-insensitive.
It is possible to make macOS file systems case-sensitive but that is not
the default.
.PP -The \f[C]--vfs-case-insensitive\f[R] mount flag controls how rclone +The \f[C]--vfs-case-insensitive\f[R] VFS flag controls how rclone handles these two cases. -If its value is \[dq]false\[dq], rclone passes file names to the mounted -file system as-is. -If the flag is \[dq]true\[dq] (or appears without a value on command +If its value is \[dq]false\[dq], rclone passes file names to the remote +as-is. +If the flag is \[dq]true\[dq] (or appears without a value on the command line), rclone may perform a \[dq]fixup\[dq] as explained below. .PP The user may specify a file name to open/delete/rename/etc with a case -different than what is stored on mounted file system. +different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a @@ -9076,10 +9687,10 @@ name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by -an underlying mounted file system. +the underlying remote. .PP Note that case sensitivity of the operating system running rclone (the -target) may differ from case sensitivity of a file system mounted by +target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether \[dq]fixup\[dq] is performed to satisfy the target. @@ -9088,6 +9699,18 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: \[dq]true\[dq] on Windows and macOS, \[dq]false\[dq] otherwise. If the flag is provided without a value, then it is \[dq]true\[dq]. +.SS VFS Disk Options +.PP +This flag allows you to manually set the statistics about the filing +system. +It can be useful when those statistics cannot be read correctly +automatically. 
+.IP +.nf +\f[C] +--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +\f[R] +.fi .SS Alternate report of used bytes .PP Some backends, most notably S3, do not report the amount of bytes used. @@ -9226,7 +9849,7 @@ rclone serve webdav remote:path [flags] --no-seek Don\[aq]t allow seeking in files --pass string Password for authentication --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) - --read-only Mount read-only + --read-only Only allow read-only access --realm string Realm for authentication (default \[dq]rclone\[dq]) --server-read-timeout duration Timeout for server reading data (default 1h0m0s) --server-write-timeout duration Timeout for server writing data (default 1h0m0s) @@ -9239,6 +9862,8 @@ rclone serve webdav remote:path [flags] --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off) @@ -9361,6 +9986,9 @@ histogram of file name characters. rclone test info (https://rclone.org/commands/rclone_test_info/) - Discovers file name or other limitations for paths. 
.IP \[bu] 2
+rclone test makefile (https://rclone.org/commands/rclone_test_makefile/)
+- Make files with random contents of the size given
+.IP \[bu] 2
rclone test
makefiles (https://rclone.org/commands/rclone_test_makefiles/) - Make a
random file hierarchy in a directory
@@ -9461,6 +10089,35 @@ not listed here.
.IP \[bu] 2
rclone test (https://rclone.org/commands/rclone_test/) - Run a test
command
+.SH rclone test makefile
+.PP
+Make files with random contents of the size given
+.IP
+.nf
+\f[C]
+rclone test makefile <size> [<file>]+ [flags]
+\f[R]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+      --ascii      Fill files with random ASCII printable bytes only
+      --chargen    Fill files with a ASCII chargen pattern
+  -h, --help       help for makefile
+      --pattern    Fill files with a periodic pattern
+      --seed int   Seed for the random number generator (0 for random) (default 1)
+      --sparse     Make the files sparse (appear to be filled with ASCII 0x00)
+      --zero       Fill files with ASCII 0x00
+\f[R]
+.fi
+.PP
+See the global flags page (https://rclone.org/flags/) for global options
+not listed here.
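The effect of the `--seed` flag can be illustrated with a short Python sketch. This is a hypothetical illustration of deterministic seeded generation, not rclone's actual implementation; the function name and signature are invented for the example:

```python
import random

def make_random_file(path, size, seed=1):
    # Seeding the generator makes the "random" contents reproducible,
    # which is how a fixed --seed lets test runs be repeated exactly.
    rng = random.Random(seed)
    # Printable ASCII only, roughly what --ascii asks for.
    data = bytes(rng.randrange(32, 127) for _ in range(size))
    with open(path, "wb") as f:
        f.write(data)
    return data

# Two runs with the same seed produce identical contents.
a = make_random_file("f1.bin", 64, seed=1)
b = make_random_file("f2.bin", 64, seed=1)
assert a == b
```

With `--seed 0` rclone instead picks a random seed, so repeated runs produce different contents.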
+.SS SEE ALSO +.IP \[bu] 2 +rclone test (https://rclone.org/commands/rclone_test/) - Run a test +command .SH rclone test makefiles .PP Make a random file hierarchy in a directory @@ -9474,6 +10131,8 @@ rclone test makefiles [flags] .IP .nf \f[C] + --ascii Fill files with random ASCII printable bytes only + --chargen Fill files with a ASCII chargen pattern --files int Number of files to create (default 1000) --files-per-directory int Average number of files per directory (default 10) -h, --help help for makefiles @@ -9481,7 +10140,10 @@ rclone test makefiles [flags] --max-name-length int Maximum size of file names (default 12) --min-file-size SizeSuffix Minimum size of file to create --min-name-length int Minimum size of file names (default 4) + --pattern Fill files with a periodic pattern --seed int Seed for the random number generator (0 for random) (default 1) + --sparse Make the files sparse (appear to be filled with ASCII 0x00) + --zero Fill files with ASCII 0x00 \f[R] .fi .PP @@ -9595,13 +10257,17 @@ $ rclone tree remote:path .fi .PP You can use any of the filtering options with the tree command (e.g. ---include and --exclude). -You can also use --fast-list. +\f[C]--include\f[R] and \f[C]--exclude\f[R]. +You can also use \f[C]--fast-list\f[R]. .PP The tree command has many options for controlling the listing which are -compatible with the tree command. +compatible with the tree command, for example you can include file sizes +with \f[C]--size\f[R]. Note that not all of them have short options as they conflict with rclone\[aq]s short options. +.PP +For a more interactive navigation of the remote see the +ncdu (https://rclone.org/commands/rclone_ncdu/) command. .IP .nf \f[C] @@ -10057,6 +10723,215 @@ rclone sync -i remote:current-backup remote:previous-backup rclone sync -i /path/to/files remote:current-backup \f[R] .fi +.SS Metadata support +.PP +Metadata is data about a file which isn\[aq]t the contents of the file. 
+Normally rclone only preserves the modification time and the content
+(MIME) type where possible.
+.PP
+Rclone supports preserving all the available metadata on files (not
+directories) when using the \f[C]--metadata\f[R] or \f[C]-M\f[R] flag.
+.PP
+Exactly what metadata is supported and what that support means depends
+on the backend.
+Backends that support metadata have a metadata section in their docs and
+are listed in the features table (https://rclone.org/overview/#features)
+(e.g. local (https://rclone.org/local/#metadata), s3).
+.PP
+Rclone only supports a one-time sync of metadata.
+This means that metadata will be synced from the source object to the
+destination object only when the source object has changed and needs to
+be re-uploaded.
+If the metadata subsequently changes on the source object without
+changing the object itself then it won\[aq]t be synced to the
+destination object.
+This is in line with the way rclone syncs \f[C]Content-Type\f[R] without
+the \f[C]--metadata\f[R] flag.
+.PP
+Using \f[C]--metadata\f[R] when syncing from local to local will
+preserve file attributes such as file mode, owner, extended attributes
+(not on Windows).
+.PP
+Note that arbitrary metadata may be added to objects using the
+\f[C]--metadata-set key=value\f[R] flag when the object is first
+uploaded.
+This flag can be repeated as many times as necessary.
+.SS Types of metadata
+.PP
+Metadata is divided into two types:
+system metadata and user metadata.
+.PP
+Metadata which the backend uses itself is called system metadata.
+For example on the local backend the system metadata \f[C]uid\f[R] will
+store the user ID of the file when used on a unix based platform.
+.PP
+Arbitrary metadata is called user metadata and this can be set however
+is desired.
+.PP
+When objects are copied from backend to backend, rclone will attempt to
+interpret system metadata if it is supplied.
+Metadata may change from being user metadata to system metadata as
+objects are copied between different backends.
+For example, copying an object from s3 sets the \f[C]content-type\f[R]
+metadata.
+In a backend which understands this (like \f[C]azureblob\f[R]) this will
+become the Content-Type of the object.
+In a backend which doesn\[aq]t understand this (like the \f[C]local\f[R]
+backend) this will become user metadata.
+However, should the local object be copied back to s3, the Content-Type
+will be set correctly.
+.SS Metadata framework
+.PP
+Rclone implements a metadata framework which can read metadata from an
+object and write it to the object when (and only when) it is being
+uploaded.
+.PP
+This metadata is stored as a dictionary with string keys and string
+values.
+.PP
+There are some limits on the names of the keys (these may be clarified
+further in the future).
+.IP \[bu] 2
+must be lower case
+.IP \[bu] 2
+may contain \f[C]a-z\f[R] \f[C]0-9\f[R], \f[C].\f[R], \f[C]-\f[R] or
+\f[C]_\f[R]
+.IP \[bu] 2
+length is backend dependent
+.PP
+Each backend can provide system metadata that it understands.
+Some backends can also store arbitrary user metadata.
+.PP
+Where possible the key names are standardized, so, for example, it is
+possible to copy object metadata from s3 to azureblob and the metadata
+will be translated appropriately.
+.PP
+Some backends have limits on the size of the metadata and rclone will
+give errors on upload if they are exceeded.
+.SS Metadata preservation
+.PP
+The goals of the implementation are to:
+.IP "1." 3
+Preserve metadata if at all possible
+.IP "2." 3
+Interpret metadata if at all possible
+.PP
+The consequence of 1 is that you can copy an S3 object to a local disk
+then back to S3 losslessly.
+Likewise you can copy a local file with file attributes and xattrs from
+local disk to s3 and back again losslessly.
+.PP +The consequence of 2 is that you can copy an S3 object with metadata to +Azureblob (say) and have the metadata appear on the Azureblob object +also. +.SS Standard system metadata +.PP +Here is a table of standard system metadata which, if appropriate, a +backend may implement. +.PP +.TS +tab(@); +lw(34.2n) lw(21.2n) lw(14.7n). +T{ +key +T}@T{ +description +T}@T{ +example +T} +_ +T{ +mode +T}@T{ +File type and mode: octal, unix style +T}@T{ +0100664 +T} +T{ +uid +T}@T{ +User ID of owner: decimal number +T}@T{ +500 +T} +T{ +gid +T}@T{ +Group ID of owner: decimal number +T}@T{ +500 +T} +T{ +rdev +T}@T{ +Device ID (if special file) => hexadecimal +T}@T{ +0 +T} +T{ +atime +T}@T{ +Time of last access: RFC 3339 +T}@T{ +2006-01-02T15:04:05.999999999Z07:00 +T} +T{ +mtime +T}@T{ +Time of last modification: RFC 3339 +T}@T{ +2006-01-02T15:04:05.999999999Z07:00 +T} +T{ +btime +T}@T{ +Time of file creation (birth): RFC 3339 +T}@T{ +2006-01-02T15:04:05.999999999Z07:00 +T} +T{ +cache-control +T}@T{ +Cache-Control header +T}@T{ +no-cache +T} +T{ +content-disposition +T}@T{ +Content-Disposition header +T}@T{ +inline +T} +T{ +content-encoding +T}@T{ +Content-Encoding header +T}@T{ +gzip +T} +T{ +content-language +T}@T{ +Content-Language header +T}@T{ +en-US +T} +T{ +content-type +T}@T{ +Content-Type header +T}@T{ +text/plain +T} +.TE +.PP +The metadata keys \f[C]mtime\f[R] and \f[C]content-type\f[R] will take +precedence if supplied in the metadata over reading the +\f[C]Content-Type\f[R] or modification time of the source object. +.PP +Hashes are not included in system metadata as there is a well defined +way of reading those already. .SS Options .PP Rclone has a number of options to control its behaviour. @@ -10325,13 +11200,27 @@ This means that all the info on the objects to transfer is held in memory before the transfers start. .SS --checkers=N .PP -The number of checkers to run in parallel. -Checkers do the equality checking of files during a sync. 
+Originally controlling just the number of file checkers to run in
+parallel, e.g.
+by \f[C]rclone copy\f[R].
+Now a fairly universal parallelism control used by \f[C]rclone\f[R] in
+several places.
+.PP
+Note: checkers do the equality checking of files during a sync.
For some storage systems (e.g.
S3, Swift, Dropbox) this can take a significant amount of time so they
are run in parallel.
.PP
The default is to run 8 checkers in parallel.
+However, with slow-reacting backends you may need to lower (rather
+than increase) this default by setting \f[C]--checkers\f[R] to 4 or
+fewer threads.
+This is especially advised if you are experiencing backend server
+crashes during the file checking phase (e.g.
+on subsequent or top-up backups where little or no file copying is done
+and checking takes up most of the time).
+Increase this setting only with utmost care, while monitoring your
+server health and file checking throughput.
.SS -c, --checksum
.PP
Normally rclone will look at modification time and size of files to see
@@ -10509,6 +11398,9 @@ Mode to run dedupe command in.
One of \f[C]interactive\f[R], \f[C]skip\f[R], \f[C]first\f[R],
\f[C]newest\f[R], \f[C]oldest\f[R], \f[C]rename\f[R].
The default is \f[C]interactive\f[R].
+.PD 0
+.P
+.PD
See the dedupe command for more information as to what these options
mean.
.SS --disable FEATURE,FEATURE,...
@@ -10691,25 +11583,28 @@ For counts the SI standard notation is used,
e.g.
prefix \f[C]k\f[R] for kilo.
Used with file counts, \f[C]1k\f[R] means 1000 files.
.PP
-The various list commands output raw numbers by default.
+The various list (https://rclone.org/commands/rclone_ls/) commands
+output raw numbers by default.
Option \f[C]--human-readable\f[R] will make them output values in
human-readable format instead (with the short unit prefix).
.PP
-The about command outputs human-readable by default, with a
-command-specific option \f[C]--full\f[R] to output the raw numbers
-instead.
+The about (https://rclone.org/commands/rclone_about/) command outputs
+human-readable by default, with a command-specific option
+\f[C]--full\f[R] to output the raw numbers instead.
.PP
-Command size outputs both human-readable and raw numbers in the same
-output.
+Command size (https://rclone.org/commands/rclone_size/) outputs both
+human-readable and raw numbers in the same output.
.PP
-The tree command also considers \f[C]--human-readable\f[R], but it will
-not use the exact same notation as the other commands: It rounds to one
-decimal, and uses single letter suffix, e.g.
+The tree (https://rclone.org/commands/rclone_tree/) command also
+considers \f[C]--human-readable\f[R], but it will not use the exact same
+notation as the other commands: It rounds to one decimal, and uses
+single letter suffix, e.g.
\f[C]K\f[R] instead of \f[C]Ki\f[R].
The reason for this is that it relies on an external library.
.PP
-The interactive command ncdu shows human-readable by default, and
-responds to key \f[C]u\f[R] for toggling human-readable format.
+The interactive command ncdu (https://rclone.org/commands/rclone_ncdu/)
+shows human-readable by default, and responds to key \f[C]u\f[R] for
+toggling human-readable format.
.SS --ignore-case-sync
.PP
Using this option will cause rclone to ignore the case of the files when
@@ -10962,6 +11857,17 @@ Defaults to off.
When the limit is reached all transfers will stop immediately.
.PP
Rclone will exit with exit code 8 if the transfer limit is reached.
+.SS --metadata / -M
+.PP
+Setting this flag enables rclone to copy the metadata from the source to
+the destination.
+For local backends this is ownership, permissions, xattrs etc.
+See the metadata section for more info.
+.SS --metadata-set key=value
+.PP
+Add metadata \f[C]key\f[R] = \f[C]value\f[R] when uploading.
+This can be repeated as many times as required.
+See the metadata section for more info.
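The key-name rules listed under "Metadata framework" (lower case; only `a-z`, `0-9`, `.`, `-`, `_`; backend-dependent length) can be sketched as a small validator. This is a hypothetical helper to make the rules concrete, not part of rclone, and the length limit is a placeholder since the real limit varies by backend:

```python
import re

# Keys must be lower case and may contain a-z, 0-9, ".", "-" or "_".
_KEY_RE = re.compile(r"^[a-z0-9._-]+$")

def valid_metadata_key(key, max_len=1024):
    # max_len is backend dependent; 1024 is an arbitrary example value.
    return len(key) <= max_len and bool(_KEY_RE.match(key))

assert valid_metadata_key("content-type")
assert valid_metadata_key("uid")
assert not valid_metadata_key("Content-Type")  # upper case is not allowed
```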
.SS --cutoff-mode=hard|soft|cautious .PP This modifies the behavior of \f[C]--max-transfer\f[R] Defaults to @@ -11634,6 +12540,9 @@ is giving a lot of timeouts or bigger if you have lots of bandwidth and a fast remote. .PP The default is to run 4 file transfers in parallel. +.PP +Look at --multi-thread-streams if you would like to control single file +transfers. .SS -u, --update .PP This forces rclone to skip any files which exist on the destination and @@ -11709,6 +12618,10 @@ transferred and a small number of significant events. With \f[C]-vv\f[R] rclone will become very verbose telling you about every file it considers and transfers. Please send bug reports with a log with this setting. +.PP +When setting verbosity as an environment variable, use +\f[C]RCLONE_VERBOSE=1\f[R] or \f[C]RCLONE_VERBOSE=2\f[R] for +\f[C]-v\f[R] and \f[C]-vv\f[R] respectively. .SS -V, --version .PP Prints the version number @@ -11985,6 +12898,8 @@ For the filtering options .IP \[bu] 2 \f[C]--exclude-from\f[R] .IP \[bu] 2 +\f[C]--exclude-if-present\f[R] +.IP \[bu] 2 \f[C]--include\f[R] .IP \[bu] 2 \f[C]--include-from\f[R] @@ -12113,6 +13028,10 @@ variable setting. Or to always use the trash in drive \f[C]--drive-use-trash\f[R], set \f[C]RCLONE_DRIVE_USE_TRASH=true\f[R]. .PP +Verbosity is slightly different, the environment variable equivalent of +\f[C]--verbose\f[R] or \f[C]-v\f[R] is \f[C]RCLONE_VERBOSE=1\f[R], or +for \f[C]-vv\f[R], \f[C]RCLONE_VERBOSE=2\f[R]. +.PP The same parser is used for the options and the environment variables so they take exactly the same form. .PP @@ -12354,6 +13273,36 @@ Configuration file is stored at: Now transfer it to the remote box (scp, cut paste, ftp, sftp, etc.) and place it in the correct place (use \f[C]rclone config file\f[R] on the remote box to find out where). 
+.SS Configuring using SSH Tunnel
+.PP
+Linux and macOS users can use an SSH tunnel to redirect port 53682 on
+the headless box to the local machine by using the following command:
+.IP
+.nf
+\f[C]
+ssh -L localhost:53682:localhost:53682 username\[at]remote_server
+\f[R]
+.fi
+.PP
+Then on the headless box run \f[C]rclone config\f[R] and answer
+\f[C]Y\f[R] to the \f[C]Use auto config?\f[R] question.
+.IP
+.nf
+\f[C]
+\&...
+Remote config
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+y) Yes (default)
+n) No
+y/n> y
+\f[R]
+.fi
+.PP
+Then copy and paste the auth URL
+\f[C]http://127.0.0.1:53682/auth?state=xxxxxxxxxxxx\f[R] to the browser
+on your local machine, complete the auth and it is done.
.SH Filtering, includes and excludes
.PP
Filter flags determine which files rclone \f[C]sync\f[R],
@@ -12528,6 +13477,10 @@ reference (https://golang.org/pkg/regexp/syntax/).
Regular expressions should be enclosed in \f[C]{{\f[R] \f[C]}}\f[R].
They will match only the last path segment if the glob doesn\[aq]t start
with \f[C]/\f[R] or the whole path name if it does.
+Note that rclone does not attempt to parse the supplied regular
+expression, meaning that using any regular expression filter will
+prevent rclone from using directory filter rules, as it will instead
+check every path against the supplied regular expression(s).
.PP
Here is how the \f[C]{{regexp}}\f[R] is transformed into a full regular
expression to match the entire path:
@@ -12842,10 +13795,14 @@ unnecessary directories.
Whether optimisation is desirable depends on the specific filter rules
and source remote content.
.PP
+If any regular expression filters are in use, then no directory
+recursion optimisation is possible, as rclone must check every path
+against the supplied regular expression(s).
+.PP
Directory recursion optimisation occurs if either:
.IP \[bu] 2
A source remote does not support the rclone \f[C]ListR\f[R] primitive.
-local, sftp, Microsoft OneDrive and WebDav do not support
+local, sftp, Microsoft OneDrive and WebDAV do not support
\f[C]ListR\f[R].
Google Drive and most bucket type storage do.
Full list (https://rclone.org/overview/#optional-features)
@@ -13475,6 +14432,8 @@ Useful for debugging.
The \f[C]--exclude-if-present\f[R] flag controls whether a directory is
within the scope of an rclone command based on the presence of a named
file within it.
+The flag can be repeated to check for multiple file names; the presence
+of any of them will exclude the directory.
.PP
This flag has priority over other filter flags.
.PP
@@ -13492,9 +14451,6 @@ dir1/dir2/dir3/.ignore
.PP
The command \f[C]rclone ls --exclude-if-present .ignore dir1\f[R] does
not list \f[C]dir3\f[R], \f[C]file3\f[R] or \f[C].ignore\f[R].
-.PP
-\f[C]--exclude-if-present\f[R] can only be used once in an rclone
-command.
.SS Common pitfalls
.PP
The most frequent filter support issues on the rclone
@@ -13644,11 +14600,11 @@ forum (https://forum.rclone.org/).
If rclone is run with the \f[C]--rc\f[R] flag then it starts an HTTP
server which can be used to remote control rclone using its API.
.PP
-You can either use the rclone rc command to access the API or use HTTP
+You can either use the rc command to access the API or use HTTP
directly.
.PP
-If you just want to run a remote control then see the rcd
-command (https://rclone.org/commands/rclone_rcd/).
+If you just want to run a remote control then see the
+rcd (https://rclone.org/commands/rclone_rcd/) command.
.SS Supported parameters
.SS --rc
.PP
@@ -13772,6 +14728,14 @@ The alternative is to use \f[C]--rc-user\f[R] and \f[C]--rc-pass\f[R]
and use these credentials in the request.
.PP
Default Off.
+.SS --rc-baseurl
+.PP
+Prefix for URLs.
+.PP
+Default is root
+.SS --rc-template
+.PP
+User-specified template.
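To make the HTTP side of the remote control concrete, the following Python sketch builds (but does not send) a POST request of the kind the rc server accepts: the path is the command name and the parameters travel as a JSON body, with `--rc-user`/`--rc-pass` supplied as ordinary HTTP basic auth. The host, port, and credentials here are assumptions for the example (5572 is the default rc port):

```python
import base64
import json
import urllib.request

def build_rc_request(command, params, user="rc-user", password="rc-pass"):
    # rc calls are POST requests whose URL path is the command name,
    # e.g. "core/version" or "operations/list", with JSON parameters.
    url = f"http://127.0.0.1:5572/{command}"
    body = json.dumps(params).encode("utf-8")
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    # --rc-user / --rc-pass map to standard HTTP basic authentication.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

req = build_rc_request("operations/list", {"fs": "remote:", "remote": "dir"})
# urllib.request.urlopen(req) would send this to a running rc server.
```

The same call can of course be made with `rclone rc operations/list fs=remote: remote=dir` as described in the next section.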
.SS Accessing the remote control via the rclone rc command .PP Rclone itself implements the remote control protocol in its @@ -14258,8 +15222,8 @@ state - state to restart with - used with continue result - result to restart with - used with continue .RE .PP -See the config create -command (https://rclone.org/commands/rclone_config_create/) command for +See the config +create (https://rclone.org/commands/rclone_config_create/) command for more information on the above. .PP \f[B]Authentication is required for this call.\f[R] @@ -14269,8 +15233,8 @@ Parameters: .IP \[bu] 2 name - name of remote to delete .PP -See the config delete -command (https://rclone.org/commands/rclone_config_delete/) command for +See the config +delete (https://rclone.org/commands/rclone_config_delete/) command for more information on the above. .PP \f[B]Authentication is required for this call.\f[R] @@ -14280,9 +15244,8 @@ Returns a JSON object: - key: value .PP Where keys are remote names and values are the config parameters. .PP -See the config dump -command (https://rclone.org/commands/rclone_config_dump/) command for -more information on the above. +See the config dump (https://rclone.org/commands/rclone_config_dump/) +command for more information on the above. .PP \f[B]Authentication is required for this call.\f[R] .SS config/get: Get a remote in the config file. @@ -14291,18 +15254,16 @@ Parameters: .IP \[bu] 2 name - name of remote to get .PP -See the config dump -command (https://rclone.org/commands/rclone_config_dump/) command for -more information on the above. +See the config dump (https://rclone.org/commands/rclone_config_dump/) +command for more information on the above. .PP \f[B]Authentication is required for this call.\f[R] .SS config/listremotes: Lists the remotes in the config file. .PP Returns - remotes - array of remote names .PP -See the listremotes -command (https://rclone.org/commands/rclone_listremotes/) command for -more information on the above. 
+See the listremotes (https://rclone.org/commands/rclone_listremotes/) +command for more information on the above. .PP \f[B]Authentication is required for this call.\f[R] .SS config/password: password the config for a remote. @@ -14313,8 +15274,8 @@ name - name of remote .IP \[bu] 2 parameters - a map of { \[dq]key\[dq]: \[dq]value\[dq] } pairs .PP -See the config password -command (https://rclone.org/commands/rclone_config_password/) command +See the config +password (https://rclone.org/commands/rclone_config_password/) command for more information on the above. .PP \f[B]Authentication is required for this call.\f[R] @@ -14322,8 +15283,8 @@ for more information on the above. .PP Returns a JSON object: - providers - array of objects .PP -See the config providers -command (https://rclone.org/commands/rclone_config_providers/) command +See the config +providers (https://rclone.org/commands/rclone_config_providers/) command for more information on the above. .PP \f[B]Authentication is required for this call.\f[R] @@ -14354,8 +15315,8 @@ state - state to restart with - used with continue result - result to restart with - used with continue .RE .PP -See the config update -command (https://rclone.org/commands/rclone_config_update/) command for +See the config +update (https://rclone.org/commands/rclone_config_update/) command for more information on the above. .PP \f[B]Authentication is required for this call.\f[R] @@ -14895,8 +15856,8 @@ fs - a remote name string e.g. .PP The result is as returned from rclone about --json .PP -See the about command (https://rclone.org/commands/rclone_size/) command -for more information on the above. +See the about (https://rclone.org/commands/rclone_about/) command for +more information on the above. .PP \f[B]Authentication is required for this call.\f[R] .SS operations/cleanup: Remove trashed files in the remote or path @@ -14906,8 +15867,8 @@ This takes the following parameters: fs - a remote name string e.g. 
\[dq]drive:\[dq] .PP -See the cleanup command (https://rclone.org/commands/rclone_cleanup/) -command for more information on the above. +See the cleanup (https://rclone.org/commands/rclone_cleanup/) command +for more information on the above. .PP \f[B]Authentication is required for this call.\f[R] .SS operations/copyfile: Copy a file from source remote to destination remote @@ -14940,9 +15901,10 @@ remote - a path within that remote e.g. url - string, URL to read from .IP \[bu] 2 autoFilename - boolean, set to true to retrieve destination file name -from url See the copyurl -command (https://rclone.org/commands/rclone_copyurl/) command for more -information on the above. +from url +.PP +See the copyurl (https://rclone.org/commands/rclone_copyurl/) command +for more information on the above. .PP \f[B]Authentication is required for this call.\f[R] .SS operations/delete: Remove files in the path @@ -14952,8 +15914,8 @@ This takes the following parameters: fs - a remote name string e.g. \[dq]drive:\[dq] .PP -See the delete command (https://rclone.org/commands/rclone_delete/) -command for more information on the above. +See the delete (https://rclone.org/commands/rclone_delete/) command for +more information on the above. .PP \f[B]Authentication is required for this call.\f[R] .SS operations/deletefile: Remove the single file pointed to @@ -14966,9 +15928,8 @@ fs - a remote name string e.g. remote - a path within that remote e.g. \[dq]dir\[dq] .PP -See the deletefile -command (https://rclone.org/commands/rclone_deletefile/) command for -more information on the above. +See the deletefile (https://rclone.org/commands/rclone_deletefile/) +command for more information on the above. 
.PP \f[B]Authentication is required for this call.\f[R] .SS operations/fsinfo: Return information about the remote @@ -14983,46 +15944,103 @@ This returns info about the remote passed in; .nf \f[C] { - // optional features and whether they are available or not - \[dq]Features\[dq]: { - \[dq]About\[dq]: true, - \[dq]BucketBased\[dq]: false, - \[dq]CanHaveEmptyDirectories\[dq]: true, - \[dq]CaseInsensitive\[dq]: false, - \[dq]ChangeNotify\[dq]: false, - \[dq]CleanUp\[dq]: false, - \[dq]Copy\[dq]: false, - \[dq]DirCacheFlush\[dq]: false, - \[dq]DirMove\[dq]: true, - \[dq]DuplicateFiles\[dq]: false, - \[dq]GetTier\[dq]: false, - \[dq]ListR\[dq]: false, - \[dq]MergeDirs\[dq]: false, - \[dq]Move\[dq]: true, - \[dq]OpenWriterAt\[dq]: true, - \[dq]PublicLink\[dq]: false, - \[dq]Purge\[dq]: true, - \[dq]PutStream\[dq]: true, - \[dq]PutUnchecked\[dq]: false, - \[dq]ReadMimeType\[dq]: false, - \[dq]ServerSideAcrossConfigs\[dq]: false, - \[dq]SetTier\[dq]: false, - \[dq]SetWrapper\[dq]: false, - \[dq]UnWrap\[dq]: false, - \[dq]WrapFs\[dq]: false, - \[dq]WriteMimeType\[dq]: false - }, - // Names of hashes available - \[dq]Hashes\[dq]: [ - \[dq]MD5\[dq], - \[dq]SHA-1\[dq], - \[dq]DropboxHash\[dq], - \[dq]QuickXorHash\[dq] - ], - \[dq]Name\[dq]: \[dq]local\[dq], // Name as created - \[dq]Precision\[dq]: 1, // Precision of timestamps in ns - \[dq]Root\[dq]: \[dq]/\[dq], // Path as created - \[dq]String\[dq]: \[dq]Local file system at /\[dq] // how the remote will appear in logs + // optional features and whether they are available or not + \[dq]Features\[dq]: { + \[dq]About\[dq]: true, + \[dq]BucketBased\[dq]: false, + \[dq]BucketBasedRootOK\[dq]: false, + \[dq]CanHaveEmptyDirectories\[dq]: true, + \[dq]CaseInsensitive\[dq]: false, + \[dq]ChangeNotify\[dq]: false, + \[dq]CleanUp\[dq]: false, + \[dq]Command\[dq]: true, + \[dq]Copy\[dq]: false, + \[dq]DirCacheFlush\[dq]: false, + \[dq]DirMove\[dq]: true, + \[dq]Disconnect\[dq]: false, + \[dq]DuplicateFiles\[dq]: false, + 
\[dq]GetTier\[dq]: false, + \[dq]IsLocal\[dq]: true, + \[dq]ListR\[dq]: false, + \[dq]MergeDirs\[dq]: false, + \[dq]MetadataInfo\[dq]: true, + \[dq]Move\[dq]: true, + \[dq]OpenWriterAt\[dq]: true, + \[dq]PublicLink\[dq]: false, + \[dq]Purge\[dq]: true, + \[dq]PutStream\[dq]: true, + \[dq]PutUnchecked\[dq]: false, + \[dq]ReadMetadata\[dq]: true, + \[dq]ReadMimeType\[dq]: false, + \[dq]ServerSideAcrossConfigs\[dq]: false, + \[dq]SetTier\[dq]: false, + \[dq]SetWrapper\[dq]: false, + \[dq]Shutdown\[dq]: false, + \[dq]SlowHash\[dq]: true, + \[dq]SlowModTime\[dq]: false, + \[dq]UnWrap\[dq]: false, + \[dq]UserInfo\[dq]: false, + \[dq]UserMetadata\[dq]: true, + \[dq]WrapFs\[dq]: false, + \[dq]WriteMetadata\[dq]: true, + \[dq]WriteMimeType\[dq]: false + }, + // Names of hashes available + \[dq]Hashes\[dq]: [ + \[dq]md5\[dq], + \[dq]sha1\[dq], + \[dq]whirlpool\[dq], + \[dq]crc32\[dq], + \[dq]sha256\[dq], + \[dq]dropbox\[dq], + \[dq]mailru\[dq], + \[dq]quickxor\[dq] + ], + \[dq]Name\[dq]: \[dq]local\[dq], // Name as created + \[dq]Precision\[dq]: 1, // Precision of timestamps in ns + \[dq]Root\[dq]: \[dq]/\[dq], // Path as created + \[dq]String\[dq]: \[dq]Local file system at /\[dq], // how the remote will appear in logs + // Information about the system metadata for this backend + \[dq]MetadataInfo\[dq]: { + \[dq]System\[dq]: { + \[dq]atime\[dq]: { + \[dq]Help\[dq]: \[dq]Time of last access\[dq], + \[dq]Type\[dq]: \[dq]RFC 3339\[dq], + \[dq]Example\[dq]: \[dq]2006-01-02T15:04:05.999999999Z07:00\[dq] + }, + \[dq]btime\[dq]: { + \[dq]Help\[dq]: \[dq]Time of file birth (creation)\[dq], + \[dq]Type\[dq]: \[dq]RFC 3339\[dq], + \[dq]Example\[dq]: \[dq]2006-01-02T15:04:05.999999999Z07:00\[dq] + }, + \[dq]gid\[dq]: { + \[dq]Help\[dq]: \[dq]Group ID of owner\[dq], + \[dq]Type\[dq]: \[dq]decimal number\[dq], + \[dq]Example\[dq]: \[dq]500\[dq] + }, + \[dq]mode\[dq]: { + \[dq]Help\[dq]: \[dq]File type and mode\[dq], + \[dq]Type\[dq]: \[dq]octal, unix style\[dq], + \[dq]Example\[dq]: 
\[dq]0100664\[dq] + }, + \[dq]mtime\[dq]: { + \[dq]Help\[dq]: \[dq]Time of last modification\[dq], + \[dq]Type\[dq]: \[dq]RFC 3339\[dq], + \[dq]Example\[dq]: \[dq]2006-01-02T15:04:05.999999999Z07:00\[dq] + }, + \[dq]rdev\[dq]: { + \[dq]Help\[dq]: \[dq]Device ID (if special file)\[dq], + \[dq]Type\[dq]: \[dq]hexadecimal\[dq], + \[dq]Example\[dq]: \[dq]1abc\[dq] + }, + \[dq]uid\[dq]: { + \[dq]Help\[dq]: \[dq]User ID of owner\[dq], + \[dq]Type\[dq]: \[dq]decimal number\[dq], + \[dq]Example\[dq]: \[dq]500\[dq] + } + }, + \[dq]Help\[dq]: \[dq]Textual help string\[rs]n\[dq] + } } \f[R] .fi @@ -15064,6 +16082,8 @@ dirsOnly - If set only show directories .IP \[bu] 2 filesOnly - If set only show files .IP \[bu] 2 +metadata - If set return metadata of objects also +.IP \[bu] 2 hashTypes - array of strings of hash types to show if showHash set .RE .PP @@ -15075,7 +16095,7 @@ list This is an array of objects as described in the lsjson command .RE .PP -See the lsjson command (https://rclone.org/commands/rclone_lsjson/) for +See the lsjson (https://rclone.org/commands/rclone_lsjson/) command for more information on the above and examples. .PP \f[B]Authentication is required for this call.\f[R] @@ -15089,8 +16109,8 @@ fs - a remote name string e.g. remote - a path within that remote e.g. \[dq]dir\[dq] .PP -See the mkdir command (https://rclone.org/commands/rclone_mkdir/) -command for more information on the above. +See the mkdir (https://rclone.org/commands/rclone_mkdir/) command for +more information on the above. .PP \f[B]Authentication is required for this call.\f[R] .SS operations/movefile: Move a file from source remote to destination remote @@ -15130,8 +16150,8 @@ Returns: .IP \[bu] 2 url - URL of the resource .PP -See the link command (https://rclone.org/commands/rclone_link/) command -for more information on the above. +See the link (https://rclone.org/commands/rclone_link/) command for more +information on the above. 
.PP \f[B]Authentication is required for this call.\f[R] .SS operations/purge: Remove a directory or container and all of its contents @@ -15144,8 +16164,8 @@ fs - a remote name string e.g. remote - a path within that remote e.g. \[dq]dir\[dq] .PP -See the purge command (https://rclone.org/commands/rclone_purge/) -command for more information on the above. +See the purge (https://rclone.org/commands/rclone_purge/) command for +more information on the above. .PP \f[B]Authentication is required for this call.\f[R] .SS operations/rmdir: Remove an empty directory or container @@ -15158,8 +16178,8 @@ fs - a remote name string e.g. remote - a path within that remote e.g. \[dq]dir\[dq] .PP -See the rmdir command (https://rclone.org/commands/rclone_rmdir/) -command for more information on the above. +See the rmdir (https://rclone.org/commands/rclone_rmdir/) command for +more information on the above. .PP \f[B]Authentication is required for this call.\f[R] .SS operations/rmdirs: Remove all the empty directories in the path @@ -15172,9 +16192,10 @@ fs - a remote name string e.g. remote - a path within that remote e.g. \[dq]dir\[dq] .IP \[bu] 2 -leaveRoot - boolean, set to true not to delete the root See the rmdirs -command (https://rclone.org/commands/rclone_rmdirs/) command for more -information on the above. +leaveRoot - boolean, set to true not to delete the root +.PP +See the rmdirs (https://rclone.org/commands/rclone_rmdirs/) command for +more information on the above. .PP \f[B]Authentication is required for this call.\f[R] .SS operations/size: Count the number of bytes and files in remote @@ -15190,8 +16211,8 @@ count - number of files .IP \[bu] 2 bytes - number of bytes in those files .PP -See the size command (https://rclone.org/commands/rclone_size/) command -for more information on the above. +See the size (https://rclone.org/commands/rclone_size/) command for more +information on the above. 
.PP \f[B]Authentication is required for this call.\f[R] .SS operations/stat: Give information about the supplied file or directory @@ -15216,7 +16237,7 @@ Will be null if not found. Note that if you are only interested in files then it is much more efficient to set the filesOnly flag in the options. .PP -See the lsjson command (https://rclone.org/commands/rclone_lsjson/) for +See the lsjson (https://rclone.org/commands/rclone_lsjson/) command for more information on the above and examples. .PP \f[B]Authentication is required for this call.\f[R] @@ -15230,9 +16251,10 @@ fs - a remote name string e.g. remote - a path within that remote e.g. \[dq]dir\[dq] .IP \[bu] 2 -each part in body represents a file to be uploaded See the uploadfile -command (https://rclone.org/commands/rclone_uploadfile/) command for -more information on the above. +each part in body represents a file to be uploaded +.PP +See the uploadfile (https://rclone.org/commands/rclone_uploadfile/) +command for more information on the above. .PP \f[B]Authentication is required for this call.\f[R] .SS options/blocks: List all the option blocks @@ -15487,8 +16509,8 @@ dstFs - a remote name string e.g. .IP \[bu] 2 createEmptySrcDirs - create empty src directories on destination if set .PP -See the copy command (https://rclone.org/commands/rclone_copy/) command -for more information on the above. +See the copy (https://rclone.org/commands/rclone_copy/) command for more +information on the above. .PP \f[B]Authentication is required for this call.\f[R] .SS sync/move: move a directory from source remote to destination remote @@ -15505,8 +16527,8 @@ createEmptySrcDirs - create empty src directories on destination if set .IP \[bu] 2 deleteEmptySrcDirs - delete empty src directories if set .PP -See the move command (https://rclone.org/commands/rclone_move/) command -for more information on the above. +See the move (https://rclone.org/commands/rclone_move/) command for more +information on the above. 
.PP \f[B]Authentication is required for this call.\f[R] .SS sync/sync: sync a directory from source remote to destination remote @@ -15521,8 +16543,8 @@ dstFs - a remote name string e.g. .IP \[bu] 2 createEmptySrcDirs - create empty src directories on destination if set .PP -See the sync command (https://rclone.org/commands/rclone_sync/) command -for more information on the above. +See the sync (https://rclone.org/commands/rclone_sync/) command for more +information on the above. .PP \f[B]Authentication is required for this call.\f[R] .SS vfs/forget: Forget files or directories in the directory cache. @@ -15943,7 +16965,7 @@ Here is an overview of the major features of each cloud storage system. .PP .TS tab(@); -l c c c c c. +l c c c c c c. T{ Name T}@T{ @@ -15956,6 +16978,8 @@ T}@T{ Duplicate Files T}@T{ MIME Type +T}@T{ +Metadata T} _ T{ @@ -15963,388 +16987,478 @@ T{ T}@T{ Whirlpool T}@T{ -No +- T}@T{ No T}@T{ Yes T}@T{ R +T}@T{ +- T} T{ Akamai Netstorage T}@T{ MD5, SHA256 T}@T{ -Yes +R/W T}@T{ No T}@T{ No T}@T{ R +T}@T{ +- T} T{ Amazon Drive T}@T{ MD5 T}@T{ -No +- T}@T{ Yes T}@T{ No T}@T{ R +T}@T{ +- T} T{ Amazon S3 (or S3 compatible) T}@T{ MD5 T}@T{ -Yes +R/W T}@T{ No T}@T{ No T}@T{ R/W +T}@T{ +RWU T} T{ Backblaze B2 T}@T{ SHA1 T}@T{ -Yes +R/W T}@T{ No T}@T{ No T}@T{ R/W +T}@T{ +- T} T{ Box T}@T{ SHA1 T}@T{ -Yes +R/W T}@T{ Yes T}@T{ No T}@T{ - +T}@T{ +- T} T{ Citrix ShareFile T}@T{ MD5 T}@T{ -Yes +R/W T}@T{ Yes T}@T{ No T}@T{ - +T}@T{ +- T} T{ Dropbox T}@T{ DBHASH \[S1] T}@T{ -Yes +R T}@T{ Yes T}@T{ No T}@T{ - +T}@T{ +- T} T{ Enterprise File Fabric T}@T{ - T}@T{ -Yes +R/W T}@T{ Yes T}@T{ No T}@T{ R/W +T}@T{ +- T} T{ FTP T}@T{ - T}@T{ -No +R/W \[S1]\[u2070] T}@T{ No T}@T{ No T}@T{ - +T}@T{ +- T} T{ Google Cloud Storage T}@T{ MD5 T}@T{ -Yes +R/W T}@T{ No T}@T{ No T}@T{ R/W +T}@T{ +- T} T{ Google Drive T}@T{ MD5 T}@T{ -Yes +R/W T}@T{ No T}@T{ Yes T}@T{ R/W +T}@T{ +- T} T{ Google Photos T}@T{ - T}@T{ -No +- T}@T{ No T}@T{ Yes T}@T{ R +T}@T{ +- T} T{ HDFS T}@T{ - 
T}@T{ -Yes +R/W T}@T{ No T}@T{ No T}@T{ - +T}@T{ +- +T} +T{ +HiDrive +T}@T{ +HiDrive \[S1]\[S2] +T}@T{ +R/W +T}@T{ +No +T}@T{ +No +T}@T{ +- +T}@T{ +- T} T{ HTTP T}@T{ - T}@T{ -No +R T}@T{ No T}@T{ No T}@T{ R +T}@T{ +- T} T{ Hubic T}@T{ MD5 T}@T{ -Yes +R/W T}@T{ No T}@T{ No T}@T{ R/W +T}@T{ +- +T} +T{ +Internet Archive +T}@T{ +MD5, SHA1, CRC32 +T}@T{ +R/W \[S1]\[S1] +T}@T{ +No +T}@T{ +No +T}@T{ +- +T}@T{ +RWU T} T{ Jottacloud T}@T{ MD5 T}@T{ -Yes +R/W T}@T{ Yes T}@T{ No T}@T{ R +T}@T{ +- T} T{ Koofr T}@T{ MD5 T}@T{ -No +- T}@T{ Yes T}@T{ No T}@T{ - +T}@T{ +- T} T{ Mail.ru Cloud T}@T{ Mailru \[u2076] T}@T{ -Yes +R/W T}@T{ Yes T}@T{ No T}@T{ - +T}@T{ +- T} T{ Mega T}@T{ - T}@T{ -No +- T}@T{ No T}@T{ Yes T}@T{ - +T}@T{ +- T} T{ Memory T}@T{ MD5 T}@T{ -Yes +R/W T}@T{ No T}@T{ No T}@T{ - +T}@T{ +- T} T{ Microsoft Azure Blob Storage T}@T{ MD5 T}@T{ -Yes +R/W T}@T{ No T}@T{ No T}@T{ R/W +T}@T{ +- T} T{ Microsoft OneDrive T}@T{ SHA1 \[u2075] T}@T{ -Yes +R/W T}@T{ Yes T}@T{ No T}@T{ R +T}@T{ +- T} T{ OpenDrive T}@T{ MD5 T}@T{ -Yes +R/W T}@T{ Yes T}@T{ Partial \[u2078] T}@T{ - +T}@T{ +- T} T{ OpenStack Swift T}@T{ MD5 T}@T{ -Yes +R/W T}@T{ No T}@T{ No T}@T{ R/W +T}@T{ +- T} T{ pCloud T}@T{ MD5, SHA1 \[u2077] T}@T{ -Yes +R T}@T{ No T}@T{ No T}@T{ W +T}@T{ +- T} T{ premiumize.me T}@T{ - T}@T{ -No +- T}@T{ Yes T}@T{ No T}@T{ R +T}@T{ +- T} T{ put.io T}@T{ CRC-32 T}@T{ -Yes +R/W T}@T{ No T}@T{ Yes T}@T{ R +T}@T{ +- T} T{ QingStor T}@T{ MD5 T}@T{ -No +- \[u2079] T}@T{ No T}@T{ No T}@T{ R/W +T}@T{ +- T} T{ Seafile T}@T{ - T}@T{ +- +T}@T{ No T}@T{ No T}@T{ -No +- T}@T{ - T} @@ -16353,102 +17467,118 @@ SFTP T}@T{ MD5, SHA1 \[S2] T}@T{ -Yes +R/W T}@T{ Depends T}@T{ No T}@T{ - +T}@T{ +- T} T{ Sia T}@T{ - T}@T{ -No +- T}@T{ No T}@T{ No T}@T{ - +T}@T{ +- T} T{ SugarSync T}@T{ - T}@T{ -No +- T}@T{ No T}@T{ No T}@T{ - +T}@T{ +- T} T{ Storj T}@T{ - T}@T{ -Yes +R T}@T{ No T}@T{ No T}@T{ - +T}@T{ +- T} T{ Uptobox T}@T{ - T}@T{ -No +- T}@T{ No T}@T{ Yes T}@T{ - +T}@T{ +- T} T{ WebDAV T}@T{ 
MD5, SHA1 \[S3]
T}@T{
-Yes \[u2074]
+R \[u2074]
T}@T{
Depends
T}@T{
No
T}@T{
-
+T}@T{
+-
T}
T{
Yandex Disk
T}@T{
MD5
T}@T{
-Yes
+R/W
T}@T{
No
T}@T{
No
T}@T{
R
+T}@T{
+-
T}
T{
Zoho WorkDrive
T}@T{
-
T}@T{
+-
+T}@T{
No
T}@T{
No
T}@T{
-No
+-
T}@T{
-
T}
@@ -16457,13 +17587,15 @@ The local filesystem
T}@T{
All
T}@T{
-Yes
+R/W
T}@T{
Depends
T}@T{
No
T}@T{
-
+T}@T{
+RWU
T}
.TE
.SS Notes
@@ -16494,6 +17626,21 @@ their web client interface or other stock clients, but the underlying
storage platform has been determined to allow duplicate files, and it is
possible to create them with \f[C]rclone\f[R].
It may be that this is a mistake or an unsupported feature.
+.PP
+\[u2079] QingStor does not support SetModTime for objects bigger than 5
+GiB.
+.PP
+\[S1]\[u2070] FTP supports modtimes for the major FTP servers, and also
+others if they advertise the required protocol extensions.
+See this (https://rclone.org/ftp/#modified-time) for more details.
+.PP
+\[S1]\[S1] Internet Archive requires option \f[C]wait_archive\f[R] to be
+set to a non-zero value for full modtime support.
+.PP
+\[S1]\[S2] HiDrive supports its own custom
+hash (https://static.hidrive.com/dev/0001).
+It combines SHA1 sums for each 4 KiB block hierarchically to a single
+top-level sum.
.SS Hash
.PP
The cloud storage system supports various hash types of the objects.
@@ -16505,14 +17652,41 @@ To use the verify checksums when transferring between cloud storage
systems they must support a common hash type.
.SS ModTime
.PP
-The cloud storage system supports setting modification times on objects.
-If it does then this enables a using the modification times as part of
-the sync.
-If not then only the size will be checked by default, though the MD5SUM
-can be checked with the \f[C]--checksum\f[R] flag.
+Almost all cloud storage systems store some sort of timestamp on
+objects, but for several of them it is not something that is appropriate
+to use for syncing.
E.g.
+some backends will only write a timestamp that represents the time of
+the upload.
+To be relevant for syncing it should be able to store the modification
+time of the source object.
+If this is not the case, rclone will only check the file size by
+default, though it can be configured to check the file hash (with the
+\f[C]--checksum\f[R] flag).
+Ideally it should also be possible to change the timestamp of an
+existing file without having to re-upload it.
.PP
-All cloud storage systems support some kind of date on the object and
-these will be set when transferring from the cloud storage system.
+For storage systems with a \f[C]-\f[R] in the ModTime column, the
+modification time read on objects is not the modification time of the
+file when uploaded.
+It is most likely the time the file was uploaded, or possibly something
+else (like the time the picture was taken in Google Photos).
+.PP
+Storage systems with an \f[C]R\f[R] (for read-only) in the ModTime
+column keep modification times on objects, and update them when
+uploading objects, but they do not support changing only the
+modification time (the \f[C]SetModTime\f[R] operation) without
+re-uploading, possibly not even without deleting the existing file
+first.
+Some operations in rclone, such as the \f[C]copy\f[R] and \f[C]sync\f[R]
+commands, will automatically check for \f[C]SetModTime\f[R] support and
+re-upload if necessary to keep the modification times in sync.
+Other commands will not work without \f[C]SetModTime\f[R] support: e.g.
+the \f[C]touch\f[R] command on an existing file will fail, and changes
+to only the modification time of files in a \f[C]mount\f[R] will be
+silently ignored.
+.PP
+Storage systems with \f[C]R/W\f[R] (for read/write) in the ModTime
+column also support these modtime-only operations.
.SS Case Insensitive
.PP
If a cloud storage systems is case sensitive then it is possible to have
@@ -16972,148 +18146,212 @@ defaults for the backends.
.PP
.TS
tab(@);
-l l.
+lw(21.7n) lw(24.1n) lw(24.1n). T{ Encoding T}@T{ Characters +T}@T{ +Encoded as T} _ T{ Asterisk T}@T{ \f[C]*\f[R] +T}@T{ +\f[C]\[uFF0A]\f[R] T} T{ BackQuote T}@T{ \f[C]\[ga]\f[R] +T}@T{ +\f[C]\[uFF40]\f[R] T} T{ BackSlash T}@T{ \f[C]\[rs]\f[R] +T}@T{ +\f[C]\[uFF3C]\f[R] T} T{ Colon T}@T{ \f[C]:\f[R] +T}@T{ +\f[C]\[uFF1A]\f[R] T} T{ CrLf T}@T{ CR 0x0D, LF 0x0A +T}@T{ +\f[C]\[u240D]\f[R], \f[C]\[u240A]\f[R] T} T{ Ctl T}@T{ All control characters 0x00-0x1F +T}@T{ +\f[C]\[u2400]\[u2401]\[u2402]\[u2403]\[u2404]\[u2405]\[u2406]\[u2407]\[u2408]\[u2409]\[u240A]\[u240B]\[u240C]\[u240D]\[u240E]\[u240F]\[u2410]\[u2411]\[u2412]\[u2413]\[u2414]\[u2415]\[u2416]\[u2417]\[u2418]\[u2419]\[u241A]\[u241B]\[u241C]\[u241D]\[u241E]\[u241F]\f[R] T} T{ Del T}@T{ DEL 0x7F +T}@T{ +\f[C]\[u2421]\f[R] T} T{ Dollar T}@T{ \f[C]$\f[R] +T}@T{ +\f[C]\[uFF04]\f[R] T} T{ Dot T}@T{ \f[C].\f[R] or \f[C]..\f[R] as entire string +T}@T{ +\f[C]\[uFF0E]\f[R], \f[C]\[uFF0E]\[uFF0E]\f[R] T} T{ DoubleQuote T}@T{ \f[C]\[dq]\f[R] +T}@T{ +\f[C]\[uFF02]\f[R] T} T{ Hash T}@T{ \f[C]#\f[R] +T}@T{ +\f[C]\[uFF03]\f[R] T} T{ InvalidUtf8 T}@T{ An invalid UTF-8 character (e.g. 
latin1)
+T}@T{
+\f[C]\[uFFFD]\f[R]
T}
T{
LeftCrLfHtVt
T}@T{
-CR 0x0D, LF 0x0A,HT 0x09, VT 0x0B on the left of a string
+CR 0x0D, LF 0x0A, HT 0x09, VT 0x0B on the left of a string
+T}@T{
+\f[C]\[u240D]\f[R], \f[C]\[u240A]\f[R], \f[C]\[u2409]\f[R],
+\f[C]\[u240B]\f[R]
T}
T{
LeftPeriod
T}@T{
\f[C].\f[R] on the left of a string
+T}@T{
+\f[C].\f[R]
T}
T{
LeftSpace
T}@T{
SPACE on the left of a string
+T}@T{
+\f[C]\[u2420]\f[R]
T}
T{
LeftTilde
T}@T{
\f[C]\[ti]\f[R] on the left of a string
+T}@T{
+\f[C]\[uFF5E]\f[R]
T}
T{
LtGt
T}@T{
\f[C]<\f[R], \f[C]>\f[R]
+T}@T{
+\f[C]\[uFF1C]\f[R], \f[C]\[uFF1E]\f[R]
T}
T{
None
T}@T{
No characters are encoded
+T}@T{
T}
T{
Percent
T}@T{
\f[C]%\f[R]
+T}@T{
+\f[C]\[uFF05]\f[R]
T}
T{
Pipe
T}@T{
|
+T}@T{
+\f[C]\[uFF5C]\f[R]
T}
T{
Question
T}@T{
\f[C]?\f[R]
+T}@T{
+\f[C]\[uFF1F]\f[R]
T}
T{
RightCrLfHtVt
T}@T{
CR 0x0D, LF 0x0A, HT 0x09, VT 0x0B on the right of a string
+T}@T{
+\f[C]\[u240D]\f[R], \f[C]\[u240A]\f[R], \f[C]\[u2409]\f[R],
+\f[C]\[u240B]\f[R]
T}
T{
RightPeriod
T}@T{
\f[C].\f[R] on the right of a string
+T}@T{
+\f[C].\f[R]
T}
T{
RightSpace
T}@T{
SPACE on the right of a string
+T}@T{
+\f[C]\[u2420]\f[R]
+T}
+T{
+Semicolon
+T}@T{
+\f[C];\f[R]
+T}@T{
+\f[C]\[uFF1B]\f[R]
T}
T{
SingleQuote
T}@T{
\f[C]\[aq]\f[R]
+T}@T{
+\f[C]\[uFF07]\f[R]
T}
T{
Slash
T}@T{
\f[C]/\f[R]
+T}@T{
+\f[C]\[uFF0F]\f[R]
T}
T{
SquareBracket
T}@T{
\f[C][\f[R], \f[C]]\f[R]
+T}@T{
+\f[C]\[uFF3B]\f[R], \f[C]\[uFF3D]\f[R]
T}
.TE
.SS Encoding example: FTP
@@ -17215,6 +18453,41 @@ a remote which supports writing (\f[C]W\f[R]) then rclone will preserve
the MIME types.
Otherwise they will be guessed from the extension, or the remote itself
may assign the MIME type.
+.SS Metadata
+.PP
+Backends may or may not support reading or writing metadata.
+They may support reading and writing system metadata (metadata intrinsic
+to that backend) and/or user metadata (general purpose metadata).
+.PP
+The levels of metadata support are
+.PP
+.TS
+tab(@);
+l l.
+T{ +Key +T}@T{ +Explanation +T} +_ +T{ +\f[C]R\f[R] +T}@T{ +Read only System Metadata +T} +T{ +\f[C]RW\f[R] +T}@T{ +Read and write System Metadata +T} +T{ +\f[C]RWU\f[R] +T}@T{ +Read and write System Metadata and read and write User Metadata +T} +.TE +.PP +See the metadata docs (https://rclone.org/docs/#metadata) for more info. .SS Optional Features .PP All rclone remotes support a base command set. @@ -17271,6 +18544,29 @@ T}@T{ Yes T} T{ +Akamai Netstorage +T}@T{ +Yes +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +No +T}@T{ +No +T}@T{ +Yes +T} +T{ Amazon Drive T}@T{ Yes @@ -17294,7 +18590,7 @@ T}@T{ Yes T} T{ -Amazon S3 +Amazon S3 (or S3 compatible) T}@T{ No T}@T{ @@ -17547,6 +18843,29 @@ T}@T{ Yes T} T{ +HiDrive +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +No +T}@T{ +No +T}@T{ +Yes +T}@T{ +No +T}@T{ +No +T}@T{ +Yes +T} +T{ HTTP T}@T{ No @@ -17593,6 +18912,29 @@ T}@T{ No T} T{ +Internet Archive +T}@T{ +No +T}@T{ +Yes +T}@T{ +No +T}@T{ +No +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +No +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +No +T} +T{ Jottacloud T}@T{ Yes @@ -17616,6 +18958,29 @@ T}@T{ Yes T} T{ +Koofr +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +No +T}@T{ +No +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +Yes +T} +T{ Mail.ru Cloud T}@T{ Yes @@ -17915,6 +19280,29 @@ T}@T{ Yes T} T{ +Sia +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +Yes +T}@T{ +No +T}@T{ +No +T}@T{ +Yes +T} +T{ SugarSync T}@T{ Yes @@ -18197,6 +19585,7 @@ These flags are available for every command. --delete-during When synchronizing, delete files during transfer --delete-excluded Delete files on dest excluded from sync --disable string Disable a comma separated list of features (use --disable help to see a list) + --disable-http-keep-alives Disable HTTP keep-alives and use each connection once. 
--disable-http2 Disable HTTP/2 in the global transport -n, --dry-run Do a trial run with no permanent changes --dscp string Set DSCP value to connections, value or name, e.g. CS1, LE, DF, AF21 @@ -18206,7 +19595,7 @@ These flags are available for every command. --error-on-no-transfer Sets exit code 9 if no files are transferred, useful in scripts --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file (use - to read from stdin) - --exclude-if-present string Exclude directories if filename is present + --exclude-if-present stringArray Exclude directories if filename is present --expect-continue-timeout duration Timeout when using expect / 100-continue in HTTP (default 1s) --fast-list Use recursive list if available; uses more memory but fewer transactions --files-from stringArray Read list of source-file names from file (use - to read from stdin) @@ -18245,6 +19634,8 @@ These flags are available for every command. --max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000) --max-transfer SizeSuffix Maximum size of data to transfer (default off) --memprofile string Write memory profile to file + -M, --metadata If set, preserve metadata when copying objects + --metadata-set stringArray Add metadata key=value when uploading --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) --modify-window duration Max time diff to be considered the same (default 1ns) @@ -18316,7 +19707,7 @@ These flags are available for every command. 
--use-json-log Use json log format --use-mmap Use mmap allocator (see docs) --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.58.0\[dq]) + --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.59.0\[dq]) -v, --verbose count Print lots more stuff (repeat for more) \f[R] .fi @@ -18372,6 +19763,7 @@ They control the backends and may be set in the config file. --b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) + --b2-version-at Time Show file versions as they were at the specified time (default off) --b2-versions Include old versions in directory listings --box-access-token string Box App Primary Access Token --box-auth-url string Auth server URL @@ -18411,6 +19803,7 @@ They control the backends and may be set in the config file. --chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks --chunker-hash-type string Choose how chunker handles hash sums (default \[dq]md5\[dq]) --chunker-remote string Remote to chunk/unchunk + --combine-upstreams SpaceSepList Upstreams for combining --compress-level int GZIP compression level (-2 to 9) (default -1) --compress-mode string Compression mode (default \[dq]gzip\[dq]) --compress-ram-cache-limit SizeSuffix Some remotes don\[aq]t allow the upload of files with unknown size (default 20Mi) @@ -18443,6 +19836,7 @@ They control the backends and may be set in the config file. 
--drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000) --drive-pacer-burst int Number of API calls to allow without sleeping (default 100) --drive-pacer-min-sleep Duration Minimum time to sleep between API calls (default 100ms) + --drive-resource-key string Resource key for accessing a link-shared file --drive-root-folder-id string ID of the root folder --drive-scope string Scope that rclone should use when requesting access from drive --drive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different drive configs @@ -18498,6 +19892,7 @@ They control the backends and may be set in the config file. --ftp-disable-epsv Disable using EPSV even if server advertises support --ftp-disable-mlsd Disable using MLSD even if server advertises support --ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS) + --ftp-disable-utf8 Disable using UTF-8 even if server advertises support --ftp-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot) --ftp-explicit-tls Use Explicit FTPS (FTP over TLS) --ftp-host string FTP host to connect to @@ -18516,8 +19911,10 @@ They control the backends and may be set in the config file. --gcs-bucket-policy-only Access checks should use bucket-level IAM policies --gcs-client-id string OAuth Client Id --gcs-client-secret string OAuth Client Secret + --gcs-decompress If set this will decompress gzip encoded objects --gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot) --gcs-location string Location for the newly created buckets + --gcs-no-check-bucket If set, don\[aq]t attempt to check the bucket exists or create it --gcs-object-acl string Access Control List for new objects --gcs-project-number string Project number --gcs-service-account-file string Service Account Credentials JSON file path @@ -18543,10 +19940,24 @@ They control the backends and may be set in the config file. 
--hdfs-namenode string Hadoop name node and port --hdfs-service-principal-name string Kerberos service principal name for the namenode --hdfs-username string Hadoop user name + --hidrive-auth-url string Auth server URL + --hidrive-chunk-size SizeSuffix Chunksize for chunked uploads (default 48Mi) + --hidrive-client-id string OAuth Client Id + --hidrive-client-secret string OAuth Client Secret + --hidrive-disable-fetching-member-count Do not fetch number of objects in directories unless it is absolutely necessary + --hidrive-encoding MultiEncoder The encoding for the backend (default Slash,Dot) + --hidrive-endpoint string Endpoint for the service (default \[dq]https://api.hidrive.strato.com/2.1\[dq]) + --hidrive-root-prefix string The root/parent folder for all paths (default \[dq]/\[dq]) + --hidrive-scope-access string Access permissions that rclone should use when requesting access from HiDrive (default \[dq]rw\[dq]) + --hidrive-scope-role string User-level that rclone should use when requesting access from HiDrive (default \[dq]user\[dq]) + --hidrive-token string OAuth Access Token as a JSON blob + --hidrive-token-url string Token server url + --hidrive-upload-concurrency int Concurrency for chunked uploads (default 4) + --hidrive-upload-cutoff SizeSuffix Cutoff/Threshold for chunked uploads (default 96Mi) --http-headers CommaSepList Set HTTP headers for all transactions --http-no-head Don\[aq]t use HEAD requests --http-no-slash Set this if the site doesn\[aq]t end directories with / - --http-url string URL of http host to connect to + --http-url string URL of HTTP host to connect to --hubic-auth-url string Auth server URL --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi) --hubic-client-id string OAuth Client Id @@ -18555,6 +19966,13 @@ They control the backends and may be set in the config file. 
--hubic-no-chunk Don\[aq]t chunk files during streaming upload --hubic-token string OAuth Access Token as a JSON blob --hubic-token-url string Token server url + --internetarchive-access-key-id string IAS3 Access Key + --internetarchive-disable-checksum Don\[aq]t ask the server to test against MD5 checksum calculated by rclone (default true) + --internetarchive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot) + --internetarchive-endpoint string IAS3 Endpoint (default \[dq]https://s3.us.archive.org\[dq]) + --internetarchive-front-endpoint string Host of InternetArchive Frontend (default \[dq]https://archive.org\[dq]) + --internetarchive-secret-access-key string IAS3 Secret Key (password) + --internetarchive-wait-archive Duration Timeout for waiting the server\[aq]s processing tasks (specifically archive and book_op) to finish (default 0s) --jottacloud-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi) @@ -18576,7 +19994,7 @@ They control the backends and may be set in the config file. 
--local-no-preallocate Disable preallocation of disk space for transferred files --local-no-set-modtime Disable setting modtime --local-no-sparse Disable sparse files for multi-thread downloads - --local-nounc string Disable UNC (long path names) conversion on Windows + --local-nounc Disable UNC (long path names) conversion on Windows --local-unicode-normalization Apply unicode NFC normalization to paths and filenames --local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated) --mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true) @@ -18597,11 +20015,11 @@ They control the backends and may be set in the config file. --netstorage-protocol string Select between HTTP or HTTPS protocol (default \[dq]https\[dq]) --netstorage-secret string Set the NetStorage account secret/G2O key for authentication (obscured) -x, --one-file-system Don\[aq]t cross filesystem boundaries (unix/macOS only) + --onedrive-access-scopes SpaceSepList Set scopes to be requested by rclone (default Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access) --onedrive-auth-url string Auth server URL --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi) --onedrive-client-id string OAuth Client Id --onedrive-client-secret string OAuth Client Secret - --onedrive-disable-site-permission Disable the request for Sites.Read.All permission --onedrive-drive-id string The ID of the drive to use --onedrive-drive-type string The type of the drive (personal | business | documentLibrary) --onedrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot) @@ -18625,9 +20043,11 @@ They control the backends and may be set in the config file. 
--pcloud-client-secret string OAuth Client Secret --pcloud-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --pcloud-hostname string Hostname to connect to (default \[dq]api.pcloud.com\[dq]) + --pcloud-password string Your pcloud password (obscured) --pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point (default \[dq]d0\[dq]) --pcloud-token string OAuth Access Token as a JSON blob --pcloud-token-url string Token server url + --pcloud-username string Your pcloud username --premiumizeme-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot) --putio-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --qingstor-access-key-id string QingStor Access Key ID @@ -18680,6 +20100,7 @@ They control the backends and may be set in the config file. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint --s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset) + --s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads --s3-v2-auth If true use v2 authentication --seafile-2fa Two-factor authentication (\[aq]true\[aq] if the account has 2FA enabled) --seafile-create-library Should rclone create a library if it doesn\[aq]t exist @@ -18690,6 +20111,8 @@ They control the backends and may be set in the config file. 
--seafile-url string URL of seafile host to connect to --seafile-user string User name (usually email address) --sftp-ask-password Allow asking for SFTP password when needed + --sftp-chunk-size SizeSuffix Upload and download chunk size (default 32Ki) + --sftp-concurrency int The maximum number of outstanding requests for one file (default 64) --sftp-disable-concurrent-reads If set don\[aq]t use concurrent reads --sftp-disable-concurrent-writes If set don\[aq]t use concurrent writes --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available @@ -18702,12 +20125,14 @@ They control the backends and may be set in the config file. --sftp-known-hosts-file string Optional path to known_hosts file --sftp-md5sum-command string The command used to read md5 hashes --sftp-pass string SSH password, leave blank to use ssh-agent (obscured) - --sftp-path-override string Override path used by SSH connection + --sftp-path-override string Override path used by SSH shell commands --sftp-port int SSH port number (default 22) --sftp-pubkey-file string Optional path to public key file --sftp-server-command string Specifies the path or command to run a sftp server on the remote host + --sftp-set-env SpaceSepList Environment variables to pass to sftp and commands --sftp-set-modtime Set the modified time on the remote if set (default true) --sftp-sha1sum-command string The command used to read sha1 hashes + --sftp-shell-type string The type of SSH shell on remote server, if any --sftp-skip-links Set to skip any symlinks and any other non regular files --sftp-subsystem string Specifies the SSH2 subsystem on the remote host (default \[dq]sftp\[dq]) --sftp-use-fstat If set use fstat instead of stat @@ -18764,6 +20189,7 @@ They control the backends and may be set in the config file. 
--union-action-policy string Policy to choose upstream on ACTION category (default \[dq]epall\[dq]) --union-cache-time int Cache time of usage and free space (in seconds) (default 120) --union-create-policy string Policy to choose upstream on CREATE category (default \[dq]epmfs\[dq]) + --union-min-free-space SizeSuffix Minimum viable free space for lfs/eplfs policies (default 1Gi) --union-search-policy string Policy to choose upstream on SEARCH category (default \[dq]ff\[dq]) --union-upstreams string List of space separated upstreams --uptobox-access-token string Your access token @@ -18775,7 +20201,7 @@ They control the backends and may be set in the config file. --webdav-pass string Password (obscured) --webdav-url string URL of http host to connect to --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using + --webdav-vendor string Name of the WebDAV site/service/software you are using --yandex-auth-url string Auth server URL --yandex-client-id string OAuth Client Id --yandex-client-secret string OAuth Client Secret @@ -19629,7 +21055,7 @@ Optional Flags: .PP Arbitrary rclone flags may be specified on the bisync command line (https://rclone.org/commands/rclone_bisync/), for example -\f[C]rclone bsync ./testdir/path1/ gdrive:testdir/path2/ --drive-skip-gdocs -v -v --timeout 10s\f[R] +\f[C]rclone bisync ./testdir/path1/ gdrive:testdir/path2/ --drive-skip-gdocs -v -v --timeout 10s\f[R] Note that interactions of various rclone flags with bisync process flow has not been fully tested yet. .SS Paths @@ -20030,7 +21456,7 @@ aborted run (requires a \f[C]--resync\f[R] to recover). .PP Bisync is considered \f[I]BETA\f[R] and has been tested with the following backends: - Local filesystem - Google Drive - Dropbox - -OneDrive - S3 - SFTP +OneDrive - S3 - SFTP - Yandex Disk .PP It has not been fully tested with other services yet. 
If it works, or sorta works, please let us know and we\[aq]ll update the @@ -20452,8 +21878,8 @@ consider using the flag Google docs exist as virtual files on Google Drive and cannot be transferred to other filesystems natively. While it is possible to export a Google doc to a normal file (with -\f[C].xlsx\f[R] extension, for example), it\[aq]s not possible to import -a normal file back into a Google document. +\f[C].xlsx\f[R] extension, for example), it is not possible to import a +normal file back into a Google document. .PP Bisync\[aq]s handling of Google Doc files is to flag them in the run log output for user\[aq]s attention and ignore them for any file transfers, @@ -21160,7 +22586,7 @@ replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t be used in JSON strings. .SS Standard options .PP -Here are the standard options specific to fichier (1Fichier). +Here are the Standard options specific to fichier (1Fichier). .SS --fichier-api-key .PP Your API Key, get it from https://1fichier.com/console/params.pl. @@ -21176,7 +22602,7 @@ Type: string Required: false .SS Advanced options .PP -Here are the advanced options specific to fichier (1Fichier). +Here are the Advanced options specific to fichier (1Fichier). .SS --fichier-shared-folder .PP If you want to download a shared folder, add this parameter. @@ -21249,7 +22675,7 @@ rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member of an rclone union remote. .PP See List of backends that do not support rclone -about (https://rclone.org/overview/#optional-features) See rclone +about (https://rclone.org/overview/#optional-features) and rclone about (https://rclone.org/commands/rclone_about/) .SH Alias .PP @@ -21360,7 +22786,7 @@ rclone copy /home/source remote:source .fi .SS Standard options .PP -Here are the standard options specific to alias (Alias for an existing +Here are the Standard options specific to alias (Alias for an existing remote). 
.SS --alias-remote .PP @@ -21576,7 +23002,7 @@ Your \f[C]amazon.co.uk\f[R] email and password should work here just fine. .SS Standard options .PP -Here are the standard options specific to amazon cloud drive (Amazon +Here are the Standard options specific to amazon cloud drive (Amazon Drive). .SS --acd-client-id .PP @@ -21610,7 +23036,7 @@ Type: string Required: false .SS Advanced options .PP -Here are the advanced options specific to amazon cloud drive (Amazon +Here are the Advanced options specific to amazon cloud drive (Amazon Drive). .SS --acd-token .PP @@ -21773,7 +23199,7 @@ rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member of an rclone union remote. .PP See List of backends that do not support rclone -about (https://rclone.org/overview/#optional-features) See rclone +about (https://rclone.org/overview/#optional-features) and rclone about (https://rclone.org/commands/rclone_about/) .SH Amazon S3 Storage Providers .PP @@ -21785,12 +23211,22 @@ Alibaba Cloud (Aliyun) Object Storage System (OSS) .IP \[bu] 2 Ceph .IP \[bu] 2 +China Mobile Ecloud Elastic Object Storage (EOS) +.IP \[bu] 2 +Cloudflare R2 +.IP \[bu] 2 +Arvan Cloud Object Storage (AOS) +.IP \[bu] 2 DigitalOcean Spaces .IP \[bu] 2 Dreamhost .IP \[bu] 2 +Huawei OBS +.IP \[bu] 2 IBM COS S3 .IP \[bu] 2 +IDrive e2 +.IP \[bu] 2 Minio .IP \[bu] 2 RackCorp Object Storage @@ -21876,7 +23312,7 @@ name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] -XX / Amazon S3 Compliant Storage Providers including AWS, Ceph, Dreamhost, IBM COS, Minio, and Tencent COS +XX / Amazon S3 Compliant Storage Providers including AWS, Ceph, ChinaMobile, ArvanCloud, Dreamhost, IBM COS, Minio, and Tencent COS \[rs] \[dq]s3\[dq] [snip] Storage> s3 @@ -22500,10 +23936,11 @@ A simple solution is to set the \f[C]--s3-upload-cutoff 0\f[R] and force all the files to be uploaded as multipart. 
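The same forcing of multipart uploads can also be made persistent in the remote's configuration section rather than passed on every command line. A sketch, in which the remote name `mys3` and the provider are illustrative; it assumes the usual mapping of backend flags such as `--s3-upload-cutoff` to config keys with the backend prefix dropped:

```ini
[mys3]
type = s3
provider = AWS
# force every upload to take the multipart path
upload_cutoff = 0
```

With this in place, `rclone copy /path mys3:bucket` would upload all files as multipart without needing the `--s3-upload-cutoff 0` flag.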
.SS Standard options
.PP
-Here are the standard options specific to s3 (Amazon S3 Compliant
-Storage Providers including AWS, Alibaba, Ceph, Digital Ocean,
-Dreamhost, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent
-COS).
+Here are the Standard options specific to s3 (Amazon S3 Compliant
+Storage Providers including AWS, Alibaba, Ceph, China Mobile,
+Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS,
+IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS,
+StackPath, Storj, Tencent COS and Wasabi).
.SS --s3-provider
.PP
Choose your S3 provider.
@@ -22539,6 +23976,24 @@ Alibaba Cloud Object Storage System (OSS) formerly Aliyun
Ceph Object Storage
.RE
.IP \[bu] 2
+\[dq]ChinaMobile\[dq]
+.RS 2
+.IP \[bu] 2
+China Mobile Ecloud Elastic Object Storage (EOS)
+.RE
+.IP \[bu] 2
+\[dq]Cloudflare\[dq]
+.RS 2
+.IP \[bu] 2
+Cloudflare R2 Storage
+.RE
+.IP \[bu] 2
+\[dq]ArvanCloud\[dq]
+.RS 2
+.IP \[bu] 2
+Arvan Cloud Object Storage (AOS)
+.RE
+.IP \[bu] 2
\[dq]DigitalOcean\[dq]
.RS 2
.IP \[bu] 2
@@ -22551,12 +24006,24 @@ Digital Ocean Spaces
Dreamhost DreamObjects
.RE
.IP \[bu] 2
+\[dq]HuaweiOBS\[dq]
+.RS 2
+.IP \[bu] 2
+Huawei Object Storage Service
+.RE
+.IP \[bu] 2
\[dq]IBMCOS\[dq]
.RS 2
.IP \[bu] 2
IBM COS S3
.RE
.IP \[bu] 2
+\[dq]IDrive\[dq]
+.RS 2
+.IP \[bu] 2
+IDrive e2
+.RE
+.IP \[bu] 2
\[dq]LyveCloud\[dq]
.RS 2
.IP \[bu] 2
@@ -23070,6 +24537,149 @@ Amsterdam, The Netherlands
.IP \[bu] 2
Paris, France
.RE
+.IP \[bu] 2
+\[dq]pl-waw\[dq]
+.RS 2
+.IP \[bu] 2
+Warsaw, Poland
+.RE
+.RE
+.SS --s3-region
+.PP
+Region to connect to - the location where your bucket will be created
+and your data stored.
+Needs to be the same as your endpoint.
+.PP +Properties: +.IP \[bu] 2 +Config: region +.IP \[bu] 2 +Env Var: RCLONE_S3_REGION +.IP \[bu] 2 +Provider: HuaweiOBS +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]af-south-1\[dq] +.RS 2 +.IP \[bu] 2 +AF-Johannesburg +.RE +.IP \[bu] 2 +\[dq]ap-southeast-2\[dq] +.RS 2 +.IP \[bu] 2 +AP-Bangkok +.RE +.IP \[bu] 2 +\[dq]ap-southeast-3\[dq] +.RS 2 +.IP \[bu] 2 +AP-Singapore +.RE +.IP \[bu] 2 +\[dq]cn-east-3\[dq] +.RS 2 +.IP \[bu] 2 +CN East-Shanghai1 +.RE +.IP \[bu] 2 +\[dq]cn-east-2\[dq] +.RS 2 +.IP \[bu] 2 +CN East-Shanghai2 +.RE +.IP \[bu] 2 +\[dq]cn-north-1\[dq] +.RS 2 +.IP \[bu] 2 +CN North-Beijing1 +.RE +.IP \[bu] 2 +\[dq]cn-north-4\[dq] +.RS 2 +.IP \[bu] 2 +CN North-Beijing4 +.RE +.IP \[bu] 2 +\[dq]cn-south-1\[dq] +.RS 2 +.IP \[bu] 2 +CN South-Guangzhou +.RE +.IP \[bu] 2 +\[dq]ap-southeast-1\[dq] +.RS 2 +.IP \[bu] 2 +CN-Hong Kong +.RE +.IP \[bu] 2 +\[dq]sa-argentina-1\[dq] +.RS 2 +.IP \[bu] 2 +LA-Buenos Aires1 +.RE +.IP \[bu] 2 +\[dq]sa-peru-1\[dq] +.RS 2 +.IP \[bu] 2 +LA-Lima1 +.RE +.IP \[bu] 2 +\[dq]na-mexico-1\[dq] +.RS 2 +.IP \[bu] 2 +LA-Mexico City1 +.RE +.IP \[bu] 2 +\[dq]sa-chile-1\[dq] +.RS 2 +.IP \[bu] 2 +LA-Santiago2 +.RE +.IP \[bu] 2 +\[dq]sa-brazil-1\[dq] +.RS 2 +.IP \[bu] 2 +LA-Sao Paulo1 +.RE +.IP \[bu] 2 +\[dq]ru-northwest-2\[dq] +.RS 2 +.IP \[bu] 2 +RU-Moscow2 +.RE +.RE +.SS --s3-region +.PP +Region to connect to. +.PP +Properties: +.IP \[bu] 2 +Config: region +.IP \[bu] 2 +Env Var: RCLONE_S3_REGION +.IP \[bu] 2 +Provider: Cloudflare +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]auto\[dq] +.RS 2 +.IP \[bu] 2 +R2 buckets are automatically distributed across Cloudflare\[aq]s data +centers for low latency. 
+.RE .RE .SS --s3-region .PP @@ -23084,7 +24694,8 @@ Config: region .IP \[bu] 2 Env Var: RCLONE_S3_REGION .IP \[bu] 2 -Provider: !AWS,Alibaba,RackCorp,Scaleway,Storj,TencentCOS +Provider: +!AWS,Alibaba,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive .IP \[bu] 2 Type: string .IP \[bu] 2 @@ -23129,6 +24740,240 @@ Type: string Required: false .SS --s3-endpoint .PP +Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API. +.PP +Properties: +.IP \[bu] 2 +Config: endpoint +.IP \[bu] 2 +Env Var: RCLONE_S3_ENDPOINT +.IP \[bu] 2 +Provider: ChinaMobile +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]eos-wuxi-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +The default endpoint - a good choice if you are unsure. +.IP \[bu] 2 +East China (Suzhou) +.RE +.IP \[bu] 2 +\[dq]eos-jinan-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +East China (Jinan) +.RE +.IP \[bu] 2 +\[dq]eos-ningbo-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +East China (Hangzhou) +.RE +.IP \[bu] 2 +\[dq]eos-shanghai-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +East China (Shanghai-1) +.RE +.IP \[bu] 2 +\[dq]eos-zhengzhou-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +Central China (Zhengzhou) +.RE +.IP \[bu] 2 +\[dq]eos-hunan-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +Central China (Changsha-1) +.RE +.IP \[bu] 2 +\[dq]eos-zhuzhou-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +Central China (Changsha-2) +.RE +.IP \[bu] 2 +\[dq]eos-guangzhou-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +South China (Guangzhou-2) +.RE +.IP \[bu] 2 +\[dq]eos-dongguan-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +South China (Guangzhou-3) +.RE +.IP \[bu] 2 +\[dq]eos-beijing-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +North China (Beijing-1) +.RE +.IP \[bu] 2 +\[dq]eos-beijing-2.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +North China (Beijing-2) +.RE +.IP \[bu] 2 +\[dq]eos-beijing-4.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +North China (Beijing-3) +.RE +.IP \[bu] 2 
+\[dq]eos-huhehaote-1.cmecloud.cn\[dq]
+.RS 2
+.IP \[bu] 2
+North China (Huhehaote)
+.RE
+.IP \[bu] 2
+\[dq]eos-chengdu-1.cmecloud.cn\[dq]
+.RS 2
+.IP \[bu] 2
+Southwest China (Chengdu)
+.RE
+.IP \[bu] 2
+\[dq]eos-chongqing-1.cmecloud.cn\[dq]
+.RS 2
+.IP \[bu] 2
+Southwest China (Chongqing)
+.RE
+.IP \[bu] 2
+\[dq]eos-guiyang-1.cmecloud.cn\[dq]
+.RS 2
+.IP \[bu] 2
+Southwest China (Guiyang)
+.RE
+.IP \[bu] 2
+\[dq]eos-xian-1.cmecloud.cn\[dq]
+.RS 2
+.IP \[bu] 2
+Northwest China (Xian)
+.RE
+.IP \[bu] 2
+\[dq]eos-yunnan.cmecloud.cn\[dq]
+.RS 2
+.IP \[bu] 2
+Yunnan China (Kunming)
+.RE
+.IP \[bu] 2
+\[dq]eos-yunnan-2.cmecloud.cn\[dq]
+.RS 2
+.IP \[bu] 2
+Yunnan China (Kunming-2)
+.RE
+.IP \[bu] 2
+\[dq]eos-tianjin-1.cmecloud.cn\[dq]
+.RS 2
+.IP \[bu] 2
+Tianjin China (Tianjin)
+.RE
+.IP \[bu] 2
+\[dq]eos-jilin-1.cmecloud.cn\[dq]
+.RS 2
+.IP \[bu] 2
+Jilin China (Changchun)
+.RE
+.IP \[bu] 2
+\[dq]eos-hubei-1.cmecloud.cn\[dq]
+.RS 2
+.IP \[bu] 2
+Hubei China (Xiangyan)
+.RE
+.IP \[bu] 2
+\[dq]eos-jiangxi-1.cmecloud.cn\[dq]
+.RS 2
+.IP \[bu] 2
+Jiangxi China (Nanchang)
+.RE
+.IP \[bu] 2
+\[dq]eos-gansu-1.cmecloud.cn\[dq]
+.RS 2
+.IP \[bu] 2
+Gansu China (Lanzhou)
+.RE
+.IP \[bu] 2
+\[dq]eos-shanxi-1.cmecloud.cn\[dq]
+.RS 2
+.IP \[bu] 2
+Shanxi China (Taiyuan)
+.RE
+.IP \[bu] 2
+\[dq]eos-liaoning-1.cmecloud.cn\[dq]
+.RS 2
+.IP \[bu] 2
+Liaoning China (Shenyang)
+.RE
+.IP \[bu] 2
+\[dq]eos-hebei-1.cmecloud.cn\[dq]
+.RS 2
+.IP \[bu] 2
+Hebei China (Shijiazhuang)
+.RE
+.IP \[bu] 2
+\[dq]eos-fujian-1.cmecloud.cn\[dq]
+.RS 2
+.IP \[bu] 2
+Fujian China (Xiamen)
+.RE
+.IP \[bu] 2
+\[dq]eos-guangxi-1.cmecloud.cn\[dq]
+.RS 2
+.IP \[bu] 2
+Guangxi China (Nanning)
+.RE
+.IP \[bu] 2
+\[dq]eos-anhui-1.cmecloud.cn\[dq]
+.RS 2
+.IP \[bu] 2
+Anhui China (Huainan)
+.RE
+.RE
+.SS --s3-endpoint
+.PP
+Endpoint for Arvan Cloud Object Storage (AOS) API.
+.PP +Properties: +.IP \[bu] 2 +Config: endpoint +.IP \[bu] 2 +Env Var: RCLONE_S3_ENDPOINT +.IP \[bu] 2 +Provider: ArvanCloud +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]s3.ir-thr-at1.arvanstorage.com\[dq] +.RS 2 +.IP \[bu] 2 +The default endpoint - a good choice if you are unsure. +.IP \[bu] 2 +Tehran Iran (Asiatech) +.RE +.IP \[bu] 2 +\[dq]s3.ir-tbz-sh1.arvanstorage.com\[dq] +.RS 2 +.IP \[bu] 2 +Tabriz Iran (Shahriar) +.RE +.RE +.SS --s3-endpoint +.PP Endpoint for IBM COS S3 API. .PP Specify if using an IBM COS On Premise. @@ -23691,6 +25536,115 @@ Middle East 1 (Dubai) .RE .SS --s3-endpoint .PP +Endpoint for OBS API. +.PP +Properties: +.IP \[bu] 2 +Config: endpoint +.IP \[bu] 2 +Env Var: RCLONE_S3_ENDPOINT +.IP \[bu] 2 +Provider: HuaweiOBS +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]obs.af-south-1.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +AF-Johannesburg +.RE +.IP \[bu] 2 +\[dq]obs.ap-southeast-2.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +AP-Bangkok +.RE +.IP \[bu] 2 +\[dq]obs.ap-southeast-3.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +AP-Singapore +.RE +.IP \[bu] 2 +\[dq]obs.cn-east-3.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +CN East-Shanghai1 +.RE +.IP \[bu] 2 +\[dq]obs.cn-east-2.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +CN East-Shanghai2 +.RE +.IP \[bu] 2 +\[dq]obs.cn-north-1.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +CN North-Beijing1 +.RE +.IP \[bu] 2 +\[dq]obs.cn-north-4.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +CN North-Beijing4 +.RE +.IP \[bu] 2 +\[dq]obs.cn-south-1.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +CN South-Guangzhou +.RE +.IP \[bu] 2 +\[dq]obs.ap-southeast-1.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +CN-Hong Kong +.RE +.IP \[bu] 2 +\[dq]obs.sa-argentina-1.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +LA-Buenos Aires1 +.RE +.IP \[bu] 2 +\[dq]obs.sa-peru-1.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +LA-Lima1 +.RE 
+.IP \[bu] 2 +\[dq]obs.na-mexico-1.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +LA-Mexico City1 +.RE +.IP \[bu] 2 +\[dq]obs.sa-chile-1.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +LA-Santiago2 +.RE +.IP \[bu] 2 +\[dq]obs.sa-brazil-1.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +LA-Sao Paulo1 +.RE +.IP \[bu] 2 +\[dq]obs.ru-northwest-2.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +RU-Moscow2 +.RE +.RE +.SS --s3-endpoint +.PP Endpoint for Scaleway Object Storage. .PP Properties: @@ -23719,6 +25673,12 @@ Amsterdam Endpoint .IP \[bu] 2 Paris Endpoint .RE +.IP \[bu] 2 +\[dq]s3.pl-waw.scw.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Warsaw Endpoint +.RE .RE .SS --s3-endpoint .PP @@ -24073,7 +26033,7 @@ Config: endpoint Env Var: RCLONE_S3_ENDPOINT .IP \[bu] 2 Provider: -!AWS,IBMCOS,TencentCOS,Alibaba,Scaleway,StackPath,Storj,RackCorp +!AWS,IBMCOS,IDrive,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,ArvanCloud,Scaleway,StackPath,Storj,RackCorp .IP \[bu] 2 Type: string .IP \[bu] 2 @@ -24159,6 +26119,12 @@ Wasabi AP Northeast 1 (Tokyo) endpoint .IP \[bu] 2 Wasabi AP Northeast 2 (Osaka) endpoint .RE +.IP \[bu] 2 +\[dq]s3.ir-thr-at1.arvanstorage.com\[dq] +.RS 2 +.IP \[bu] 2 +ArvanCloud Tehran Iran (Asiatech) endpoint +.RE .RE .SS --s3-location-constraint .PP @@ -24333,6 +26299,240 @@ AWS GovCloud (US) Region .RE .SS --s3-location-constraint .PP +Location constraint - must match endpoint. +.PP +Used when creating buckets only. 
+.PP
+Properties:
+.IP \[bu] 2
+Config: location_constraint
+.IP \[bu] 2
+Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+.IP \[bu] 2
+Provider: ChinaMobile
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]wuxi1\[dq]
+.RS 2
+.IP \[bu] 2
+East China (Suzhou)
+.RE
+.IP \[bu] 2
+\[dq]jinan1\[dq]
+.RS 2
+.IP \[bu] 2
+East China (Jinan)
+.RE
+.IP \[bu] 2
+\[dq]ningbo1\[dq]
+.RS 2
+.IP \[bu] 2
+East China (Hangzhou)
+.RE
+.IP \[bu] 2
+\[dq]shanghai1\[dq]
+.RS 2
+.IP \[bu] 2
+East China (Shanghai-1)
+.RE
+.IP \[bu] 2
+\[dq]zhengzhou1\[dq]
+.RS 2
+.IP \[bu] 2
+Central China (Zhengzhou)
+.RE
+.IP \[bu] 2
+\[dq]hunan1\[dq]
+.RS 2
+.IP \[bu] 2
+Central China (Changsha-1)
+.RE
+.IP \[bu] 2
+\[dq]zhuzhou1\[dq]
+.RS 2
+.IP \[bu] 2
+Central China (Changsha-2)
+.RE
+.IP \[bu] 2
+\[dq]guangzhou1\[dq]
+.RS 2
+.IP \[bu] 2
+South China (Guangzhou-2)
+.RE
+.IP \[bu] 2
+\[dq]dongguan1\[dq]
+.RS 2
+.IP \[bu] 2
+South China (Guangzhou-3)
+.RE
+.IP \[bu] 2
+\[dq]beijing1\[dq]
+.RS 2
+.IP \[bu] 2
+North China (Beijing-1)
+.RE
+.IP \[bu] 2
+\[dq]beijing2\[dq]
+.RS 2
+.IP \[bu] 2
+North China (Beijing-2)
+.RE
+.IP \[bu] 2
+\[dq]beijing4\[dq]
+.RS 2
+.IP \[bu] 2
+North China (Beijing-3)
+.RE
+.IP \[bu] 2
+\[dq]huhehaote1\[dq]
+.RS 2
+.IP \[bu] 2
+North China (Huhehaote)
+.RE
+.IP \[bu] 2
+\[dq]chengdu1\[dq]
+.RS 2
+.IP \[bu] 2
+Southwest China (Chengdu)
+.RE
+.IP \[bu] 2
+\[dq]chongqing1\[dq]
+.RS 2
+.IP \[bu] 2
+Southwest China (Chongqing)
+.RE
+.IP \[bu] 2
+\[dq]guiyang1\[dq]
+.RS 2
+.IP \[bu] 2
+Southwest China (Guiyang)
+.RE
+.IP \[bu] 2
+\[dq]xian1\[dq]
+.RS 2
+.IP \[bu] 2
+Northwest China (Xian)
+.RE
+.IP \[bu] 2
+\[dq]yunnan\[dq]
+.RS 2
+.IP \[bu] 2
+Yunnan China (Kunming)
+.RE
+.IP \[bu] 2
+\[dq]yunnan2\[dq]
+.RS 2
+.IP \[bu] 2
+Yunnan China (Kunming-2)
+.RE
+.IP \[bu] 2
+\[dq]tianjin1\[dq]
+.RS 2
+.IP \[bu] 2
+Tianjin China (Tianjin)
+.RE
+.IP \[bu] 2
+\[dq]jilin1\[dq]
+.RS 2
+.IP \[bu] 2
+Jilin China (Changchun)
+.RE
+.IP \[bu] 2
+\[dq]hubei1\[dq] +.RS 2 +.IP \[bu] 2 +Hubei China (Xiangyan) +.RE +.IP \[bu] 2 +\[dq]jiangxi1\[dq] +.RS 2 +.IP \[bu] 2 +Jiangxi China (Nanchang) +.RE +.IP \[bu] 2 +\[dq]gansu1\[dq] +.RS 2 +.IP \[bu] 2 +Gansu China (Lanzhou) +.RE +.IP \[bu] 2 +\[dq]shanxi1\[dq] +.RS 2 +.IP \[bu] 2 +Shanxi China (Taiyuan) +.RE +.IP \[bu] 2 +\[dq]liaoning1\[dq] +.RS 2 +.IP \[bu] 2 +Liaoning China (Shenyang) +.RE +.IP \[bu] 2 +\[dq]hebei1\[dq] +.RS 2 +.IP \[bu] 2 +Hebei China (Shijiazhuang) +.RE +.IP \[bu] 2 +\[dq]fujian1\[dq] +.RS 2 +.IP \[bu] 2 +Fujian China (Xiamen) +.RE +.IP \[bu] 2 +\[dq]guangxi1\[dq] +.RS 2 +.IP \[bu] 2 +Guangxi China (Nanning) +.RE +.IP \[bu] 2 +\[dq]anhui1\[dq] +.RS 2 +.IP \[bu] 2 +Anhui China (Huainan) +.RE +.RE +.SS --s3-location-constraint +.PP +Location constraint - must match endpoint. +.PP +Used when creating buckets only. +.PP +Properties: +.IP \[bu] 2 +Config: location_constraint +.IP \[bu] 2 +Env Var: RCLONE_S3_LOCATION_CONSTRAINT +.IP \[bu] 2 +Provider: ArvanCloud +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]ir-thr-at1\[dq] +.RS 2 +.IP \[bu] 2 +Tehran Iran (Asiatech) +.RE +.IP \[bu] 2 +\[dq]ir-tbz-sh1\[dq] +.RS 2 +.IP \[bu] 2 +Tabriz Iran (Shahriar) +.RE +.RE +.SS --s3-location-constraint +.PP Location constraint - must match endpoint when using IBM Cloud Public. .PP For on-prem COS, do not make a selection from this list, hit enter. 
@@ -24692,7 +26892,7 @@ Config: location_constraint Env Var: RCLONE_S3_LOCATION_CONSTRAINT .IP \[bu] 2 Provider: -!AWS,IBMCOS,Alibaba,RackCorp,Scaleway,StackPath,Storj,TencentCOS +!AWS,IBMCOS,IDrive,Alibaba,HuaweiOBS,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,StackPath,Storj,TencentCOS .IP \[bu] 2 Type: string .IP \[bu] 2 @@ -24716,7 +26916,7 @@ Config: acl .IP \[bu] 2 Env Var: RCLONE_S3_ACL .IP \[bu] 2 -Provider: !Storj +Provider: !Storj,Cloudflare .IP \[bu] 2 Type: string .IP \[bu] 2 @@ -24843,7 +27043,7 @@ Config: server_side_encryption .IP \[bu] 2 Env Var: RCLONE_S3_SERVER_SIDE_ENCRYPTION .IP \[bu] 2 -Provider: AWS,Ceph,Minio +Provider: AWS,Ceph,ChinaMobile,Minio .IP \[bu] 2 Type: string .IP \[bu] 2 @@ -25019,6 +27219,74 @@ Infrequent access storage mode .RE .SS --s3-storage-class .PP +The storage class to use when storing new objects in ChinaMobile. +.PP +Properties: +.IP \[bu] 2 +Config: storage_class +.IP \[bu] 2 +Env Var: RCLONE_S3_STORAGE_CLASS +.IP \[bu] 2 +Provider: ChinaMobile +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]\[dq] +.RS 2 +.IP \[bu] 2 +Default +.RE +.IP \[bu] 2 +\[dq]STANDARD\[dq] +.RS 2 +.IP \[bu] 2 +Standard storage class +.RE +.IP \[bu] 2 +\[dq]GLACIER\[dq] +.RS 2 +.IP \[bu] 2 +Archive storage mode +.RE +.IP \[bu] 2 +\[dq]STANDARD_IA\[dq] +.RS 2 +.IP \[bu] 2 +Infrequent access storage mode +.RE +.RE +.SS --s3-storage-class +.PP +The storage class to use when storing new objects in ArvanCloud. +.PP +Properties: +.IP \[bu] 2 +Config: storage_class +.IP \[bu] 2 +Env Var: RCLONE_S3_STORAGE_CLASS +.IP \[bu] 2 +Provider: ArvanCloud +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]STANDARD\[dq] +.RS 2 +.IP \[bu] 2 +Standard storage class +.RE +.RE +.SS --s3-storage-class +.PP The storage class to use when storing new objects in Tencent COS. 
.PP
Properties:
@@ -25103,10 +27371,11 @@ Prices are lower, but it needs to be restored first to be accessed.
.RE
.SS Advanced options
.PP
-Here are the advanced options specific to s3 (Amazon S3 Compliant
-Storage Providers including AWS, Alibaba, Ceph, Digital Ocean,
-Dreamhost, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent
-COS).
+Here are the Advanced options specific to s3 (Amazon S3 Compliant
+Storage Providers including AWS, Alibaba, Ceph, China Mobile,
+Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS,
+IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS,
+StackPath, Storj, Tencent COS and Wasabi).
.SS --s3-bucket-acl
.PP
Canned ACL used when creating buckets.
@@ -25190,7 +27459,7 @@ Config: sse_customer_algorithm
.IP \[bu] 2
Env Var: RCLONE_S3_SSE_CUSTOMER_ALGORITHM
.IP \[bu] 2
-Provider: AWS,Ceph,Minio
+Provider: AWS,Ceph,ChinaMobile,Minio
.IP \[bu] 2
Type: string
.IP \[bu] 2
@@ -25222,7 +27491,7 @@ Config: sse_customer_key
.IP \[bu] 2
Env Var: RCLONE_S3_SSE_CUSTOMER_KEY
.IP \[bu] 2
-Provider: AWS,Ceph,Minio
+Provider: AWS,Ceph,ChinaMobile,Minio
.IP \[bu] 2
Type: string
.IP \[bu] 2
@@ -25251,7 +27520,7 @@ Config: sse_customer_key_md5
.IP \[bu] 2
Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_MD5
.IP \[bu] 2
-Provider: AWS,Ceph,Minio
+Provider: AWS,Ceph,ChinaMobile,Minio
.IP \[bu] 2
Type: string
.IP \[bu] 2
@@ -25308,6 +27577,13 @@ stream upload is 48 GiB.
If you wish to stream upload larger files then you will need to
increase chunk_size.
.PP
+Increasing the chunk size decreases the accuracy of the progress
+statistics displayed with the \[dq]-P\[dq] flag.
+Rclone treats a chunk as sent once it has been buffered by the AWS SDK,
+when in fact it may still be uploading.
+A bigger chunk size means a bigger AWS SDK buffer and progress
+reporting that deviates further from the truth.
+.PP Properties: .IP \[bu] 2 Config: chunk_size @@ -25777,6 +28053,141 @@ Env Var: RCLONE_S3_USE_MULTIPART_ETAG Type: Tristate .IP \[bu] 2 Default: unset +.SS --s3-use-presigned-request +.PP +Whether to use a presigned request or PutObject for single part uploads +.PP +If this is false rclone will use PutObject from the AWS SDK to upload an +object. +.PP +Versions of rclone < 1.59 use presigned requests to upload a single part +object and setting this flag to true will re-enable that functionality. +This shouldn\[aq]t be necessary except in exceptional circumstances or +for testing. +.PP +Properties: +.IP \[bu] 2 +Config: use_presigned_request +.IP \[bu] 2 +Env Var: RCLONE_S3_USE_PRESIGNED_REQUEST +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS Metadata +.PP +User metadata is stored as x-amz-meta- keys. +S3 metadata keys are case insensitive and are always returned in lower +case. +.PP +Here are the possible system metadata items for the s3 backend. +.PP +.TS +tab(@); +lw(11.1n) lw(11.1n) lw(11.1n) lw(16.6n) lw(20.3n). 
+T{ +Name +T}@T{ +Help +T}@T{ +Type +T}@T{ +Example +T}@T{ +Read Only +T} +_ +T{ +btime +T}@T{ +Time of file birth (creation) read from Last-Modified header +T}@T{ +RFC 3339 +T}@T{ +2006-01-02T15:04:05.999999999Z07:00 +T}@T{ +\f[B]Y\f[R] +T} +T{ +cache-control +T}@T{ +Cache-Control header +T}@T{ +string +T}@T{ +no-cache +T}@T{ +N +T} +T{ +content-disposition +T}@T{ +Content-Disposition header +T}@T{ +string +T}@T{ +inline +T}@T{ +N +T} +T{ +content-encoding +T}@T{ +Content-Encoding header +T}@T{ +string +T}@T{ +gzip +T}@T{ +N +T} +T{ +content-language +T}@T{ +Content-Language header +T}@T{ +string +T}@T{ +en-US +T}@T{ +N +T} +T{ +content-type +T}@T{ +Content-Type header +T}@T{ +string +T}@T{ +text/plain +T}@T{ +N +T} +T{ +mtime +T}@T{ +Time of last modification, read from rclone metadata +T}@T{ +RFC 3339 +T}@T{ +2006-01-02T15:04:05.999999999Z07:00 +T}@T{ +N +T} +T{ +tier +T}@T{ +Tier of the object +T}@T{ +string +T}@T{ +GLACIER +T}@T{ +\f[B]Y\f[R] +T} +.TE +.PP +See the metadata (https://rclone.org/docs/#metadata) docs for more info. .SS Backend commands .PP Here are the commands specific to the s3 backend. @@ -25791,9 +28202,8 @@ rclone backend COMMAND remote: .PP The help below will explain what arguments each command takes. .PP -See the \[dq]rclone backend\[dq] -command (https://rclone.org/commands/rclone_backend/) for more info on -how to pass options and arguments. +See the backend (https://rclone.org/commands/rclone_backend/) command +for more info on how to pass options and arguments. .PP These can be run on a running backend using the rc command backend/command (https://rclone.org/rc/#backend-command). @@ -25982,10 +28392,14 @@ used for transferring bulk data back to AWS. Its main software interface is S3 object storage. 
.PP To use rclone with AWS Snowball Edge devices, configure as standard for -an \[aq]S3 Compatible Service\[aq] be sure to set -\f[C]upload_cutoff = 0\f[R] otherwise you will run into authentication -header issues as the snowball device does not support query parameter -based authentication. +an \[aq]S3 Compatible Service\[aq]. +.PP +If using rclone pre v1.59 be sure to set \f[C]upload_cutoff = 0\f[R] +otherwise you will run into authentication header issues as the snowball +device does not support query parameter based authentication. +.PP +With rclone v1.59 or later setting \f[C]upload_cutoff\f[R] should not be +necessary. .PP eg. .IP @@ -26027,11 +28441,11 @@ storage_class = \f[R] .fi .PP -If you are using an older version of CEPH, e.g. -10.2.x Jewel, then you may need to supply the parameter -\f[C]--s3-upload-cutoff 0\f[R] or put this in the config file as -\f[C]upload_cutoff 0\f[R] to work around a bug which causes uploading of -small files to fail. +If you are using an older version of CEPH (e.g. +10.2.x Jewel) and a version of rclone before v1.59 then you may need to +supply the parameter \f[C]--s3-upload-cutoff 0\f[R] or put this in the +config file as \f[C]upload_cutoff 0\f[R] to work around a bug which +causes uploading of small files to fail. .PP Note also that Ceph sometimes puts \f[C]/\f[R] in the passwords it gives users. @@ -26061,6 +28475,115 @@ removed). Because this is a json dump, it is encoding the \f[C]/\f[R] as \f[C]\[rs]/\f[R], so if you use the secret key as \f[C]xxxxxx/xxxx\f[R] it will work fine. +.SS Cloudflare R2 +.PP +Cloudflare R2 (https://blog.cloudflare.com/r2-open-beta/) Storage allows +developers to store large amounts of unstructured data without the +costly egress bandwidth fees associated with typical cloud storage +services. +.PP +Here is an example of making a Cloudflare R2 configuration. +First run: +.IP +.nf +\f[C] +rclone config +\f[R] +.fi +.PP +This will guide you through an interactive setup process. 
+.PP +Note that all buckets are private, and all are stored in the same +\[dq]auto\[dq] region. +It is necessary to use Cloudflare workers to share the content of a +bucket publicly. +.IP +.nf +\f[C] +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> r2 +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. +\&... +XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi + \[rs] (s3) +\&... +Storage> s3 +Option provider. +Choose your S3 provider. +Choose a number from below, or type in your own value. +Press Enter to leave empty. +\&... +XX / Cloudflare R2 Storage + \[rs] (Cloudflare) +\&... +provider> Cloudflare +Option env_auth. +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Choose a number from below, or type in your own boolean value (true or false). +Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \[rs] (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \[rs] (true) +env_auth> 1 +Option access_key_id. +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +access_key_id> ACCESS_KEY +Option secret_access_key. +AWS Secret Access Key (password). +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +secret_access_key> SECRET_ACCESS_KEY +Option region. +Region to connect to. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / R2 buckets are automatically distributed across Cloudflare\[aq]s data centers for low latency. 
+ \[rs] (auto) +region> 1 +Option endpoint. +Endpoint for S3 API. +Required when using an S3 clone. +Enter a value. Press Enter to leave empty. +endpoint> https://ACCOUNT_ID.r2.cloudflarestorage.com +Edit advanced config? +y) Yes +n) No (default) +y/n> n +-------------------- +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +\f[R] +.fi +.PP +This will leave your config looking something like: +.IP +.nf +\f[C] +[r2] +type = s3 +provider = Cloudflare +access_key_id = ACCESS_KEY +secret_access_key = SECRET_ACCESS_KEY +region = auto +endpoint = https://ACCOUNT_ID.r2.cloudflarestorage.com +acl = private +\f[R] +.fi +.PP +Now run \f[C]rclone lsf r2:\f[R] to see your buckets and +\f[C]rclone lsf r2:bucket\f[R] to look within a bucket. .SS Dreamhost .PP Dreamhost DreamObjects (https://www.dreamhost.com/cloud/storage/) is an @@ -26151,6 +28674,142 @@ rclone mkdir spaces:my-new-space rclone copy /path/to/files spaces:my-new-space \f[R] .fi +.SS Huawei OBS +.PP +Object Storage Service (OBS) provides stable, secure, efficient, and +easy-to-use cloud storage that lets you store virtually any volume of +unstructured data in any format and access it from anywhere. +.PP +OBS provides an S3 interface, you can copy and modify the following +configuration and add it to your rclone configuration file. +.IP +.nf +\f[C] +[obs] +type = s3 +provider = HuaweiOBS +access_key_id = your-access-key-id +secret_access_key = your-secret-access-key +region = af-south-1 +endpoint = obs.af-south-1.myhuaweicloud.com +acl = private +\f[R] +.fi +.PP +Or you can also configure via the interactive command line: +.IP +.nf +\f[C] +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> obs +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. 
+[snip] + 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi + \[rs] (s3) +[snip] +Storage> 5 +Option provider. +Choose your S3 provider. +Choose a number from below, or type in your own value. +Press Enter to leave empty. +[snip] + 9 / Huawei Object Storage Service + \[rs] (HuaweiOBS) +[snip] +provider> 9 +Option env_auth. +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Choose a number from below, or type in your own boolean value (true or false). +Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \[rs] (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \[rs] (true) +env_auth> 1 +Option access_key_id. +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +access_key_id> your-access-key-id +Option secret_access_key. +AWS Secret Access Key (password). +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +secret_access_key> your-secret-access-key +Option region. +Region to connect to. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / AF-Johannesburg + \[rs] (af-south-1) + 2 / AP-Bangkok + \[rs] (ap-southeast-2) +[snip] +region> 1 +Option endpoint. +Endpoint for OBS API. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / AF-Johannesburg + \[rs] (obs.af-south-1.myhuaweicloud.com) + 2 / AP-Bangkok + \[rs] (obs.ap-southeast-2.myhuaweicloud.com) +[snip] +endpoint> 1 +Option acl. +Canned ACL used when creating buckets and storing or copying objects. 
+This ACL is used for creating objects and if bucket_acl isn\[aq]t set, for creating buckets too. +For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl +Note that this ACL is applied when server-side copying objects as S3 +doesn\[aq]t copy the ACL from the source but rather writes a fresh one. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \[rs] (private) +[snip] +acl> 1 +Edit advanced config? +y) Yes +n) No (default) +y/n> +-------------------- +[obs] +type = s3 +provider = HuaweiOBS +access_key_id = your-access-key-id +secret_access_key = your-secret-access-key +region = af-south-1 +endpoint = obs.af-south-1.myhuaweicloud.com +acl = private +-------------------- +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +Current remotes: + +Name Type +==== ==== +obs s3 + +e) Edit existing remote +n) New remote +d) Delete remote +r) Rename remote +c) Copy remote +s) Set configuration password +q) Quit config +e/n/d/r/c/s/q> q +\f[R] +.fi .SS IBM COS (S3) .PP Information stored with IBM Cloud Object Storage is encrypted and @@ -26192,12 +28851,12 @@ Choose a number from below, or type in your own value \[rs] \[dq]alias\[dq] 2 / Amazon Drive \[rs] \[dq]amazon cloud drive\[dq] - 3 / Amazon S3 Complaint Storage Providers (Dreamhost, Ceph, Minio, IBM COS) + 3 / Amazon S3 Complaint Storage Providers (Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio, IBM COS) \[rs] \[dq]s3\[dq] 4 / Backblaze B2 \[rs] \[dq]b2\[dq] [snip] - 23 / http Connection + 23 / HTTP \[rs] \[dq]http\[dq] Storage> 3 \f[R] @@ -26365,6 +29024,122 @@ Execute rclone commands rclone delete IBM-COS-XREGION:newbucket/file.txt \f[R] .fi +.SS IDrive e2 +.PP +Here is an example of making an IDrive e2 (https://www.idrive.com/e2/) +configuration. 
+First run: +.IP +.nf +\f[C] +rclone config +\f[R] +.fi +.PP +This will guide you through an interactive setup process. +.IP +.nf +\f[C] +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n + +Enter name for new remote. +name> e2 + +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. +[snip] +XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi + \[rs] (s3) +[snip] +Storage> s3 + +Option provider. +Choose your S3 provider. +Choose a number from below, or type in your own value. +Press Enter to leave empty. +[snip] +XX / IDrive e2 + \[rs] (IDrive) +[snip] +provider> IDrive + +Option env_auth. +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Choose a number from below, or type in your own boolean value (true or false). +Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \[rs] (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \[rs] (true) +env_auth> + +Option access_key_id. +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +access_key_id> YOUR_ACCESS_KEY + +Option secret_access_key. +AWS Secret Access Key (password). +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +secret_access_key> YOUR_SECRET_KEY + +Option acl. +Canned ACL used when creating buckets and storing or copying objects. +This ACL is used for creating objects and if bucket_acl isn\[aq]t set, for creating buckets too. 
+For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl +Note that this ACL is applied when server-side copying objects as S3 +doesn\[aq]t copy the ACL from the source but rather writes a fresh one. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \[rs] (private) + / Owner gets FULL_CONTROL. + 2 | The AllUsers group gets READ access. + \[rs] (public-read) + / Owner gets FULL_CONTROL. + 3 | The AllUsers group gets READ and WRITE access. + | Granting this on a bucket is generally not recommended. + \[rs] (public-read-write) + / Owner gets FULL_CONTROL. + 4 | The AuthenticatedUsers group gets READ access. + \[rs] (authenticated-read) + / Object owner gets FULL_CONTROL. + 5 | Bucket owner gets READ access. + | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. + \[rs] (bucket-owner-read) + / Both the object owner and the bucket owner get FULL_CONTROL over the object. + 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. + \[rs] (bucket-owner-full-control) +acl> + +Edit advanced config? +y) Yes +n) No (default) +y/n> + +Configuration complete. +Options: +- type: s3 +- provider: IDrive +- access_key_id: YOUR_ACCESS_KEY +- secret_access_key: YOUR_SECRET_KEY +- endpoint: q9d9.la12.idrivee2-5.com +Keep this \[dq]e2\[dq] remote? +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +\f[R] +.fi .SS Minio .PP Minio (https://minio.io/) is an object storage server built for cloud @@ -26502,6 +29277,15 @@ server_side_encryption = storage_class = \f[R] .fi +.PP +C14 Cold Storage (https://www.online.net/en/storage/c14-cold-storage) is +the low-cost S3 Glacier alternative from Scaleway and it works the same +way as on S3 by accepting the \[dq]GLACIER\[dq] \f[C]storage_class\f[R]. 
+So you can configure your remote with the
+\f[C]storage_class = GLACIER\f[R] option to upload directly to C14.
+Don\[aq]t forget that in this state you can\[aq]t read files back;
+you will need to restore them to the \[dq]STANDARD\[dq] storage_class
+first before being able to read them (see the \[dq]restore\[dq] section
+above).
.SS Seagate Lyve Cloud
.PP
Seagate Lyve
@@ -26533,7 +29317,7 @@ Choose \f[C]s3\f[R] backend
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
-XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS
+XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS
 \[rs] (s3)
[snip]
Storage> s3
@@ -26750,7 +29534,7 @@ name> wasabi
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
-XX / Amazon S3 (also Dreamhost, Ceph, Minio)
+XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio)
 \[rs] \[dq]s3\[dq]
[snip]
Storage> s3
@@ -26872,7 +29656,7 @@ Type of storage to configure.
Enter a string value. Press Enter for the default (\[dq]\[dq]).
Choose a number from below, or type in your own value
[snip]
- 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, and Tencent COS
+ 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Minio, and Tencent COS
 \[rs] \[dq]s3\[dq]
[snip]
Storage> s3
@@ -26962,6 +29746,378 @@ d) Delete this remote
y/e/d> y
\f[R]
.fi
+.SS China Mobile Ecloud Elastic Object Storage (EOS)
+.PP
+Here is an example of making a China Mobile Ecloud Elastic Object
+Storage (EOS) (https://ecloud.10086.cn/home/product-introduction/eos/)
+configuration.
+First run: +.IP +.nf +\f[C] +rclone config +\f[R] +.fi +.PP +This will guide you through an interactive setup process. +.IP +.nf +\f[C] +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> ChinaMobile +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. + ... + 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS + \[rs] (s3) + ... +Storage> s3 +Option provider. +Choose your S3 provider. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + ... + 4 / China Mobile Ecloud Elastic Object Storage (EOS) + \[rs] (ChinaMobile) + ... +provider> ChinaMobile +Option env_auth. +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Choose a number from below, or type in your own boolean value (true or false). +Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \[rs] (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \[rs] (true) +env_auth> +Option access_key_id. +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +access_key_id> accesskeyid +Option secret_access_key. +AWS Secret Access Key (password). +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +secret_access_key> secretaccesskey +Option endpoint. +Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + / The default endpoint - a good choice if you are unsure. 
+ 1 | East China (Suzhou)
+ \[rs] (eos-wuxi-1.cmecloud.cn)
+ 2 / East China (Jinan)
+ \[rs] (eos-jinan-1.cmecloud.cn)
+ 3 / East China (Hangzhou)
+ \[rs] (eos-ningbo-1.cmecloud.cn)
+ 4 / East China (Shanghai-1)
+ \[rs] (eos-shanghai-1.cmecloud.cn)
+ 5 / Central China (Zhengzhou)
+ \[rs] (eos-zhengzhou-1.cmecloud.cn)
+ 6 / Central China (Changsha-1)
+ \[rs] (eos-hunan-1.cmecloud.cn)
+ 7 / Central China (Changsha-2)
+ \[rs] (eos-zhuzhou-1.cmecloud.cn)
+ 8 / South China (Guangzhou-2)
+ \[rs] (eos-guangzhou-1.cmecloud.cn)
+ 9 / South China (Guangzhou-3)
+ \[rs] (eos-dongguan-1.cmecloud.cn)
+10 / North China (Beijing-1)
+ \[rs] (eos-beijing-1.cmecloud.cn)
+11 / North China (Beijing-2)
+ \[rs] (eos-beijing-2.cmecloud.cn)
+12 / North China (Beijing-3)
+ \[rs] (eos-beijing-4.cmecloud.cn)
+13 / North China (Huhehaote)
+ \[rs] (eos-huhehaote-1.cmecloud.cn)
+14 / Southwest China (Chengdu)
+ \[rs] (eos-chengdu-1.cmecloud.cn)
+15 / Southwest China (Chongqing)
+ \[rs] (eos-chongqing-1.cmecloud.cn)
+16 / Southwest China (Guiyang)
+ \[rs] (eos-guiyang-1.cmecloud.cn)
+17 / Northwest China (Xian)
+ \[rs] (eos-xian-1.cmecloud.cn)
+18 / Yunnan China (Kunming)
+ \[rs] (eos-yunnan.cmecloud.cn)
+19 / Yunnan China (Kunming-2)
+ \[rs] (eos-yunnan-2.cmecloud.cn)
+20 / Tianjin China (Tianjin)
+ \[rs] (eos-tianjin-1.cmecloud.cn)
+21 / Jilin China (Changchun)
+ \[rs] (eos-jilin-1.cmecloud.cn)
+22 / Hubei China (Xiangyan)
+ \[rs] (eos-hubei-1.cmecloud.cn)
+23 / Jiangxi China (Nanchang)
+ \[rs] (eos-jiangxi-1.cmecloud.cn)
+24 / Gansu China (Lanzhou)
+ \[rs] (eos-gansu-1.cmecloud.cn)
+25 / Shanxi China (Taiyuan)
+ \[rs] (eos-shanxi-1.cmecloud.cn)
+26 / Liaoning China (Shenyang)
+ \[rs] (eos-liaoning-1.cmecloud.cn)
+27 / Hebei China (Shijiazhuang)
+ \[rs] (eos-hebei-1.cmecloud.cn)
+28 / Fujian China (Xiamen)
+ \[rs] (eos-fujian-1.cmecloud.cn)
+29 / Guangxi China (Nanning)
+ \[rs] (eos-guangxi-1.cmecloud.cn)
+30 / Anhui China (Huainan)
+ \[rs] (eos-anhui-1.cmecloud.cn)
+endpoint> 1
+Option location_constraint.
+Location constraint - must match endpoint.
+Used when creating buckets only.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / East China (Suzhou)
+ \[rs] (wuxi1)
+ 2 / East China (Jinan)
+ \[rs] (jinan1)
+ 3 / East China (Hangzhou)
+ \[rs] (ningbo1)
+ 4 / East China (Shanghai-1)
+ \[rs] (shanghai1)
+ 5 / Central China (Zhengzhou)
+ \[rs] (zhengzhou1)
+ 6 / Central China (Changsha-1)
+ \[rs] (hunan1)
+ 7 / Central China (Changsha-2)
+ \[rs] (zhuzhou1)
+ 8 / South China (Guangzhou-2)
+ \[rs] (guangzhou1)
+ 9 / South China (Guangzhou-3)
+ \[rs] (dongguan1)
+10 / North China (Beijing-1)
+ \[rs] (beijing1)
+11 / North China (Beijing-2)
+ \[rs] (beijing2)
+12 / North China (Beijing-3)
+ \[rs] (beijing4)
+13 / North China (Huhehaote)
+ \[rs] (huhehaote1)
+14 / Southwest China (Chengdu)
+ \[rs] (chengdu1)
+15 / Southwest China (Chongqing)
+ \[rs] (chongqing1)
+16 / Southwest China (Guiyang)
+ \[rs] (guiyang1)
+17 / Northwest China (Xian)
+ \[rs] (xian1)
+18 / Yunnan China (Kunming)
+ \[rs] (yunnan)
+19 / Yunnan China (Kunming-2)
+ \[rs] (yunnan2)
+20 / Tianjin China (Tianjin)
+ \[rs] (tianjin1)
+21 / Jilin China (Changchun)
+ \[rs] (jilin1)
+22 / Hubei China (Xiangyan)
+ \[rs] (hubei1)
+23 / Jiangxi China (Nanchang)
+ \[rs] (jiangxi1)
+24 / Gansu China (Lanzhou)
+ \[rs] (gansu1)
+25 / Shanxi China (Taiyuan)
+ \[rs] (shanxi1)
+26 / Liaoning China (Shenyang)
+ \[rs] (liaoning1)
+27 / Hebei China (Shijiazhuang)
+ \[rs] (hebei1)
+28 / Fujian China (Xiamen)
+ \[rs] (fujian1)
+29 / Guangxi China (Nanning)
+ \[rs] (guangxi1)
+30 / Anhui China (Huainan)
+ \[rs] (anhui1)
+location_constraint> 1
+Option acl.
+Canned ACL used when creating buckets and storing or copying objects.
+This ACL is used for creating objects and if bucket_acl isn\[aq]t set, for creating buckets too.
+For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+Note that this ACL is applied when server-side copying objects as S3
+doesn\[aq]t copy the ACL from the source but rather writes a fresh one.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ / Owner gets FULL_CONTROL.
+ 1 | No one else has access rights (default).
+ \[rs] (private)
+ / Owner gets FULL_CONTROL.
+ 2 | The AllUsers group gets READ access.
+ \[rs] (public-read)
+ / Owner gets FULL_CONTROL.
+ 3 | The AllUsers group gets READ and WRITE access.
+ | Granting this on a bucket is generally not recommended.
+ \[rs] (public-read-write)
+ / Owner gets FULL_CONTROL.
+ 4 | The AuthenticatedUsers group gets READ access.
+ \[rs] (authenticated-read)
+ / Object owner gets FULL_CONTROL.
+ 5 | Bucket owner gets READ access.
+ | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
+ \[rs] (bucket-owner-read)
+ / Both the object owner and the bucket owner get FULL_CONTROL over the object.
+ 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
+ \[rs] (bucket-owner-full-control)
+acl> private
+Option server_side_encryption.
+The server-side encryption algorithm used when storing this object in S3.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / None
+ \[rs] ()
+ 2 / AES256
+ \[rs] (AES256)
+server_side_encryption>
+Option storage_class.
+The storage class to use when storing new objects in ChinaMobile.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Default
+ \[rs] ()
+ 2 / Standard storage class
+ \[rs] (STANDARD)
+ 3 / Archive storage mode
+ \[rs] (GLACIER)
+ 4 / Infrequent access storage mode
+ \[rs] (STANDARD_IA)
+storage_class>
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+--------------------
+[ChinaMobile]
+type = s3
+provider = ChinaMobile
+access_key_id = accesskeyid
+secret_access_key = secretaccesskey
+endpoint = eos-wuxi-1.cmecloud.cn
+location_constraint = wuxi1
+acl = private
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.SS ArvanCloud
+.PP
+ArvanCloud (https://www.arvancloud.com/en/products/cloud-storage)
+Object Storage goes beyond limited traditional file storage.
+It gives you access to backup and archived files and allows sharing.
+Files such as profile images, images sent by users, or scanned
+documents can be stored securely and easily in its Object Storage
+service.
+.PP
+ArvanCloud provides an S3 interface which can be configured for use with
+rclone like this.
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+n/s> n
+name> ArvanCloud
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio)
+ \[rs] \[dq]s3\[dq]
+[snip]
+Storage> s3
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own value
+ 1 / Enter AWS credentials in the next step
+ \[rs] \[dq]false\[dq]
+ 2 / Get AWS credentials from the environment (env vars or IAM)
+ \[rs] \[dq]true\[dq]
+env_auth> 1
+AWS Access Key ID - leave blank for anonymous access or runtime credentials.
+access_key_id> YOURACCESSKEY
+AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
+secret_access_key> YOURSECRETACCESSKEY
+Region to connect to.
+Choose a number from below, or type in your own value
+ / The default endpoint - a good choice if you are unsure.
+ 1 | US Region, Northern Virginia, or Pacific Northwest.
+ | Leave location constraint empty. + \[rs] \[dq]us-east-1\[dq] +[snip] +region> +Endpoint for S3 API. +Leave blank if using ArvanCloud to use the default endpoint for the region. +Specify if using an S3 clone such as Ceph. +endpoint> s3.arvanstorage.com +Location constraint - must be set to match the Region. Used when creating buckets only. +Choose a number from below, or type in your own value + 1 / Empty for Iran-Tehran Region. + \[rs] \[dq]\[dq] +[snip] +location_constraint> +Canned ACL used when creating buckets and/or storing objects in S3. +For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl +Choose a number from below, or type in your own value + 1 / Owner gets FULL_CONTROL. No one else has access rights (default). + \[rs] \[dq]private\[dq] +[snip] +acl> +The server-side encryption algorithm used when storing this object in S3. +Choose a number from below, or type in your own value + 1 / None + \[rs] \[dq]\[dq] + 2 / AES256 + \[rs] \[dq]AES256\[dq] +server_side_encryption> +The storage class to use when storing objects in S3. +Choose a number from below, or type in your own value + 1 / Default + \[rs] \[dq]\[dq] + 2 / Standard storage class + \[rs] \[dq]STANDARD\[dq] +storage_class> +Remote config +-------------------- +[ArvanCloud] +env_auth = false +access_key_id = YOURACCESSKEY +secret_access_key = YOURSECRETACCESSKEY +region = ir-thr-at1 +endpoint = s3.arvanstorage.com +location_constraint = +acl = +server_side_encryption = +storage_class = +-------------------- +y) Yes this is OK +e) Edit this remote +d) Delete this remote +y/e/d> y +\f[R] +.fi +.PP +This will leave the config file looking like this. 
+.IP +.nf +\f[C] +[ArvanCloud] +type = s3 +provider = ArvanCloud +env_auth = false +access_key_id = YOURACCESSKEY +secret_access_key = YOURSECRETACCESSKEY +region = +endpoint = s3.arvanstorage.com +location_constraint = +acl = +server_side_encryption = +storage_class = +\f[R] +.fi .SS Tencent COS .PP Tencent Cloud Object Storage @@ -27004,7 +30160,7 @@ Choose a number from below, or type in your own value \[rs] \[dq]alias\[dq] 3 / Amazon Drive \[rs] \[dq]amazon cloud drive\[dq] - 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, and Tencent COS + 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Minio, and Tencent COS \[rs] \[dq]s3\[dq] [snip] Storage> s3 @@ -27246,7 +30402,7 @@ rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member of an rclone union remote. .PP See List of backends that do not support rclone -about (https://rclone.org/overview/#optional-features) See rclone +about (https://rclone.org/overview/#optional-features) and rclone about (https://rclone.org/commands/rclone_about/) .SH Backblaze B2 .PP @@ -27463,6 +30619,12 @@ instead of hiding it. Old versions of files, where available, are visible using the \f[C]--b2-versions\f[R] flag. .PP +It is also possible to view a bucket as it was at a certain point in +time, using the \f[C]--b2-version-at\f[R] flag. +This will show the file versions as they were at that time, showing +files that have been deleted afterwards, and hiding files that were +created since. +.PP If you wish to remove all the old versions then you can use the \f[C]rclone cleanup remote:bucket\f[R] command which will delete all the old versions of files, leaving the current ones intact. 
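+.PP
+For example, to list a bucket as it was at a particular date and then
+remove all the old versions, something like this can be used
+(hypothetical remote and bucket names):
+.IP
+.nf
+\f[C]
+rclone ls --b2-version-at 2022-03-26 b2:bucket
+rclone cleanup b2:bucket
+\f[R]
+.fi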
@@ -27636,7 +30798,7 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx .fi .SS Standard options .PP -Here are the standard options specific to b2 (Backblaze B2). +Here are the Standard options specific to b2 (Backblaze B2). .SS --b2-account .PP Account ID or Application Key ID. @@ -27678,7 +30840,7 @@ Type: bool Default: false .SS Advanced options .PP -Here are the advanced options specific to b2 (Backblaze B2). +Here are the Advanced options specific to b2 (Backblaze B2). .SS --b2-endpoint .PP Endpoint for the service. @@ -27737,6 +30899,22 @@ Env Var: RCLONE_B2_VERSIONS Type: bool .IP \[bu] 2 Default: false +.SS --b2-version-at +.PP +Show file versions as they were at the specified time. +.PP +Note that when using this no file write operations are permitted, so you +can\[aq]t upload files or delete them. +.PP +Properties: +.IP \[bu] 2 +Config: version_at +.IP \[bu] 2 +Env Var: RCLONE_B2_VERSION_AT +.IP \[bu] 2 +Type: Time +.IP \[bu] 2 +Default: off .SS --b2-upload-cutoff .PP Cutoff for switching to chunked upload. @@ -27912,7 +31090,7 @@ rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member of an rclone union remote. .PP See List of backends that do not support rclone -about (https://rclone.org/overview/#optional-features) See rclone +about (https://rclone.org/overview/#optional-features) and rclone about (https://rclone.org/commands/rclone_about/) .SH Box .PP @@ -28234,7 +31412,7 @@ you use \f[C]11xxxxxxxxx8\f[R] as the \f[C]root_folder_id\f[R] in the config. .SS Standard options .PP -Here are the standard options specific to box (Box). +Here are the Standard options specific to box (Box). .SS --box-client-id .PP OAuth Client Id. @@ -28327,7 +31505,7 @@ Rclone should act on behalf of a service account. .RE .SS Advanced options .PP -Here are the advanced options specific to box (Box). +Here are the Advanced options specific to box (Box). .SS --box-token .PP OAuth Access Token as a JSON blob. 
@@ -28469,7 +31647,7 @@ rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member of an rclone union remote. .PP See List of backends that do not support rclone -about (https://rclone.org/overview/#optional-features) See rclone +about (https://rclone.org/overview/#optional-features) and rclone about (https://rclone.org/commands/rclone_about/) .SH Cache (DEPRECATED) .PP @@ -28821,7 +31999,7 @@ Params: - \f[B]remote\f[R] = path to remote \f[B](required)\f[R] - \f[I](optional, false by default)\f[R] .SS Standard options .PP -Here are the standard options specific to cache (Cache a remote). +Here are the Standard options specific to cache (Cache a remote). .SS --cache-remote .PP Remote to cache. @@ -29000,7 +32178,7 @@ Examples: .RE .SS Advanced options .PP -Here are the advanced options specific to cache (Cache a remote). +Here are the Advanced options specific to cache (Cache a remote). .SS --cache-plex-token .PP The plex token for authentication - auto set normally. @@ -29286,9 +32464,8 @@ rclone backend COMMAND remote: .PP The help below will explain what arguments each command takes. .PP -See the \[dq]rclone backend\[dq] -command (https://rclone.org/commands/rclone_backend/) for more info on -how to pass options and arguments. +See the backend (https://rclone.org/commands/rclone_backend/) command +for more info on how to pass options and arguments. .PP These can be run on a running backend using the rc command backend/command (https://rclone.org/rc/#backend-command). @@ -29647,7 +32824,7 @@ Changing \f[C]transactions\f[R] is dangerous and requires explicit migration. .SS Standard options .PP -Here are the standard options specific to chunker (Transparently +Here are the Standard options specific to chunker (Transparently chunk/split large files). .SS --chunker-remote .PP @@ -29746,7 +32923,7 @@ Similar to \[dq]md5quick\[dq] but prefers SHA1 over MD5. 
.RE .SS Advanced options .PP -Here are the advanced options specific to chunker (Transparently +Here are the Advanced options specific to chunker (Transparently chunk/split large files). .SS --chunker-name-format .PP @@ -30151,7 +33328,7 @@ replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t be used in JSON strings. .SS Standard options .PP -Here are the standard options specific to sharefile (Citrix Sharefile). +Here are the Standard options specific to sharefile (Citrix Sharefile). .SS --sharefile-root-folder-id .PP ID of the root folder. @@ -30206,7 +33383,7 @@ connectors. .RE .SS Advanced options .PP -Here are the advanced options specific to sharefile (Citrix Sharefile). +Here are the Advanced options specific to sharefile (Citrix Sharefile). .SS --sharefile-upload-cutoff .PP Cutoff for switching to multipart upload. @@ -30286,7 +33463,7 @@ rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member of an rclone union remote. .PP See List of backends that do not support rclone -about (https://rclone.org/overview/#optional-features) See rclone +about (https://rclone.org/overview/#optional-features) and rclone about (https://rclone.org/commands/rclone_about/) .SH Crypt .PP @@ -30753,7 +33930,7 @@ crypted remote instead of \f[C]rclone check\f[R] which can\[aq]t check the checksums properly. .SS Standard options .PP -Here are the standard options specific to crypt (Encrypt/Decrypt a +Here are the Standard options specific to crypt (Encrypt/Decrypt a remote). .SS --crypt-remote .PP @@ -30880,7 +34057,7 @@ Type: string Required: false .SS Advanced options .PP -Here are the advanced options specific to crypt (Encrypt/Decrypt a +Here are the Advanced options specific to crypt (Encrypt/Decrypt a remote). .SS --crypt-server-side-across-configs .PP @@ -31001,6 +34178,11 @@ Unicode codepoint instead of UTF-8 byte length. Onedrive) .RE .RE +.SS Metadata +.PP +Any metadata supported by the underlying remote is read and written. 
+.PP +See the metadata (https://rclone.org/docs/#metadata) docs for more info. .SS Backend commands .PP Here are the commands specific to the crypt backend. @@ -31015,9 +34197,8 @@ rclone backend COMMAND remote: .PP The help below will explain what arguments each command takes. .PP -See the \[dq]rclone backend\[dq] -command (https://rclone.org/commands/rclone_backend/) for more info on -how to pass options and arguments. +See the backend (https://rclone.org/commands/rclone_backend/) command +for more info on how to pass options and arguments. .PP These can be run on a running backend using the rc command backend/command (https://rclone.org/rc/#backend-command). @@ -31302,7 +34483,7 @@ The file names should not be changed by anything other than the rclone compression backend. .SS Standard options .PP -Here are the standard options specific to compress (Compress a remote). +Here are the Standard options specific to compress (Compress a remote). .SS --compress-remote .PP Remote to compress. @@ -31341,7 +34522,7 @@ Standard gzip compression with fastest parameters. .RE .SS Advanced options .PP -Here are the advanced options specific to compress (Compress a remote). +Here are the Advanced options specific to compress (Compress a remote). .SS --compress-level .PP GZIP compression level (-2 to 9). @@ -31381,6 +34562,196 @@ Env Var: RCLONE_COMPRESS_RAM_CACHE_LIMIT Type: SizeSuffix .IP \[bu] 2 Default: 20Mi +.SS Metadata +.PP +Any metadata supported by the underlying remote is read and written. +.PP +See the metadata (https://rclone.org/docs/#metadata) docs for more info. +.SH Combine +.PP +The \f[C]combine\f[R] backend joins remotes together into a single +directory tree. 
+.PP
+For example you might have a remote for images on one provider:
+.IP
+.nf
+\f[C]
+$ rclone tree s3:imagesbucket
+/
+\[u251C]\[u2500]\[u2500] image1.jpg
+\[u2514]\[u2500]\[u2500] image2.jpg
+\f[R]
+.fi
+.PP
+And a remote for files on another:
+.IP
+.nf
+\f[C]
+$ rclone tree drive:important/files
+/
+\[u251C]\[u2500]\[u2500] file1.txt
+\[u2514]\[u2500]\[u2500] file2.txt
+\f[R]
+.fi
+.PP
+The \f[C]combine\f[R] backend can join these together into a synthetic
+directory structure like this:
+.IP
+.nf
+\f[C]
+$ rclone tree combined:
+/
+\[u251C]\[u2500]\[u2500] files
+\[br] \[u251C]\[u2500]\[u2500] file1.txt
+\[br] \[u2514]\[u2500]\[u2500] file2.txt
+\[u2514]\[u2500]\[u2500] images
+ \[u251C]\[u2500]\[u2500] image1.jpg
+ \[u2514]\[u2500]\[u2500] image2.jpg
+\f[R]
+.fi
+.PP
+You\[aq]d do this by specifying an \f[C]upstreams\f[R] parameter in the
+config like this
+.IP
+.nf
+\f[C]
+upstreams = images=s3:imagesbucket files=drive:important/files
+\f[R]
+.fi
+.PP
+During the initial setup with \f[C]rclone config\f[R] you will specify
+the upstream remotes as a space-separated list.
+The upstream remotes can be either local paths or other remotes.
+.SS Configuration
+.PP
+Here is an example of how to make a combine called \f[C]remote\f[R] for
+the example above.
+First run:
+.IP
+.nf
+\f[C]
+ rclone config
+\f[R]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+\&...
+XX / Combine several remotes into one
+ \[rs] (combine)
+\&...
+Storage> combine
+Option upstreams.
+Upstreams for combining
+These should be in the form
+ dir=remote:path dir2=remote2:path
+Where before the = is specified the root directory and after is the remote to
+put there.
+Embedded spaces can be added using quotes
+ \[dq]dir=remote:path with space\[dq] \[dq]dir2=remote2:path with space\[dq]
+Enter a fs.SpaceSepList value.
+upstreams> images=s3:imagesbucket files=drive:important/files
+--------------------
+[remote]
+type = combine
+upstreams = images=s3:imagesbucket files=drive:important/files
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.SS Configuring for Google Drive Shared Drives
+.PP
+Rclone has a convenience feature for making a combine backend for all
+the shared drives you have access to.
+.PP
+Assuming your main (non shared drive) Google drive remote is called
+\f[C]drive:\f[R] you would run
+.IP
+.nf
+\f[C]
+rclone backend -o config drives drive:
+\f[R]
+.fi
+.PP
+This would produce something like this:
+.IP
+.nf
+\f[C]
+[My Drive]
+type = alias
+remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
+
+[Test Drive]
+type = alias
+remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
+
+[AllDrives]
+type = combine
+upstreams = \[dq]My Drive=My Drive:\[dq] \[dq]Test Drive=Test Drive:\[dq]
+\f[R]
+.fi
+.PP
+If you then add that config to your config file (find it with
+\f[C]rclone config file\f[R]) then you can access all the shared drives
+in one place with the \f[C]AllDrives:\f[R] remote.
+.PP
+See the Google Drive docs (https://rclone.org/drive/#drives) for full
+info.
+.SS Standard options
+.PP
+Here are the Standard options specific to combine (Combine several
+remotes into one).
+.SS --combine-upstreams
+.PP
+Upstreams for combining
+.PP
+These should be in the form
+.IP
+.nf
+\f[C]
+dir=remote:path dir2=remote2:path
+\f[R]
+.fi
+.PP
+Where before the = is specified the root directory and after is the
+remote to put there.
+.PP +Embedded spaces can be added using quotes +.IP +.nf +\f[C] +\[dq]dir=remote:path with space\[dq] \[dq]dir2=remote2:path with space\[dq] +\f[R] +.fi +.PP +Properties: +.IP \[bu] 2 +Config: upstreams +.IP \[bu] 2 +Env Var: RCLONE_COMBINE_UPSTREAMS +.IP \[bu] 2 +Type: SpaceSepList +.IP \[bu] 2 +Default: +.SS Metadata +.PP +Any metadata supported by the underlying remote is read and written. +.PP +See the metadata (https://rclone.org/docs/#metadata) docs for more info. .SH Dropbox .PP Paths are specified as \f[C]remote:path\f[R] @@ -31629,7 +35000,7 @@ Note that there may be a pause when quitting rclone while rclone finishes up the last batch using this mode. .SS Standard options .PP -Here are the standard options specific to dropbox (Dropbox). +Here are the Standard options specific to dropbox (Dropbox). .SS --dropbox-client-id .PP OAuth Client Id. @@ -31662,7 +35033,7 @@ Type: string Required: false .SS Advanced options .PP -Here are the advanced options specific to dropbox (Dropbox). +Here are the Advanced options specific to dropbox (Dropbox). .SS --dropbox-token .PP OAuth Access Token as a JSON blob. @@ -32160,7 +35531,7 @@ $ rclone lsf --dirs-only -Fip --csv filefabric: The ID for \[dq]S3 Storage\[dq] would be \f[C]120673761\f[R]. .SS Standard options .PP -Here are the standard options specific to filefabric (Enterprise File +Here are the Standard options specific to filefabric (Enterprise File Fabric). .SS --filefabric-url .PP @@ -32239,7 +35610,7 @@ Type: string Required: false .SS Advanced options .PP -Here are the advanced options specific to filefabric (Enterprise File +Here are the Advanced options specific to filefabric (Enterprise File Fabric). .SS --filefabric-token .PP @@ -32348,7 +35719,7 @@ Type of storage to configure. Enter a string value. Press Enter for the default (\[dq]\[dq]). Choose a number from below, or type in your own value [snip] -XX / FTP Connection +XX / FTP \[rs] \[dq]ftp\[dq] [snip] Storage> ftp @@ -32498,7 +35869,7 @@ VsFTPd. 
Just hit a selection number when prompted. .SS Standard options .PP -Here are the standard options specific to ftp (FTP Connection). +Here are the Standard options specific to ftp (FTP). .SS --ftp-host .PP FTP host to connect to. @@ -32595,7 +35966,7 @@ Type: bool Default: false .SS Advanced options .PP -Here are the advanced options specific to ftp (FTP Connection). +Here are the Advanced options specific to ftp (FTP). .SS --ftp-concurrency .PP Maximum number of FTP simultaneous connections, 0 for unlimited. @@ -32648,6 +36019,19 @@ Env Var: RCLONE_FTP_DISABLE_MLSD Type: bool .IP \[bu] 2 Default: false +.SS --ftp-disable-utf8 +.PP +Disable using UTF-8 even if server advertises support. +.PP +Properties: +.IP \[bu] 2 +Config: disable_utf8 +.IP \[bu] 2 +Env Var: RCLONE_FTP_DISABLE_UTF8 +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false .SS --ftp-writing-mdtm .PP Use MDTM to set modification time (VsFtpd quirk) @@ -32812,7 +36196,7 @@ rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member of an rclone union remote. .PP See List of backends that do not support rclone -about (https://rclone.org/overview/#optional-features) See rclone +about (https://rclone.org/overview/#optional-features) and rclone about (https://rclone.org/commands/rclone_about/) .PP The implementation of : \f[C]--dump headers\f[R], @@ -33195,7 +36579,7 @@ replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t be used in JSON strings. .SS Standard options .PP -Here are the standard options specific to google cloud storage (Google +Here are the Standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)). .SS --gcs-client-id .PP @@ -33734,7 +37118,7 @@ Durable reduced availability storage class .RE .SS Advanced options .PP -Here are the advanced options specific to google cloud storage (Google +Here are the Advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)). 
.SS --gcs-token
.PP
@@ -33779,6 +37163,44 @@ Env Var: RCLONE_GCS_TOKEN_URL
Type: string
.IP \[bu] 2
Required: false
+.SS --gcs-no-check-bucket
+.PP
+If set, don\[aq]t attempt to check the bucket exists or create it.
+.PP
+This can be useful when trying to minimise the number of transactions
+rclone does if you know the bucket exists already.
+.PP
+Properties:
+.IP \[bu] 2
+Config: no_check_bucket
+.IP \[bu] 2
+Env Var: RCLONE_GCS_NO_CHECK_BUCKET
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS --gcs-decompress
+.PP
+If set this will decompress gzip encoded objects.
+.PP
+It is possible to upload objects to GCS with \[dq]Content-Encoding:
+gzip\[dq] set.
+Normally rclone will download these files as compressed objects.
+.PP
+If this flag is set then rclone will decompress these files with
+\[dq]Content-Encoding: gzip\[dq] as they are received.
+This means that rclone can\[aq]t check the size and hash but the file
+contents will be decompressed.
+.PP
+Properties:
+.IP \[bu] 2
+Config: decompress
+.IP \[bu] 2
+Env Var: RCLONE_GCS_DECOMPRESS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --gcs-encoding
.PP
The encoding for the backend.
@@ -33804,7 +37226,7 @@ rclone mount or use policy \f[C]mfs\f[R] (most free space) as a
member of an rclone union remote.
.PP
See List of backends that do not support rclone
-about (https://rclone.org/overview/#optional-features) See rclone
+about (https://rclone.org/overview/#optional-features) and rclone
about (https://rclone.org/commands/rclone_about/)
.SH Google Drive
.PP
@@ -33867,8 +37289,6 @@ Choose a number from below, or type in your own value
 5 | does not allow any access to read or download file content.
 \[rs] \[dq]drive.metadata.readonly\[dq]
scope> 1
-ID of the root folder - leave blank normally. Fill in to access \[dq]Computers\[dq] folders. (see docs).
-root_folder_id>
Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
service_account_file> Remote config @@ -33979,6 +37399,7 @@ It does not allow rclone to download or upload data, or rename or delete files or directories. .SS Root folder ID .PP +This option has been moved to the advanced section. You can set the \f[C]root_folder_id\f[R] for rclone. This is the directory (identified by its \f[C]Folder ID\f[R]) that rclone considers to be the root of your drive. @@ -34495,6 +37916,13 @@ Description T} _ T{ +bmp +T}@T{ +image/bmp +T}@T{ +Windows Bitmap format +T} +T{ csv T}@T{ text/csv @@ -34502,6 +37930,13 @@ T}@T{ Standard CSV format for Spreadsheets T} T{ +doc +T}@T{ +application/msword +T}@T{ +Classic Word file +T} +T{ docx T}@T{ application/vnd.openxmlformats-officedocument.wordprocessingml.document @@ -34534,7 +37969,7 @@ json T}@T{ application/vnd.google-apps.script+json T}@T{ -JSON Text Format +JSON Text Format for Google Apps scripts T} T{ odp @@ -34572,6 +38007,13 @@ T}@T{ Adobe PDF Format T} T{ +pjpeg +T}@T{ +image/pjpeg +T}@T{ +Progressive JPEG Image +T} +T{ png T}@T{ image/png @@ -34614,6 +38056,20 @@ T}@T{ Plain Text T} T{ +wmf +T}@T{ +application/x-msmetafile +T}@T{ +Windows Meta File +T} +T{ +xls +T}@T{ +application/vnd.ms-excel +T}@T{ +Classic Excel file +T} +T{ xlsx T}@T{ application/vnd.openxmlformats-officedocument.spreadsheetml.sheet @@ -34678,7 +38134,7 @@ T} .TE .SS Standard options .PP -Here are the standard options specific to drive (Google Drive). +Here are the Standard options specific to drive (Google Drive). .SS --drive-client-id .PP Google Application Client Id Setting your own is recommended. @@ -34766,23 +38222,6 @@ Allows read-only access to file metadata but does not allow any access to read or download file content. .RE .RE -.SS --drive-root-folder-id -.PP -ID of the root folder. -Leave blank normally. -.PP -Fill in to access \[dq]Computers\[dq] folders (see docs), or for rclone -to use a non root folder as its starting point. 
-.PP
-Properties:
-.IP \[bu] 2
-Config: root_folder_id
-.IP \[bu] 2
-Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
.SS --drive-service-account-file
.PP
Service Account Credentials JSON file path.
@@ -34817,7 +38256,7 @@ Type: bool
Default: false
.SS Advanced options
.PP
-Here are the advanced options specific to drive (Google Drive).
+Here are the Advanced options specific to drive (Google Drive).
.SS --drive-token
.PP
OAuth Access Token as a JSON blob.
@@ -34861,6 +38300,23 @@ Env Var: RCLONE_DRIVE_TOKEN_URL
Type: string
.IP \[bu] 2
Required: false
+.SS --drive-root-folder-id
+.PP
+ID of the root folder.
+Leave blank normally.
+.PP
+Fill in to access \[dq]Computers\[dq] folders (see docs), or for rclone
+to use a non root folder as its starting point.
+.PP
+Properties:
+.IP \[bu] 2
+Config: root_folder_id
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
.SS --drive-service-account-credentials
.PP
Service Account Credentials JSON blob.
@@ -35410,6 +38866,40 @@ Env Var: RCLONE_DRIVE_SKIP_DANGLING_SHORTCUTS
Type: bool
.IP \[bu] 2
Default: false
+.SS --drive-resource-key
+.PP
+Resource key for accessing a link-shared file.
+.PP
+If you need to access files shared with a link like this
+.IP
+.nf
+\f[C]
+https://drive.google.com/drive/folders/XXX?resourcekey=YYY&usp=sharing
+\f[R]
+.fi
+.PP
+Then you will need to use the first part \[dq]XXX\[dq] as the
+\[dq]root_folder_id\[dq] and the second part \[dq]YYY\[dq] as the
+\[dq]resource_key\[dq] otherwise you will get 404 not found errors when
+trying to access the directory.
+.PP
+See: https://developers.google.com/drive/api/guides/resource-keys
+.PP
+This resource key requirement only applies to a subset of old files.
+.PP
+Note also that opening the folder once in the web interface (with the
+user you\[aq]ve authenticated rclone with) seems to be enough so that
+the resource key is not needed.
+.PP
+Properties:
+.IP \[bu] 2
+Config: resource_key
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_RESOURCE_KEY
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
.SS --drive-encoding
.PP
The encoding for the backend.
@@ -35440,9 +38930,8 @@ rclone backend COMMAND remote:
.PP
The help below will explain what arguments each command takes.
.PP
-See the \[dq]rclone backend\[dq]
-command (https://rclone.org/commands/rclone_backend/) for more info on
-how to pass options and arguments.
+See the backend (https://rclone.org/commands/rclone_backend/) command
+for more info on how to pass options and arguments.
.PP
These can be run on a running backend using the rc command
backend/command (https://rclone.org/rc/#backend-command).
@@ -35578,7 +39067,7 @@ This will return a JSON list of objects like this
.PP
With the -o config parameter it will output the list in a format
suitable for adding to a config file to make aliases for all the drives
-found.
+found and a combined drive.
.IP
.nf
\f[C]
@@ -35589,12 +39078,19 @@ remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
[Test Drive]
type = alias
remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
+
+[AllDrives]
+type = combine
+remote = \[dq]My Drive=My Drive:\[dq] \[dq]Test Drive=Test Drive:\[dq]
\f[R]
.fi
.PP
Adding this to the rclone config file will cause those team drives to
be accessible with the aliases shown.
-This may require manual editing of the names.
+Any illegal characters will be substituted with \[dq]_\[dq] and duplicate
+names will have numbers suffixed.
+It will also add a remote called AllDrives which shows all the shared
+drives combined into one directory tree.
.SS untrash
.PP
Untrash files and directories
@@ -35666,6 +39162,24 @@ If the destination is a drive backend then server-side copying will be
attempted if possible.
.PP
Use the -i flag to see what would be copied before copying.
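The name mangling described above for the drives -o config output (illegal characters replaced with "_", duplicate names given numeric suffixes) can be sketched like this. This is an illustrative Python helper, not rclone's actual implementation; the set of characters treated as legal and the suffix format are assumptions:

```python
import re


def config_names(drive_names):
    """Sketch of -o config name mangling: illegal characters become "_",
    and duplicate names get a numeric suffix."""
    counts = {}
    result = []
    for name in drive_names:
        # Assumed legal set: letters, digits, space, dot, dash, underscore.
        clean = re.sub(r"[^0-9A-Za-z .\-_]", "_", name)
        n = counts.get(clean, 0)
        counts[clean] = n + 1
        # The exact suffix rclone appends is not specified here; "-N" is illustrative.
        result.append(clean if n == 0 else f"{clean}-{n + 1}")
    return result
```

Under these assumptions, two shared drives both named "Team: A" would become config sections "Team_ A" and "Team_ A-2".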
+.SS exportformats +.PP +Dump the export formats for debug purposes +.IP +.nf +\f[C] +rclone backend exportformats remote: [options] [+] +\f[R] +.fi +.SS importformats +.PP +Dump the import formats for debug purposes +.IP +.nf +\f[C] +rclone backend importformats remote: [options] [+] +\f[R] +.fi .SS Limitations .PP Drive has quite a lot of rate limiting. @@ -35681,9 +39195,13 @@ You can disable server-side copies with \f[C]--disable copy\f[R] to download and upload the files if you prefer. .SS Limitations of Google Docs .PP -Google docs will appear as size -1 in \f[C]rclone ls\f[R] and as size 0 -in anything which uses the VFS layer, e.g. -\f[C]rclone mount\f[R], \f[C]rclone serve\f[R]. +Google docs will appear as size -1 in \f[C]rclone ls\f[R], +\f[C]rclone ncdu\f[R] etc, and as size 0 in anything which uses the VFS +layer, e.g. +\f[C]rclone mount\f[R] and \f[C]rclone serve\f[R]. +When calculating directory totals, e.g. +in \f[C]rclone size\f[R] and \f[C]rclone ncdu\f[R], they will be counted +in as empty files. .PP This is because rclone can\[aq]t find out the size of the Google docs without downloading them. @@ -35746,21 +39264,21 @@ recommended to stay under that number as if you use more than that, it will cause rclone to rate limit and make things slower. .PP Here is how to create your own Google Drive client ID for rclone: -.IP "1." 3 +.IP " 1." 4 Log into the Google API Console (https://console.developers.google.com/) with your Google account. It doesn\[aq]t matter what Google account you use. (It need not be the same account as the Google Drive you want to access) -.IP "2." 3 +.IP " 2." 4 Select a project or create a new project. -.IP "3." 3 +.IP " 3." 4 Under \[dq]ENABLE APIS AND SERVICES\[dq] search for \[dq]Drive\[dq], and enable the \[dq]Google Drive API\[dq]. -.IP "4." 3 +.IP " 4." 4 Click \[dq]Credentials\[dq] in the left-side panel (not \[dq]Create credentials\[dq], which opens the wizard), then \[dq]Create credentials\[dq] -.IP "5." 3 +.IP " 5." 
4 If you already configured an \[dq]Oauth Consent Screen\[dq], then skip to the next step; if not, click on \[dq]CONFIGURE CONSENT SCREEN\[dq] button (near the top right corner of the right panel), then select @@ -35771,10 +39289,12 @@ enter an \[dq]Application name\[dq] (\[dq]rclone\[dq] is OK); enter \[dq]Save\[dq] (all other data is optional). Click again on \[dq]Credentials\[dq] on the left panel to go back to the \[dq]Credentials\[dq] screen. +.RS 4 .PP (PS: if you are a GSuite user, you could also select \[dq]Internal\[dq] -instead of \[dq]External\[dq] above, but this has not been -tested/documented so far). +instead of \[dq]External\[dq] above, but this will restrict API use to +Google Workspace users in your organisation). +.RE .IP " 6." 4 Click on the \[dq]+ CREATE CREDENTIALS\[dq] button at the top of the screen, then select \[dq]OAuth client ID\[dq]. @@ -35785,13 +39305,20 @@ Choose an application type of \[dq]Desktop app\[dq] and click .IP " 8." 4 It will show you a client ID and client secret. Make a note of these. +.RS 4 +.PP +(If you selected \[dq]External\[dq] at Step 5 continue to \[dq]Publish +App\[dq] in the Steps 9 and 10. +If you chose \[dq]Internal\[dq] you don\[aq]t need to publish and can +skip straight to Step 11.) +.RE .IP " 9." 4 Go to \[dq]Oauth consent screen\[dq] and press \[dq]Publish App\[dq] .IP "10." 4 -Provide the noted client ID and client secret to rclone. -.IP "11." 4 Click \[dq]OAuth consent screen\[dq], then click \[dq]PUBLISH APP\[dq] button and confirm, or add your account under \[dq]Test users\[dq]. +.IP "11." 4 +Provide the noted client ID and client secret to rclone. .PP Be aware that, due to the \[dq]enhanced security\[dq] recently introduced by Google, you are theoretically expected to \[dq]submit your @@ -36080,7 +39607,7 @@ you. This is similar to the Sharing tab in the Google Photos web interface. .SS Standard options .PP -Here are the standard options specific to google photos (Google Photos). 
+Here are the Standard options specific to google photos (Google Photos). .SS --gphotos-client-id .PP OAuth Client Id. @@ -36129,7 +39656,7 @@ Type: bool Default: false .SS Advanced options .PP -Here are the advanced options specific to google photos (Google Photos). +Here are the Advanced options specific to google photos (Google Photos). .SS --gphotos-token .PP OAuth Access Token as a JSON blob. @@ -36544,7 +40071,7 @@ the files. .SS Configuration reference .SS Standard options .PP -Here are the standard options specific to hasher (Better checksums for +Here are the Standard options specific to hasher (Better checksums for other remotes). .SS --hasher-remote .PP @@ -36589,7 +40116,7 @@ Type: Duration Default: off .SS Advanced options .PP -Here are the advanced options specific to hasher (Better checksums for +Here are the Advanced options specific to hasher (Better checksums for other remotes). .SS --hasher-auto-size .PP @@ -36605,6 +40132,11 @@ Env Var: RCLONE_HASHER_AUTO_SIZE Type: SizeSuffix .IP \[bu] 2 Default: 0 +.SS Metadata +.PP +Any metadata supported by the underlying remote is read and written. +.PP +See the metadata (https://rclone.org/docs/#metadata) docs for more info. .SS Backend commands .PP Here are the commands specific to the hasher backend. @@ -36619,9 +40151,8 @@ rclone backend COMMAND remote: .PP The help below will explain what arguments each command takes. .PP -See the \[dq]rclone backend\[dq] -command (https://rclone.org/commands/rclone_backend/) for more info on -how to pass options and arguments. +See the backend (https://rclone.org/commands/rclone_backend/) command +for more info on how to pass options and arguments. .PP These can be run on a running backend using the rc command backend/command (https://rclone.org/rc/#backend-command). @@ -36932,7 +40463,7 @@ Invalid UTF-8 bytes will also be replaced (https://rclone.org/overview/#invalid-utf8). 
.SS Standard options .PP -Here are the standard options specific to hdfs (Hadoop distributed file +Here are the Standard options specific to hdfs (Hadoop distributed file system). .SS --hdfs-namenode .PP @@ -36975,7 +40506,7 @@ Connect to hdfs as root. .RE .SS Advanced options .PP -Here are the advanced options specific to hdfs (Hadoop distributed file +Here are the Advanced options specific to hdfs (Hadoop distributed file system). .SS --hdfs-service-principal-name .PP @@ -37047,6 +40578,569 @@ Default: Slash,Colon,Del,Ctl,InvalidUtf8,Dot No server-side \f[C]Move\f[R] or \f[C]DirMove\f[R]. .IP \[bu] 2 Checksums not implemented. +.SH HiDrive +.PP +Paths are specified as \f[C]remote:path\f[R] +.PP +Paths may be as deep as required, e.g. +\f[C]remote:directory/subdirectory\f[R]. +.PP +The initial setup for hidrive involves getting a token from HiDrive +which you need to do in your browser. +\f[C]rclone config\f[R] walks you through it. +.SS Configuration +.PP +Here is an example of how to make a remote called \f[C]remote\f[R]. +First run: +.IP +.nf +\f[C] + rclone config +\f[R] +.fi +.PP +This will guide you through an interactive setup process: +.IP +.nf +\f[C] +No remotes found - make a new one +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> remote +Type of storage to configure. +Choose a number from below, or type in your own value +[snip] +XX / HiDrive + \[rs] \[dq]hidrive\[dq] +[snip] +Storage> hidrive +OAuth Client Id - Leave blank normally. +client_id> +OAuth Client Secret - Leave blank normally. +client_secret> +Access permissions that rclone should use when requesting access from HiDrive. +Leave blank normally. +scope_access> +Edit advanced config? +y/n> n +Use auto config? +y/n> y +If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth?state=xxxxxxxxxxxxxxxxxxxxxx +Log in and authorize rclone for access +Waiting for code... 
+Got code +-------------------- +[remote] +type = hidrive +token = {\[dq]access_token\[dq]:\[dq]xxxxxxxxxxxxxxxxxxxx\[dq],\[dq]token_type\[dq]:\[dq]Bearer\[dq],\[dq]refresh_token\[dq]:\[dq]xxxxxxxxxxxxxxxxxxxxxxx\[dq],\[dq]expiry\[dq]:\[dq]xxxxxxxxxxxxxxxxxxxxxxx\[dq]} +-------------------- +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +\f[R] +.fi +.PP +\f[B]You should be aware that OAuth-tokens can be used to access your +account and hence should not be shared with other persons.\f[R] See the +below section for more information. +.PP +See the remote setup docs (https://rclone.org/remote_setup/) for how to +set it up on a machine with no Internet browser available. +.PP +Note that rclone runs a webserver on your local machine to collect the +token as returned from HiDrive. +This only runs from the moment it opens your browser to the moment you +get back the verification code. +The webserver runs on \f[C]http://127.0.0.1:53682/\f[R]. +If local port \f[C]53682\f[R] is protected by a firewall you may need to +temporarily unblock the firewall to complete authorization. +.PP +Once configured you can then use \f[C]rclone\f[R] like this, +.PP +List directories in top level of your HiDrive root folder +.IP +.nf +\f[C] +rclone lsd remote: +\f[R] +.fi +.PP +List all the files in your HiDrive filesystem +.IP +.nf +\f[C] +rclone ls remote: +\f[R] +.fi +.PP +To copy a local directory to a HiDrive directory called backup +.IP +.nf +\f[C] +rclone copy /home/source remote:backup +\f[R] +.fi +.SS Keeping your tokens safe +.PP +Any OAuth-tokens will be stored by rclone in the remote\[aq]s +configuration file as unencrypted text. +Anyone can use a valid refresh-token to access your HiDrive filesystem +without knowing your password. +Therefore you should make sure no one else can access your +configuration. +.PP +It is possible to encrypt rclone\[aq]s configuration file. 
+You can find information on securing your configuration file by viewing
+the configuration encryption
+docs (https://rclone.org/docs/#configuration-encryption).
+.SS Invalid refresh token
+.PP
+As can be verified here (https://developer.hidrive.com/basics-flows/),
+each \f[C]refresh_token\f[R] (for Native Applications) is valid for 60
+days.
+If used to access HiDrive, its validity will be automatically extended.
+.PP
+This means that if you
+.IP \[bu] 2
+Don\[aq]t use the HiDrive remote for 60 days
+.PP
+then rclone will return an error whose message implies that the
+refresh token is \f[I]invalid\f[R] or \f[I]expired\f[R].
+.PP
+To fix this you will need to authorize rclone to access your HiDrive
+account again.
+.PP
+Using
+.IP
+.nf
+\f[C]
+rclone config reconnect remote:
+\f[R]
+.fi
+.PP
+the process is very similar to the process of initial setup exemplified
+before.
+.SS Modified time and hashes
+.PP
+HiDrive allows modification times to be set on objects accurate to 1
+second.
+.PP
+HiDrive supports its own hash type (https://static.hidrive.com/dev/0001)
+which is used to verify the integrity of file contents after successful
+transfers.
+.SS Restricted filename characters
+.PP
+HiDrive cannot store files or folders that include \f[C]/\f[R] (0x2F) or
+null-bytes (0x00) in their name.
+Any other characters can be used in the names of files or folders.
+Additionally, files or folders cannot be named either of the following:
+\f[C].\f[R] or \f[C]..\f[R]
+.PP
+Therefore rclone will automatically replace these characters if files
+or folders are stored or accessed with such names.
+.PP
+You can read about how this filename encoding works in general here.
+.PP
+Keep in mind that HiDrive only supports file or folder names with a
+length of 255 characters or less.
+.SS Transfers
+.PP
+HiDrive limits file sizes per single request to a maximum of 2 GiB.
+To allow storage of larger files and allow for better upload +performance, the hidrive backend will use a chunked transfer for files +larger than 96 MiB. +Rclone will upload multiple parts/chunks of the file at the same time. +Chunks in the process of being uploaded are buffered in memory, so you +may want to restrict this behaviour on systems with limited resources. +.PP +You can customize this behaviour using the following options: +.IP \[bu] 2 +\f[C]chunk_size\f[R]: size of file parts +.IP \[bu] 2 +\f[C]upload_cutoff\f[R]: files larger or equal to this in size will use +a chunked transfer +.IP \[bu] 2 +\f[C]upload_concurrency\f[R]: number of file-parts to upload at the same +time +.PP +See the below section about configuration options for more details. +.SS Root folder +.PP +You can set the root folder for rclone. +This is the directory that rclone considers to be the root of your +HiDrive. +.PP +Usually, you will leave this blank, and rclone will use the root of the +account. +.PP +However, you can set this to restrict rclone to a specific folder +hierarchy. +.PP +This works by prepending the contents of the \f[C]root_prefix\f[R] +option to any paths accessed by rclone. +For example, the following two ways to access the home directory are +equivalent: +.IP +.nf +\f[C] +rclone lsd --hidrive-root-prefix=\[dq]/users/test/\[dq] remote:path + +rclone lsd remote:/users/test/path +\f[R] +.fi +.PP +See the below section about configuration options for more details. +.SS Directory member count +.PP +By default, rclone will know the number of directory members contained +in a directory. +For example, \f[C]rclone lsd\f[R] uses this information. +.PP +The acquisition of this information will result in additional time costs +for HiDrive\[aq]s API. +When dealing with large directory structures, it may be desirable to +circumvent this time cost, especially when this information is not +explicitly needed. 
+For this, the \f[C]disable_fetching_member_count\f[R] option can be +used. +.PP +See the below section about configuration options for more details. +.SS Standard options +.PP +Here are the Standard options specific to hidrive (HiDrive). +.SS --hidrive-client-id +.PP +OAuth Client Id. +.PP +Leave blank normally. +.PP +Properties: +.IP \[bu] 2 +Config: client_id +.IP \[bu] 2 +Env Var: RCLONE_HIDRIVE_CLIENT_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --hidrive-client-secret +.PP +OAuth Client Secret. +.PP +Leave blank normally. +.PP +Properties: +.IP \[bu] 2 +Config: client_secret +.IP \[bu] 2 +Env Var: RCLONE_HIDRIVE_CLIENT_SECRET +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --hidrive-scope-access +.PP +Access permissions that rclone should use when requesting access from +HiDrive. +.PP +Properties: +.IP \[bu] 2 +Config: scope_access +.IP \[bu] 2 +Env Var: RCLONE_HIDRIVE_SCOPE_ACCESS +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: \[dq]rw\[dq] +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]rw\[dq] +.RS 2 +.IP \[bu] 2 +Read and write access to resources. +.RE +.IP \[bu] 2 +\[dq]ro\[dq] +.RS 2 +.IP \[bu] 2 +Read-only access to resources. +.RE +.RE +.SS Advanced options +.PP +Here are the Advanced options specific to hidrive (HiDrive). +.SS --hidrive-token +.PP +OAuth Access Token as a JSON blob. +.PP +Properties: +.IP \[bu] 2 +Config: token +.IP \[bu] 2 +Env Var: RCLONE_HIDRIVE_TOKEN +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --hidrive-auth-url +.PP +Auth server URL. +.PP +Leave blank to use the provider defaults. +.PP +Properties: +.IP \[bu] 2 +Config: auth_url +.IP \[bu] 2 +Env Var: RCLONE_HIDRIVE_AUTH_URL +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --hidrive-token-url +.PP +Token server url. +.PP +Leave blank to use the provider defaults. 
+.PP +Properties: +.IP \[bu] 2 +Config: token_url +.IP \[bu] 2 +Env Var: RCLONE_HIDRIVE_TOKEN_URL +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --hidrive-scope-role +.PP +User-level that rclone should use when requesting access from HiDrive. +.PP +Properties: +.IP \[bu] 2 +Config: scope_role +.IP \[bu] 2 +Env Var: RCLONE_HIDRIVE_SCOPE_ROLE +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: \[dq]user\[dq] +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]user\[dq] +.RS 2 +.IP \[bu] 2 +User-level access to management permissions. +.IP \[bu] 2 +This will be sufficient in most cases. +.RE +.IP \[bu] 2 +\[dq]admin\[dq] +.RS 2 +.IP \[bu] 2 +Extensive access to management permissions. +.RE +.IP \[bu] 2 +\[dq]owner\[dq] +.RS 2 +.IP \[bu] 2 +Full access to management permissions. +.RE +.RE +.SS --hidrive-root-prefix +.PP +The root/parent folder for all paths. +.PP +Fill in to use the specified folder as the parent for all paths given to +the remote. +This way rclone can use any folder as its starting point. +.PP +Properties: +.IP \[bu] 2 +Config: root_prefix +.IP \[bu] 2 +Env Var: RCLONE_HIDRIVE_ROOT_PREFIX +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: \[dq]/\[dq] +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]/\[dq] +.RS 2 +.IP \[bu] 2 +The topmost directory accessible by rclone. +.IP \[bu] 2 +This will be equivalent with \[dq]root\[dq] if rclone uses a regular +HiDrive user account. +.RE +.IP \[bu] 2 +\[dq]root\[dq] +.RS 2 +.IP \[bu] 2 +The topmost directory of the HiDrive user account +.RE +.IP \[bu] 2 +\[dq]\[dq] +.RS 2 +.IP \[bu] 2 +This specifies that there is no root-prefix for your paths. +.IP \[bu] 2 +When using this you will always need to specify paths to this remote +with a valid parent e.g. +\[dq]remote:/path/to/dir\[dq] or \[dq]remote:root/path/to/dir\[dq]. +.RE +.RE +.SS --hidrive-endpoint +.PP +Endpoint for the service. +.PP +This is the URL that API-calls will be made to. 
+.PP
+Properties:
+.IP \[bu] 2
+Config: endpoint
+.IP \[bu] 2
+Env Var: RCLONE_HIDRIVE_ENDPOINT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: \[dq]https://api.hidrive.strato.com/2.1\[dq]
+.SS --hidrive-disable-fetching-member-count
+.PP
+Do not fetch number of objects in directories unless it is absolutely
+necessary.
+.PP
+Requests may be faster if the number of objects in subdirectories is not
+fetched.
+.PP
+Properties:
+.IP \[bu] 2
+Config: disable_fetching_member_count
+.IP \[bu] 2
+Env Var: RCLONE_HIDRIVE_DISABLE_FETCHING_MEMBER_COUNT
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS --hidrive-chunk-size
+.PP
+Chunksize for chunked uploads.
+.PP
+Any files larger than the configured cutoff (or files of unknown size)
+will be uploaded in chunks of this size.
+.PP
+The upper limit for this is 2147483647 bytes (about 2.000Gi).
+That is the maximum amount of bytes a single upload-operation will
+support.
+Setting this above the upper limit or to a negative value will cause
+uploads to fail.
+.PP
+Setting this to larger values may increase the upload speed at the cost
+of using more memory.
+It can be set to smaller values to save on memory.
+.PP
+Properties:
+.IP \[bu] 2
+Config: chunk_size
+.IP \[bu] 2
+Env Var: RCLONE_HIDRIVE_CHUNK_SIZE
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 48Mi
+.SS --hidrive-upload-cutoff
+.PP
+Cutoff/Threshold for chunked uploads.
+.PP
+Any files larger than this will be uploaded in chunks of the configured
+chunksize.
+.PP
+The upper limit for this is 2147483647 bytes (about 2.000Gi).
+That is the maximum amount of bytes a single upload-operation will
+support.
+Setting this above the upper limit will cause uploads to fail.
+.PP
+Properties:
+.IP \[bu] 2
+Config: upload_cutoff
+.IP \[bu] 2
+Env Var: RCLONE_HIDRIVE_UPLOAD_CUTOFF
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 96Mi
+.SS --hidrive-upload-concurrency
+.PP
+Concurrency for chunked uploads.
+.PP
+This is the upper limit for how many transfers for the same file are
+running concurrently.
+Setting this to a value smaller than 1 will cause uploads to
+deadlock.
+.PP
+If you are uploading small numbers of large files over high-speed links
+and these uploads do not fully utilize your bandwidth, then increasing
+this may help to speed up the transfers.
+.PP
+Properties:
+.IP \[bu] 2
+Config: upload_concurrency
+.IP \[bu] 2
+Env Var: RCLONE_HIDRIVE_UPLOAD_CONCURRENCY
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 4
+.SS --hidrive-encoding
+.PP
+The encoding for the backend.
+.PP
+See the encoding section in the
+overview (https://rclone.org/overview/#encoding) for more info.
+.PP
+Properties:
+.IP \[bu] 2
+Config: encoding
+.IP \[bu] 2
+Env Var: RCLONE_HIDRIVE_ENCODING
+.IP \[bu] 2
+Type: MultiEncoder
+.IP \[bu] 2
+Default: Slash,Dot
+.SS Limitations
+.SS Symbolic links
+.PP
+HiDrive is able to store symbolic links (\f[I]symlinks\f[R]) by design,
+for example, when unpacked from a zip archive.
+.PP
+There exists no direct mechanism to manage native symlinks in remotes.
+As such this implementation has chosen to ignore any native symlinks
+present in the remote.
+rclone will not be able to access or show any symlinks stored in the
+hidrive-remote.
+This means symlinks cannot be individually removed, copied, or moved,
+except when removing, copying, or moving the parent folder.
+.PP
+\f[I]This does not affect the \f[CI].rclonelink\f[I]-files that rclone
+uses to encode and store symbolic links.\f[R]
+.SS Sparse files
+.PP
+It is possible to store sparse files in HiDrive.
+.PP
+Note that copying a sparse file will expand the holes into null-byte
+(0x00) regions that will then consume disk space.
+Likewise, when downloading a sparse file, the resulting file will have
+null-byte regions in the place of file holes.
.SH HTTP
.PP
The HTTP remote is a read only remote for reading files of a webserver.
@@ -37106,7 +41200,7 @@ name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] -XX / http Connection +XX / HTTP \[rs] \[dq]http\[dq] [snip] Storage> http @@ -37196,10 +41290,10 @@ rclone lsd :http,url=\[aq]https://beta.rclone.org\[aq]: .fi .SS Standard options .PP -Here are the standard options specific to http (http Connection). +Here are the Standard options specific to http (HTTP). .SS --http-url .PP -URL of http host to connect to. +URL of HTTP host to connect to. .PP E.g. \[dq]https://example.com\[dq], or @@ -37217,7 +41311,7 @@ Type: string Required: true .SS Advanced options .PP -Here are the advanced options specific to http (http Connection). +Here are the Advanced options specific to http (HTTP). .SS --http-headers .PP Set HTTP headers for all transactions. @@ -37304,7 +41398,7 @@ rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member of an rclone union remote. .PP See List of backends that do not support rclone -about (https://rclone.org/overview/#optional-features) See rclone +about (https://rclone.org/overview/#optional-features) and rclone about (https://rclone.org/commands/rclone_about/) .SH Hubic .PP @@ -37435,7 +41529,7 @@ Note that Hubic wraps the Swift backend, so most of the properties of are the same. .SS Standard options .PP -Here are the standard options specific to hubic (Hubic). +Here are the Standard options specific to hubic (Hubic). .SS --hubic-client-id .PP OAuth Client Id. @@ -37468,7 +41562,7 @@ Type: string Required: false .SS Advanced options .PP -Here are the advanced options specific to hubic (Hubic). +Here are the Advanced options specific to hubic (Hubic). .SS --hubic-token .PP OAuth Access Token as a JSON blob. @@ -37575,6 +41669,473 @@ credentials and ignores the expires field returned by the Hubic API. The Swift API doesn\[aq]t return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won\[aq]t check or use the MD5SUM for these. 
+.SH Internet Archive
+.PP
+The Internet Archive backend utilizes Items on
+archive.org (https://archive.org/)
+.PP
+Refer to IAS3 API
+documentation (https://archive.org/services/docs/api/ias3.html) for the
+API this backend uses.
+.PP
+Paths are specified as \f[C]remote:bucket\f[R] (or \f[C]remote:\f[R] for
+the \f[C]lsd\f[R] command). You may put subdirectories in too, e.g.
+\f[C]remote:item/path/to/dir\f[R].
+.PP
+Once you have made a remote (see the provider specific section above)
+you can use it like this:
+.PP
+Unlike S3, listing all the items uploaded by you isn\[aq]t supported.
+.PP
+Make a new item
+.IP
+.nf
+\f[C]
+rclone mkdir remote:item
+\f[R]
+.fi
+.PP
+List the contents of an item
+.IP
+.nf
+\f[C]
+rclone ls remote:item
+\f[R]
+.fi
+.PP
+Sync \f[C]/home/local/directory\f[R] to the remote item, deleting any
+excess files in the item.
+.IP
+.nf
+\f[C]
+rclone sync -i /home/local/directory remote:item
+\f[R]
+.fi
+.SS Notes
+.PP
+Because of Internet Archive\[aq]s architecture, it enqueues write
+operations (and extra post-processing) in a per-item queue.
+You can check an item\[aq]s queue at
+https://catalogd.archive.org/history/item-name-here .
+Because of that, all uploads/deletes will not show up immediately and
+will take some time to become available.
+The per-item queue is enqueued to another queue, the Item Deriver Queue.
+You can check the status of the Item Deriver Queue
+here. (https://catalogd.archive.org/catalog.php?whereami=1) This queue
+has a limit, and it may block you from uploading, or even deleting.
+You should avoid uploading a lot of small files for better behavior.
+.PP
+You can optionally wait for the server\[aq]s processing to finish, by
+setting a non-zero value for the \f[C]wait_archive\f[R] key.
+By making it wait, rclone can do normal file comparison.
+Make sure to set a large enough value (e.g.
+\f[C]30m0s\f[R] for smaller files) as it can take a long time depending
+on the server\[aq]s queue.
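As a sketch, a remote that opts into this waiting behaviour could look like the following in the rclone config file (the access keys are placeholders and the 30-minute timeout is only an example):

```ini
[ia]
type = internetarchive
access_key_id = XXXX
secret_access_key = XXXX
wait_archive = 30m0s
```

With this, after write operations such as `rclone sync -i /home/local/directory ia:item`, rclone will wait up to 30 minutes for the server's processing so that normal file comparison works.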
+.SS About metadata
+.PP
+This backend supports setting, updating and reading metadata of each
+file.
+The metadata will appear as file metadata on Internet Archive.
+However, some fields are reserved by both Internet Archive and rclone.
+.PP
+The following are reserved by Internet Archive: - \f[C]name\f[R] -
+\f[C]source\f[R] - \f[C]size\f[R] - \f[C]md5\f[R] - \f[C]crc32\f[R] -
+\f[C]sha1\f[R] - \f[C]format\f[R] - \f[C]old_version\f[R] -
+\f[C]viruscheck\f[R]
+.PP
+Attempts to set values for these keys are ignored with a warning.
+Only setting \f[C]mtime\f[R] is an exception.
+Doing so behaves identically to setting ModTime.
+.PP
+rclone reserves all the keys starting with \f[C]rclone-\f[R].
+Setting values for these keys will give you warnings, but the values
+are set as requested.
+.PP
+If there are multiple values for a key, only the first one is returned.
+This is a limitation of rclone, which supports one value per key.
+It can be triggered when you do a server-side copy.
+.PP
+Reading metadata will also provide custom (neither standard nor
+reserved) keys.
+.SS Configuration
+.PP
+Here is an example of making an internetarchive configuration.
+Most of this applies to the other providers as well; any differences
+are described below.
+.PP
+First run
+.IP
+.nf
+\f[C]
+rclone config
+\f[R]
+.fi
+.PP
+This will guide you through an interactive setup process.
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+XX / InternetArchive Items
+ \[rs] (internetarchive)
+Storage> internetarchive
+Option access_key_id.
+IAS3 Access Key.
+Leave blank for anonymous access.
+You can find one here: https://archive.org/account/s3.php
+Enter a value. Press Enter to leave empty.
+access_key_id> XXXX
+Option secret_access_key.
+IAS3 Secret Key (password).
+Leave blank for anonymous access.
+Enter a value. Press Enter to leave empty. +secret_access_key> XXXX +Edit advanced config? +y) Yes +n) No (default) +y/n> y +Option endpoint. +IAS3 Endpoint. +Leave blank for default value. +Enter a string value. Press Enter for the default (https://s3.us.archive.org). +endpoint> +Option front_endpoint. +Host of InternetArchive Frontend. +Leave blank for default value. +Enter a string value. Press Enter for the default (https://archive.org). +front_endpoint> +Option disable_checksum. +Don\[aq]t store MD5 checksum with object metadata. +Normally rclone will calculate the MD5 checksum of the input before +uploading it so it can ask the server to check the object against checksum. +This is great for data integrity checking but can cause long delays for +large files to start uploading. +Enter a boolean value (true or false). Press Enter for the default (true). +disable_checksum> true +Option encoding. +The encoding for the backend. +See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. +Enter a encoder.MultiEncoder value. Press Enter for the default (Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot). +encoding> +Edit advanced config? +y) Yes +n) No (default) +y/n> n +-------------------- +[remote] +type = internetarchive +access_key_id = XXXX +secret_access_key = XXXX +-------------------- +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +\f[R] +.fi +.SS Standard options +.PP +Here are the Standard options specific to internetarchive (Internet +Archive). +.SS --internetarchive-access-key-id +.PP +IAS3 Access Key. +.PP +Leave blank for anonymous access. +You can find one here: https://archive.org/account/s3.php +.PP +Properties: +.IP \[bu] 2 +Config: access_key_id +.IP \[bu] 2 +Env Var: RCLONE_INTERNETARCHIVE_ACCESS_KEY_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --internetarchive-secret-access-key +.PP +IAS3 Secret Key (password). 
+.PP
+Leave blank for anonymous access.
+.PP
+Properties:
+.IP \[bu] 2
+Config: secret_access_key
+.IP \[bu] 2
+Env Var: RCLONE_INTERNETARCHIVE_SECRET_ACCESS_KEY
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS Advanced options
+.PP
+Here are the Advanced options specific to internetarchive (Internet
+Archive).
+.SS --internetarchive-endpoint
+.PP
+IAS3 Endpoint.
+.PP
+Leave blank for default value.
+.PP
+Properties:
+.IP \[bu] 2
+Config: endpoint
+.IP \[bu] 2
+Env Var: RCLONE_INTERNETARCHIVE_ENDPOINT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: \[dq]https://s3.us.archive.org\[dq]
+.SS --internetarchive-front-endpoint
+.PP
+Host of InternetArchive Frontend.
+.PP
+Leave blank for default value.
+.PP
+Properties:
+.IP \[bu] 2
+Config: front_endpoint
+.IP \[bu] 2
+Env Var: RCLONE_INTERNETARCHIVE_FRONT_ENDPOINT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: \[dq]https://archive.org\[dq]
+.SS --internetarchive-disable-checksum
+.PP
+Don\[aq]t ask the server to test against the MD5 checksum calculated
+by rclone.
+Normally rclone will calculate the MD5 checksum of the input before
+uploading it so it can ask the server to check the object against the
+checksum.
+This is great for data integrity checking but can cause long delays
+before large files start uploading.
+.PP
+Properties:
+.IP \[bu] 2
+Config: disable_checksum
+.IP \[bu] 2
+Env Var: RCLONE_INTERNETARCHIVE_DISABLE_CHECKSUM
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: true
+.SS --internetarchive-wait-archive
+.PP
+Timeout for waiting for the server\[aq]s processing tasks
+(specifically archive and book_op) to finish.
+Only enable this if you need writes to be guaranteed to be reflected
+after the operation returns.
+Set to 0 to disable waiting.
+No error is thrown in case of a timeout.
+.PP
+Properties:
+.IP \[bu] 2
+Config: wait_archive
+.IP \[bu] 2
+Env Var: RCLONE_INTERNETARCHIVE_WAIT_ARCHIVE
+.IP \[bu] 2
+Type: Duration
+.IP \[bu] 2
+Default: 0s
+.SS --internetarchive-encoding
+.PP
+The encoding for the backend.
+.PP
+See the encoding section in the
+overview (https://rclone.org/overview/#encoding) for more info.
+.PP
+Properties:
+.IP \[bu] 2
+Config: encoding
+.IP \[bu] 2
+Env Var: RCLONE_INTERNETARCHIVE_ENCODING
+.IP \[bu] 2
+Type: MultiEncoder
+.IP \[bu] 2
+Default: Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot
+.SS Metadata
+.PP
+Metadata fields provided by Internet Archive.
+If there are multiple values for a key, only the first one is returned.
+This is a limitation of rclone, which supports only one value per key.
+.PP
+The item owner is able to add custom keys, and the metadata feature
+retrieves all keys, including custom ones.
+.PP
+Here are the possible system metadata items for the internetarchive
+backend.
+.PP
+.TS
+tab(@);
+lw(11.1n) lw(11.1n) lw(11.1n) lw(16.6n) lw(20.3n).
+T{
+Name
+T}@T{
+Help
+T}@T{
+Type
+T}@T{
+Example
+T}@T{
+Read Only
+T}
+_
+T{
+crc32
+T}@T{
+CRC32 calculated by Internet Archive
+T}@T{
+string
+T}@T{
+01234567
+T}@T{
+N
+T}
+T{
+format
+T}@T{
+Name of format identified by Internet Archive
+T}@T{
+string
+T}@T{
+Comma-Separated Values
+T}@T{
+N
+T}
+T{
+md5
+T}@T{
+MD5 hash calculated by Internet Archive
+T}@T{
+string
+T}@T{
+01234567012345670123456701234567
+T}@T{
+N
+T}
+T{
+mtime
+T}@T{
+Time of last modification, managed by Rclone
+T}@T{
+RFC 3339
+T}@T{
+2006-01-02T15:04:05.999999999Z
+T}@T{
+N
+T}
+T{
+name
+T}@T{
+Full file path, without the bucket part
+T}@T{
+filename
+T}@T{
+backend/internetarchive/internetarchive.go
+T}@T{
+N
+T}
+T{
+old_version
+T}@T{
+Whether the file was replaced and moved by keep-old-version flag
+T}@T{
+boolean
+T}@T{
+true
+T}@T{
+N
+T}
+T{
+rclone-ia-mtime
+T}@T{
+Time of last modification, managed by Internet Archive
+T}@T{
+RFC 3339
+T}@T{
+2006-01-02T15:04:05.999999999Z
+T}@T{
+N
+T}
+T{
+rclone-mtime
+T}@T{
+Time of last modification, managed by Rclone
+T}@T{
+RFC 3339
+T}@T{
+2006-01-02T15:04:05.999999999Z
+T}@T{
+N
+T}
+T{
+rclone-update-track
+T}@T{
+Random value used by Rclone for tracking changes inside Internet Archive
+T}@T{ +string +T}@T{ +aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa +T}@T{ +N +T} +T{ +sha1 +T}@T{ +SHA1 hash calculated by Internet Archive +T}@T{ +string +T}@T{ +0123456701234567012345670123456701234567 +T}@T{ +N +T} +T{ +size +T}@T{ +File size in bytes +T}@T{ +decimal number +T}@T{ +123456 +T}@T{ +N +T} +T{ +source +T}@T{ +The source of the file +T}@T{ +string +T}@T{ +original +T}@T{ +N +T} +T{ +viruscheck +T}@T{ +The last time viruscheck process was run for the file (?) +T}@T{ +unixtime +T}@T{ +1654191352 +T}@T{ +N +T} +.TE +.PP +See the metadata (https://rclone.org/docs/#metadata) docs for more info. .SH Jottacloud .PP Jottacloud is a cloud storage service provider from a Norwegian company, @@ -37655,56 +42216,78 @@ s) Set configuration password q) Quit config n/s/q> n name> remote +Option Storage. Type of storage to configure. -Enter a string value. Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value +Choose a number from below, or type in your own value. [snip] XX / Jottacloud - \[rs] \[dq]jottacloud\[dq] + \[rs] (jottacloud) [snip] Storage> jottacloud -** See help for jottacloud backend at: https://rclone.org/jottacloud/ ** - -Edit advanced config? (y/n) -y) Yes -n) No -y/n> n -Remote config -Use legacy authentication?. -This is only required for certain whitelabel versions of Jottacloud and not recommended for normal users. +Edit advanced config? y) Yes n) No (default) y/n> n - -Generate a personal login token here: https://www.jottacloud.com/web/secure +Option config_type. +Select authentication type. +Choose a number from below, or type in an existing string value. +Press Enter for the default (standard). + / Standard authentication. + 1 | Use this if you\[aq]re a normal Jottacloud user. + \[rs] (standard) + / Legacy authentication. + 2 | This is only required for certain whitelabel versions of Jottacloud and not recommended for normal users. + \[rs] (legacy) + / Telia Cloud authentication. 
+ 3 | Use this if you are using Telia Cloud. + \[rs] (telia) + / Tele2 Cloud authentication. + 4 | Use this if you are using Tele2 Cloud. + \[rs] (tele2) +config_type> 1 +Personal login token. +Generate here: https://www.jottacloud.com/web/secure Login Token> - -Do you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client? - +Use a non-standard device/mountpoint? +Choosing no, the default, will let you access the storage used for the archive +section of the official Jottacloud client. If you instead want to access the +sync or the backup section, for example, you must choose yes. y) Yes -n) No +n) No (default) y/n> y -Please select the device to use. Normally this will be Jotta -Choose a number from below, or type in an existing value +Option config_device. +The device to use. In standard setup the built-in Jotta device is used, +which contains predefined mountpoints for archive, sync etc. All other devices +are treated as backup devices by the official Jottacloud client. You may create +a new by entering a unique name. +Choose a number from below, or type in your own string value. +Press Enter for the default (DESKTOP-3H31129). 1 > DESKTOP-3H31129 2 > Jotta -Devices> 2 -Please select the mountpoint to user. Normally this will be Archive -Choose a number from below, or type in an existing value +config_device> 2 +Option config_mountpoint. +The mountpoint to use for the built-in device Jotta. +The standard setup is to use the Archive mountpoint. Most other mountpoints +have very limited support in rclone and should generally be avoided. +Choose a number from below, or type in an existing string value. +Press Enter for the default (Archive). 
1 > Archive - 2 > Links + 2 > Shared 3 > Sync - -Mountpoints> 1 +config_mountpoint> 1 -------------------- -[jotta] +[remote] type = jottacloud +configVersion = 1 +client_id = jottacli +client_secret = +tokenURL = https://id.jottacloud.com/auth/realms/jottacloud/protocol/openid-connect/token token = {........} +username = 2940e57271a93d987d6f8a21 device = Jotta mountpoint = Archive -configVersion = 1 -------------------- -y) Yes this is OK +y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y @@ -37739,22 +42322,32 @@ rclone copy /home/source remote:backup .SS Devices and Mountpoints .PP The official Jottacloud client registers a device for each computer you -install it on, and then creates a mountpoint for each folder you select -for Backup. -The web interface uses a special device called Jotta for the Archive and -Sync mountpoints. +install it on, and shows them in the backup section of the user +interface. +For each folder you select for backup it will create a mountpoint within +this device. +A built-in device called Jotta is special, and contains mountpoints +Archive, Sync and some others, used for corresponding features in +official clients. .PP -With rclone you\[aq]ll want to use the Jotta/Archive device/mountpoint -in most cases, however if you want to access files uploaded by any of -the official clients rclone provides the option to select other devices -and mountpoints during config. -Note that uploading files is currently not supported to other devices -than Jotta. +With rclone you\[aq]ll want to use the standard Jotta/Archive +device/mountpoint in most cases. +However, you may for example want to access files from the sync or +backup functionality provided by the official clients, and rclone +therefore provides the option to select other devices and mountpoints +during config. .PP -The built-in Jotta device may also contain several other mountpoints, -such as: Latest, Links, Shared and Trash. 
-These are special mountpoints with a different internal representation -than the \[dq]regular\[dq] mountpoints. +You are allowed to create new devices and mountpoints. +All devices except the built-in Jotta device are treated as backup +devices by official Jottacloud clients, and the mountpoints on them are +individual backup sets. +.PP +With the built-in Jotta device, only existing, built-in, mountpoints can +be selected. +In addition to the mentioned Archive and Sync, it may contain several +other mountpoints such as: Latest, Links, Shared and Trash. +All of these are special mountpoints with a different internal +representation than the \[dq]regular\[dq] mountpoints. Rclone will only to a very limited degree support them. Generally you should avoid these, unless you know what you are doing. .SS --fast-list @@ -37893,7 +42486,7 @@ To view your current quota you can use the limit (unless it is unlimited) and the current usage. .SS Advanced options .PP -Here are the advanced options specific to jottacloud (Jottacloud). +Here are the Advanced options specific to jottacloud (Jottacloud). .SS --jottacloud-md5-memory-limit .PP Files bigger than this will be cached on disk to calculate the MD5 if @@ -38145,7 +42738,7 @@ replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t be used in XML strings. .SS Standard options .PP -Here are the standard options specific to koofr (Koofr, Digi Storage and +Here are the Standard options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers). .SS --koofr-provider .PP @@ -38269,7 +42862,7 @@ Type: string Required: true .SS Advanced options .PP -Here are the advanced options specific to koofr (Koofr, Digi Storage and +Here are the Advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers). .SS --koofr-mountid .PP @@ -38728,7 +43321,7 @@ replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t be used in JSON strings. 
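The restricted character mapping described above can be overridden per invocation via the backend's encoding option. A minimal sketch, assuming a configured Mail.ru remote named `mailru:` (the encoding values chosen here are illustrative, not a recommendation):

```shell
# Every backend's "encoding" option is also exposed as a command-line
# flag; here it is overridden for a single listing operation.
rclone lsf mailru:backup --mailru-encoding "Slash,InvalidUtf8"
```

The same pattern works for any backend via its `--<backend>-encoding` flag or `RCLONE_<BACKEND>_ENCODING` environment variable.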
.SS Standard options
.PP
-Here are the standard options specific to mailru (Mail.ru Cloud).
+Here are the Standard options specific to mailru (Mail.ru Cloud).
.SS --mailru-user
.PP
User name (usually email).
@@ -38801,7 +43394,7 @@ Disable
.RE
.SS Advanced options
.PP
-Here are the advanced options specific to mailru (Mail.ru Cloud).
+Here are the Advanced options specific to mailru (Mail.ru Cloud).
.SS --mailru-speedup-file-patterns
.PP
Comma separated list of file name patterns eligible for speedup (put by
@@ -39154,6 +43747,48 @@ messages in the log about duplicates.
.PP
Use \f[C]rclone dedupe\f[R] to fix duplicated files.
.SS Failure to log-in
+.SS Object not found
+.PP
+If you are connecting to your Mega remote for the first time, to test
+access and synchronisation, you may receive an error such as
+.IP
+.nf
+\f[C]
+Failed to create file system for \[dq]my-mega-remote:\[dq]:
+couldn\[aq]t login: Object (typically, node or user) not found
+\f[R]
+.fi
+.PP
+The diagnostic steps often recommended in the rclone
+forum (https://forum.rclone.org/search?q=mega) start with the
+\f[B]MEGAcmd\f[R] utility.
+Note that this refers to the official C++ command from
+https://github.com/meganz/MEGAcmd and not the Go command from
+t3rm1n4l/megacmd, which is no longer maintained.
+.PP
+Follow the instructions for installing MEGAcmd and try accessing your
+remote as they recommend.
+You can establish whether you can log in using MEGAcmd, obtain
+diagnostic information to help you, and search or work with others
+in the forum.
+.IP
+.nf
+\f[C]
+MEGA CMD> login me\[at]example.com
+Password:
+Fetching nodes ...
+Loading transfers from local cache
+Login complete as me\[at]example.com
+me\[at]example.com:/$
+\f[R]
+.fi
+.PP
+Note that some have found issues with passwords containing special
+characters.
+If you cannot log in with rclone but MEGAcmd logs in just fine,
+consider temporarily changing your password to purely alphanumeric
+characters, in case that helps.
+.SS Repeated commands block access
.PP
Mega remotes seem to get blocked (reject logins) under \[dq]heavy
use\[dq].
@@ -39205,7 +43840,7 @@ and you are sure the user and the password are correct, likely you
have got the remote blocked for a while.
.SS Standard options
.PP
-Here are the standard options specific to mega (Mega).
+Here are the Standard options specific to mega (Mega).
.SS --mega-user
.PP
User name.
@@ -39237,7 +43872,7 @@ Type: string
Required: true
.SS Advanced options
.PP
-Here are the advanced options specific to mega (Mega).
+Here are the Advanced options specific to mega (Mega).
.SS --mega-debug
.PP
Output more debug from Mega.
@@ -39360,7 +43995,7 @@ to 1 nS.
.PP
The memory backend replaces the default restricted characters
set (https://rclone.org/overview/#restricted-characters).
-.SS Akamai NetStorage
+.SH Akamai NetStorage
.PP
Paths are specified as \f[C]remote:\f[R]
You may put subdirectories in too, e.g.
@@ -39377,6 +44012,7 @@ For example, this is commonly configured with or without a CP code:
* See all buckets rclone lsd remote:
The initial setup for Netstorage involves getting an account and secret.
Use \f[C]rclone config\f[R] to walk you through the setup process.
+.SS Configuration
.PP
Here\[aq]s an example of how to make a remote called \f[C]ns1\f[R].
.IP "1." 3
-.SS ListR Feature +.SS \f[C]--fast-list\f[R] / ListR support .PP NetStorage remote supports the ListR feature by using the \[dq]list\[dq] NetStorage API action to return a lexicographical list of all objects @@ -39621,7 +44258,7 @@ display number of files in the directory and directory size as -1 when ListR method is used. The workaround is to pass \[dq]--disable listR\[dq] flag if these numbers are important in the output. -.SS Purge Feature +.SS Purge .PP NetStorage remote supports the purge feature by using the \[dq]quick-delete\[dq] NetStorage API action. @@ -39639,7 +44276,7 @@ immediately and objects targeted for quick-delete may still be accessible. .SS Standard options .PP -Here are the standard options specific to netstorage (Akamai +Here are the Standard options specific to netstorage (Akamai NetStorage). .SS --netstorage-host .PP @@ -39690,7 +44327,7 @@ Type: string Required: true .SS Advanced options .PP -Here are the advanced options specific to netstorage (Akamai +Here are the Advanced options specific to netstorage (Akamai NetStorage). .SS --netstorage-protocol .PP @@ -39738,9 +44375,8 @@ rclone backend COMMAND remote: .PP The help below will explain what arguments each command takes. .PP -See the \[dq]rclone backend\[dq] -command (https://rclone.org/commands/rclone_backend/) for more info on -how to pass options and arguments. +See the backend (https://rclone.org/commands/rclone_backend/) command +for more info on how to pass options and arguments. .PP These can be run on a running backend using the rc command backend/command (https://rclone.org/rc/#backend-command). @@ -39997,7 +44633,7 @@ parties access to a single container or putting credentials into an untrusted environment such as a CI build server. .SS Standard options .PP -Here are the standard options specific to azureblob (Microsoft Azure +Here are the Standard options specific to azureblob (Microsoft Azure Blob Storage). 
.SS --azureblob-account
.PP
@@ -40118,7 +44754,7 @@ Type: bool
Default: false
.SS Advanced options
.PP
-Here are the advanced options specific to azureblob (Microsoft Azure
+Here are the Advanced options specific to azureblob (Microsoft Azure
Blob Storage).
.SS --azureblob-msi-object-id
.PP
@@ -40450,15 +45086,22 @@ rclone mount
or use policy \f[C]mfs\f[R] (most free space) as a member of an rclone
union remote.
.PP
See List of backends that do not support rclone
-about (https://rclone.org/overview/#optional-features) See rclone
+about (https://rclone.org/overview/#optional-features) and rclone
about (https://rclone.org/commands/rclone_about/)
.SS Azure Storage Emulator Support
.PP
-You can test rclone with storage emulator locally, to do this make sure
-azure storage emulator installed locally and set up a new remote with
-\f[C]rclone config\f[R] follow instructions described in introduction,
-set \f[C]use_emulator\f[R] config as \f[C]true\f[R], you do not need to
-provide default account name or key if using emulator.
+You can run rclone with a storage emulator (usually \f[I]azurite\f[R]).
+.PP
+To do this, just set up a new remote with \f[C]rclone config\f[R]
+following the instructions described in the introduction and set
+\f[C]use_emulator\f[R] config as \f[C]true\f[R].
+You do not need to provide a default account name or an account key.
+.PP
+Also, if you want to access a storage emulator instance running on a
+different machine, you can override the \f[I]Endpoint\f[R] parameter
+in the advanced settings, setting it to
+\f[C]http(s)://<host>:<port>/devstoreaccount1\f[R] (e.g.
+\f[C]http://10.254.2.5:10000/devstoreaccount1\f[R]).
.SH Microsoft OneDrive
.PP
Paths are specified as \f[C]remote:path\f[R]
@@ -40595,13 +45238,17 @@ rclone copy /home/source remote:backup
.fi
.SS Getting your own Client ID and Key
.PP
-You can use your own Client ID if the default (\f[C]client_id\f[R] left
-blank) one doesn\[aq]t work for you or you see lots of throttling.
-The default Client ID and Key is shared by all rclone users when
+rclone uses a default Client ID when talking to OneDrive, unless a
+custom \f[C]client_id\f[R] is specified in the config.
+The default Client ID and Key are shared by all rclone users when
performing requests.
.PP
-If you are having problems with them (E.g., seeing a lot of throttling),
-you can get your own Client ID and Key by following the steps below:
+You may choose to create and use your own Client ID if the default
+one does not work well for you.
+For example, you might see throttling.
+.SS Creating Client ID for OneDrive Personal
+.PP
+To create your own Client ID, please follow these steps:
.IP "1." 3
Open
https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade
@@ -40628,8 +45275,8 @@ select \f[C]delegated permissions\f[R].
Search and select the following permissions: \f[C]Files.Read\f[R],
\f[C]Files.ReadWrite\f[R], \f[C]Files.Read.All\f[R],
\f[C]Files.ReadWrite.All\f[R], \f[C]offline_access\f[R],
-\f[C]User.Read\f[R], and optionally \f[C]Sites.Read.All\f[R] (see
-below).
+\f[C]User.Read\f[R] and \f[C]Sites.Read.All\f[R] (if custom access
+scopes are configured, select the permissions accordingly).
Once selected click \f[C]Add permissions\f[R] at the bottom.
.PP
Now the application is complete.
@@ -40637,12 +45284,51 @@ Run \f[C]rclone config\f[R] to create or edit a OneDrive remote.
Supply the app ID and password as Client ID and Secret, respectively.
rclone will walk you through the remaining steps.
.PP
+The access_scopes option allows you to configure the permissions
+requested by rclone.
+See Microsoft
+Docs (https://docs.microsoft.com/en-us/graph/permissions-reference#files-permissions)
+for more information about the different scopes.
+.PP
The \f[C]Sites.Read.All\f[R] permission is required if you need to
search SharePoint sites when configuring the
remote (https://github.com/rclone/rclone/pull/5883).
-However, if that permission is not assigned, you need to set
+However, if that permission is not assigned, you need to exclude
+\f[C]Sites.Read.All\f[R] from your access scopes or set
\f[C]disable_site_permission\f[R] option to true in the advanced
options.
+.SS Creating Client ID for OneDrive Business
+.PP
+The steps for OneDrive Personal may or may not work for OneDrive
+Business, depending on the security settings of the organization.
+A common error is that the publisher of the App is not verified.
+.PP
+You may try to verify your
+account (https://docs.microsoft.com/en-us/azure/active-directory/develop/publisher-verification-overview),
+or try to limit the App to your organization only, as shown below.
+.IP "1." 3
+Make sure to create the App with your business account.
+.IP "2." 3
+Follow the steps above to create an App.
+However, we need a different account type here:
+\f[C]Accounts in this organizational directory only (*** - Single tenant)\f[R].
+Note that you can also change the account type after creating the
+App.
+.IP "3." 3
+Find the tenant
+ID (https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-how-to-find-tenant)
+of your organization.
+.IP "4." 3
+In the rclone config, set \f[C]auth_url\f[R] to
+\f[C]https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/authorize\f[R].
+.IP "5." 3
+In the rclone config, set \f[C]token_url\f[R] to
+\f[C]https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token\f[R].
+.PP
+Note: If you have a special region, you may need a different host in
+steps 4 and 5.
+Here are some
+hints (https://github.com/rclone/rclone/blob/bc23bf11db1c78c6ebbf8ea538fbebf7058b4176/backend/onedrive/onedrive.go#L86).
.SS Modification time and hashes
.PP
OneDrive allows modification times to be set on objects accurate to 1
.SS Standard options .PP -Here are the standard options specific to onedrive (Microsoft OneDrive). +Here are the Standard options specific to onedrive (Microsoft OneDrive). .SS --onedrive-client-id .PP OAuth Client Id. @@ -40874,7 +45560,7 @@ Azure and Office 365 operated by 21Vianet in China .RE .SS Advanced options .PP -Here are the advanced options specific to onedrive (Microsoft OneDrive). +Here are the Advanced options specific to onedrive (Microsoft OneDrive). .SS --onedrive-token .PP OAuth Access Token as a JSON blob. @@ -40982,6 +45668,50 @@ Env Var: RCLONE_ONEDRIVE_ROOT_FOLDER_ID Type: string .IP \[bu] 2 Required: false +.SS --onedrive-access-scopes +.PP +Set scopes to be requested by rclone. +.PP +Choose or manually enter a custom space separated list with all scopes, +that rclone should request. +.PP +Properties: +.IP \[bu] 2 +Config: access_scopes +.IP \[bu] 2 +Env Var: RCLONE_ONEDRIVE_ACCESS_SCOPES +.IP \[bu] 2 +Type: SpaceSepList +.IP \[bu] 2 +Default: Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All +Sites.Read.All offline_access +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All +Sites.Read.All offline_access\[dq] +.RS 2 +.IP \[bu] 2 +Read and write access to all resources +.RE +.IP \[bu] 2 +\[dq]Files.Read Files.Read.All Sites.Read.All offline_access\[dq] +.RS 2 +.IP \[bu] 2 +Read only access to all resources +.RE +.IP \[bu] 2 +\[dq]Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All +offline_access\[dq] +.RS 2 +.IP \[bu] 2 +Read and write access to all resources, without the ability to browse +SharePoint sites. +.IP \[bu] 2 +Same as if disable_site_permission was set to true +.RE +.RE .SS --onedrive-disable-site-permission .PP Disable the request for Sites.Read.All permission. 
@@ -41370,7 +46100,7 @@ known (https://github.com/OneDrive/onedrive-api-docs/issues/1068) issue that Sharepoint (not OneDrive or OneDrive for Business) may return \[dq]item not found\[dq] errors when users try to replace or delete uploaded files; this seems to mainly affect Office files (.docx, .xlsx, -etc.). +etc.) and web files (.html, .aspx, etc.). As a workaround, you may use the \f[C]--backup-dir \f[R] command line argument so rclone moves the files to be replaced/deleted into a given backup directory (instead of directly replacing/deleting @@ -41399,7 +46129,7 @@ your account. You can\[aq]t do much about it, maybe write an email to your admins. .PP However, there are other ways to interact with your OneDrive account. -Have a look at the webdav backend: https://rclone.org/webdav/#sharepoint +Have a look at the WebDAV backend: https://rclone.org/webdav/#sharepoint .SS invalid_grant (AADSTS50076) .IP .nf @@ -41653,7 +46383,7 @@ replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t be used in JSON strings. .SS Standard options .PP -Here are the standard options specific to opendrive (OpenDrive). +Here are the Standard options specific to opendrive (OpenDrive). .SS --opendrive-username .PP Username. @@ -41685,7 +46415,7 @@ Type: string Required: true .SS Advanced options .PP -Here are the advanced options specific to opendrive (OpenDrive). +Here are the Advanced options specific to opendrive (OpenDrive). .SS --opendrive-encoding .PP The encoding for the backend. @@ -41739,7 +46469,7 @@ rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member of an rclone union remote. 
.PP See List of backends that do not support rclone -about (https://rclone.org/overview/#optional-features) See rclone +about (https://rclone.org/overview/#optional-features) and rclone about (https://rclone.org/commands/rclone_about/) .SH QingStor .PP @@ -41918,7 +46648,7 @@ replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t be used in JSON strings. .SS Standard options .PP -Here are the standard options specific to qingstor (QingCloud Object +Here are the Standard options specific to qingstor (QingCloud Object Storage). .SS --qingstor-env-auth .PP @@ -42042,7 +46772,7 @@ Needs location constraint gd2a. .RE .SS Advanced options .PP -Here are the advanced options specific to qingstor (QingCloud Object +Here are the Advanced options specific to qingstor (QingCloud Object Storage). .SS --qingstor-connection-retries .PP @@ -42142,7 +46872,7 @@ rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member of an rclone union remote. .PP See List of backends that do not support rclone -about (https://rclone.org/overview/#optional-features) See rclone +about (https://rclone.org/overview/#optional-features) and rclone about (https://rclone.org/commands/rclone_about/) .SH Sia .PP @@ -42298,7 +47028,7 @@ rclone copy /home/source mySia:backup .fi .SS Standard options .PP -Here are the standard options specific to sia (Sia Decentralized Cloud). +Here are the Standard options specific to sia (Sia Decentralized Cloud). .SS --sia-api-url .PP Sia daemon API URL, like http://sia.daemon.host:9980. @@ -42337,7 +47067,7 @@ Type: string Required: false .SS Advanced options .PP -Here are the advanced options specific to sia (Sia Decentralized Cloud). +Here are the Advanced options specific to sia (Sia Decentralized Cloud). .SS --sia-user-agent .PP Siad User Agent @@ -42682,7 +47412,7 @@ replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t be used in JSON strings. 
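As the option listings below show, each backend setting also has an environment variable form. A sketch of driving the swift backend that way, assuming a configured remote named `swift:` and valid OpenStack credentials already present in the environment:

```shell
# RCLONE_SWIFT_ENV_AUTH=true is equivalent to env_auth = true in the
# config file: rclone then reads the standard OS_* OpenStack variables.
export RCLONE_SWIFT_ENV_AUTH=true
rclone lsd swift:
```

This is convenient in CI jobs or containers where you want to avoid writing credentials into a config file.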
.SS Standard options .PP -Here are the standard options specific to swift (OpenStack Swift +Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)). .SS --swift-env-auth .PP @@ -43032,7 +47762,7 @@ OVH Public Cloud Archive .RE .SS Advanced options .PP -Here are the advanced options specific to swift (OpenStack Swift +Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)). .SS --swift-leave-parts-on-error .PP @@ -43294,6 +48024,14 @@ be used in JSON strings. Deleted files will be moved to the trash. Your subscription level will determine how long items stay in the trash. \f[C]rclone cleanup\f[R] can be used to empty the trash. +.SS Emptying the trash +.PP +Due to an API limitation, the \f[C]rclone cleanup\f[R] command will only +work if you set your username and password in the advanced options for +this backend. +Since we generally want to avoid storing user passwords in the rclone +config file, we advise you to only set this up if you need the +\f[C]rclone cleanup\f[R] command to work. .SS Root folder ID .PP You can set the \f[C]root_folder_id\f[R] for rclone. @@ -43317,7 +48055,7 @@ in the browser, then you use \f[C]5xxxxxxxx8\f[R] as the \f[C]root_folder_id\f[R] in the config. .SS Standard options .PP -Here are the standard options specific to pcloud (Pcloud). +Here are the Standard options specific to pcloud (Pcloud). .SS --pcloud-client-id .PP OAuth Client Id. @@ -43350,7 +48088,7 @@ Type: string Required: false .SS Advanced options .PP -Here are the advanced options specific to pcloud (Pcloud). +Here are the Advanced options specific to pcloud (Pcloud). .SS --pcloud-token .PP OAuth Access Token as a JSON blob. @@ -43456,6 +48194,40 @@ Original/US region EU region .RE .RE +.SS --pcloud-username +.PP +Your pcloud username. +.PP +This is only required when you want to use the cleanup command. 
+Due to a bug in the pcloud API the required API does not support OAuth +authentication so we have to rely on user password authentication for +it. +.PP +Properties: +.IP \[bu] 2 +Config: username +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_USERNAME +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pcloud-password +.PP +Your pcloud password. +.PP +\f[B]NB\f[R] Input to this must be obscured - see rclone +obscure (https://rclone.org/commands/rclone_obscure/). +.PP +Properties: +.IP \[bu] 2 +Config: password +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_PASSWORD +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SH premiumize.me .PP Paths are specified as \f[C]remote:path\f[R] @@ -43598,7 +48370,7 @@ replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t be used in JSON strings. .SS Standard options .PP -Here are the standard options specific to premiumizeme (premiumize.me). +Here are the Standard options specific to premiumizeme (premiumize.me). .SS --premiumizeme-api-key .PP API Key. @@ -43616,7 +48388,7 @@ Type: string Required: false .SS Advanced options .PP -Here are the advanced options specific to premiumizeme (premiumize.me). +Here are the Advanced options specific to premiumizeme (premiumize.me). .SS --premiumizeme-encoding .PP The encoding for the backend. @@ -43786,7 +48558,7 @@ replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t be used in JSON strings. .SS Advanced options .PP -Here are the advanced options specific to putio (Put.io). +Here are the Advanced options specific to putio (Put.io). .SS --putio-encoding .PP The encoding for the backend. @@ -43803,6 +48575,16 @@ Env Var: RCLONE_PUTIO_ENCODING Type: MultiEncoder .IP \[bu] 2 Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot +.SS Limitations +.PP +put.io has rate limiting. +When you hit a limit, rclone automatically retries after waiting the +amount of time requested by the server. 
+.PP
+If you want to avoid ever hitting these limits, you may use the
+\f[C]--tpslimit\f[R] flag with a low number.
+Note that the imposed limits may be different for different operations,
+and may change over time.
 .SH Seafile
 .PP
 This is a backend for the Seafile (https://www.seafile.com/) storage
@@ -44147,7 +48929,7 @@ Versions between 6.0 and 6.3 haven\[aq]t been tested and might not work
 properly.
 .SS Standard options
 .PP
-Here are the standard options specific to seafile (seafile).
+Here are the Standard options specific to seafile (seafile).
 .SS --seafile-url
 .PP
 URL of seafile host to connect to.
@@ -44262,7 +49044,7 @@ Type: string
 Required: false
 .SS Advanced options
 .PP
-Here are the advanced options specific to seafile (seafile).
+Here are the Advanced options specific to seafile (seafile).
 .SS --seafile-create-library
 .PP
 Should rclone create a library if it doesn\[aq]t exist.
@@ -44299,7 +49081,7 @@ Protocol (https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol).
 .PP
 The SFTP backend can be used with a number of different providers:
 .IP \[bu] 2
-C14
+Hetzner Storage Box
 .IP \[bu] 2
 rsync.net
 .PP
@@ -44319,7 +49101,11 @@ remote machine (i.e.
 .PP
 Note that some SFTP servers will need the leading / - Synology is a
 good example of this.
-rsync.net, on the other hand, requires users to OMIT the leading /.
+rsync.net and Hetzner, on the other hand, require users to OMIT the
+leading /.
+.PP
+Note that by default rclone will try to execute shell commands on the
+server, see shell access considerations.
 .SS Configuration
 .PP
 Here is an example of making an SFTP configuration.
@@ -44344,7 +49130,7 @@ name> remote
 Type of storage to configure.
 Choose a number from below, or type in your own value
 [snip]
-XX / SSH/SFTP Connection
+XX / SSH/SFTP
 \[rs] \[dq]sftp\[dq]
 [snip]
 Storage> sftp
@@ -44618,6 +49404,140 @@ eval \[ga]ssh-agent -k\[ga]
 .fi
 .PP
 These commands can be used in scripts of course.
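The configuration walk-through above ends up as an entry in rclone.conf. As a minimal sketch (the remote name, host, user and key path are hypothetical placeholders, not values from this manual), an SFTP remote might look like:

```ini
# Hypothetical example - replace host, user and key_file with your own values
[remote]
type = sftp
host = example.com
user = sftpuser
port = 22
key_file = ~/.ssh/id_rsa
```

If `key_file` is omitted, rclone falls back to the other authentication methods described above, such as ssh-agent or the `pass` option.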
+.SS Shell access
+.PP
+Some functionality of the SFTP backend relies on remote shell access,
+and the ability to execute commands.
+This includes checksum, and in some cases also about.
+The shell commands that must be executed may be different on different
+types of shells, and the quoting/escaping of file path arguments
+containing special characters may also differ.
+Rclone therefore needs to know what type of shell it is, and whether
+shell access is available at all.
+.PP
+Most servers run on some version of Unix, and then a basic Unix shell
+can be assumed, without further distinction.
+Windows 10, Server 2019, and later can also run an SSH server, which is
+a port of OpenSSH (see official installation
+guide (https://docs.microsoft.com/en-us/windows-server/administration/openssh/openssh_install_firstuse)).
+On a Windows server the shell handling is different: although it can
+also be set up to use a Unix type shell, e.g.
+Cygwin bash, the default is to use Windows Command Prompt (cmd.exe), and
+PowerShell is a recommended alternative.
+All of these behave differently, which rclone must handle.
+.PP
+Rclone tries to auto-detect what type of shell is used on the server
+the first time you access the SFTP remote.
+If a remote shell session is successfully created, it will look for
+indications that it is CMD or PowerShell, falling back to Unix if
+nothing else is detected.
+If rclone is unable to create a remote shell session at all, then shell
+command execution will be disabled entirely.
+The result is stored in the SFTP remote configuration, in the option
+\f[C]shell_type\f[R], so that the auto-detection only has to be
+performed once.
+If you manually set a value for this option before the first run, the
+auto-detection will be skipped, and if you set a different value later
+it will override the existing one.
+The value \f[C]none\f[R] can be set to avoid any attempts at executing
+shell commands, e.g.
+if this is not allowed on the server.
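For example, to skip the auto-detection described above and prevent any remote command execution, `shell_type` can be pinned in the config. This is a sketch with a hypothetical remote name and host:

```ini
# Hypothetical example - no shell commands will ever be attempted
[remote]
type = sftp
host = example.com
shell_type = none
```

Setting `shell_type = unix` instead would force Unix shell handling without probing the server.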
+.PP
+When the server is rclone serve
+sftp (https://rclone.org/commands/rclone_serve_sftp/), the rclone SFTP
+remote will detect this as a Unix type shell - even if it is running on
+Windows.
+This server does not actually have a shell, but it accepts input
+commands matching the specific ones that the SFTP backend relies on for
+Unix shells, e.g.
+\f[C]md5sum\f[R] and \f[C]df\f[R].
+It also handles the string escape rules used for Unix shells.
+Treating it as a Unix type shell from an SFTP remote will therefore
+always be correct, and support all features.
+.SS Shell access considerations
+.PP
+The shell type auto-detection logic, described above, means that by
+default rclone will try to run a shell command the first time a new sftp
+remote is accessed.
+If you configure an sftp remote without a config file, e.g.
+an on the fly (https://rclone.org/docs/#backend-path-to-dir) remote,
+rclone will have nowhere to store the result, and it will re-run the
+command on every access.
+To avoid this you should explicitly set the \f[C]shell_type\f[R] option
+to the correct value, or to \f[C]none\f[R] if you want to prevent rclone
+from executing any remote shell commands.
+.PP
+It is also important to note that, since the shell type decides how
+quoting and escaping of file paths used as command-line arguments are
+performed, configuring the wrong shell type may leave you exposed to
+command injection exploits.
+Make sure to confirm the auto-detected shell type, or explicitly set
+the shell type you know is correct, or disable shell access until you
+are sure it is safe.
+.SS Checksum
+.PP
+SFTP does not natively support checksums (file hash), but rclone is able
+to use checksumming if the same login has shell access, and can execute
+remote commands.
+If there is a command that can calculate compatible checksums on the
+remote system, rclone can then be configured to execute this whenever a
+checksum is needed, and read back the results.
+Currently MD5 and SHA-1 are supported.
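As a sketch of what a "compatible checksum" means here: rclone expects the style of output produced by the GNU coreutils checksum tools, i.e. a hex digest followed by the file name. Assuming a Unix shell where `md5sum` is available (the file path below is just an illustration):

```shell
# Create a small test file and print its checksum in the format rclone parses
printf 'hello\n' > /tmp/rclone-demo.txt
md5sum /tmp/rclone-demo.txt
# b1946ac92492d2347c6235b4d2611184  /tmp/rclone-demo.txt
```

Any tool configured via the options described below must emit output in this same shape for rclone to read the hash back.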
+.PP
+Normally this requires an external utility to be available on the
+server.
+By default rclone will try the commands \f[C]md5sum\f[R], \f[C]md5\f[R]
+and \f[C]rclone md5sum\f[R] for MD5 checksums, and the first one found
+usable will be picked.
+The same goes for the \f[C]sha1sum\f[R], \f[C]sha1\f[R] and
+\f[C]rclone sha1sum\f[R] commands for SHA-1 checksums.
+These utilities normally need to be in the remote\[aq]s PATH to be
+found.
+.PP
+In some cases the shell itself is capable of calculating checksums.
+PowerShell is an example of such a shell.
+If rclone detects that the remote shell is PowerShell, which means it
+most probably is a Windows OpenSSH server, rclone will use a predefined
+script block to produce the checksums when no external checksum commands
+are found (see shell access).
+This assumes PowerShell version 4.0 or newer.
+.PP
+The options \f[C]md5sum_command\f[R] and \f[C]sha1sum_command\f[R] can
+be used to customize the command to be executed for calculation of
+checksums.
+You can for example set a specific path to where the md5sum and sha1sum
+executables are located, or use them to specify some other tools that
+print checksums in a compatible format.
+The value can include command-line arguments, or even shell script
+blocks as with PowerShell.
+Rclone has subcommands
+md5sum (https://rclone.org/commands/rclone_md5sum/) and
+sha1sum (https://rclone.org/commands/rclone_sha1sum/) that use a
+compatible format, which means if you have an rclone executable on the
+server it can be used.
+As mentioned above, they will be automatically picked up if found in
+PATH, but if not you can set something like
+\f[C]/path/to/rclone md5sum\f[R] as the value of the option
+\f[C]md5sum_command\f[R] to make sure a specific executable is used.
+.PP
+Remote checksumming is recommended and enabled by default.
+The first time rclone uses an SFTP remote, if the options
+\f[C]md5sum_command\f[R] or \f[C]sha1sum_command\f[R] are not set, it
+will check whether any of the default commands for each of them, as
+described above, can be used.
+The result will be saved in the remote configuration, so the same
+commands will be used next time.
+The value \f[C]none\f[R] will be set if none of the default commands
+could be used for a specific algorithm, and that algorithm will then
+not be supported by the remote.
+.PP
+Disabling the checksumming may be required if you are connecting to SFTP
+servers which are not under your control, and on which the execution of
+remote shell commands is prohibited.
+Set the configuration option \f[C]disable_hashcheck\f[R] to
+\f[C]true\f[R] to disable checksumming entirely, or set
+\f[C]shell_type\f[R] to \f[C]none\f[R] to disable all functionality
+based on remote shell command execution.
 .SS Modified time
 .PP
 Modified times are stored on the server to 1 second precision.
@@ -44630,9 +49550,26 @@ mod_sftp).
 If you are using one of these servers, you can set the option
 \f[C]set_modtime = false\f[R] in your RClone backend configuration to
 disable this behaviour.
+.SS About command
+.PP
+The \f[C]about\f[R] command returns the total space, free space, and
+used space on the remote for the disk of the specified path on the
+remote or, if not set, the disk of the root on the remote.
+.PP
+SFTP usually supports the
+about (https://rclone.org/commands/rclone_about/) command, but it
+depends on the server.
+If the server implements the vendor-specific VFS statistics extension,
+which is normally the case with OpenSSH instances, it will be used.
+If not, but the same login has access to a Unix shell, where the
+\f[C]df\f[R] command is available (e.g.
+in the remote\[aq]s PATH), then this will be used instead.
+If the server shell is PowerShell, probably with a Windows OpenSSH
+server, rclone will use a built-in shell command (see shell access).
+If none of the above is applicable, \f[C]about\f[R] will fail. .SS Standard options .PP -Here are the standard options specific to sftp (SSH/SFTP Connection). +Here are the Standard options specific to sftp (SSH/SFTP). .SS --sftp-host .PP SSH host to connect to. @@ -44850,7 +49787,7 @@ Type: bool Default: false .SS Advanced options .PP -Here are the advanced options specific to sftp (SSH/SFTP Connection). +Here are the Advanced options specific to sftp (SSH/SFTP). .SS --sftp-known-hosts-file .PP Optional path to known_hosts file. @@ -44897,12 +49834,13 @@ Type: bool Default: false .SS --sftp-path-override .PP -Override path used by SSH connection. +Override path used by SSH shell commands. .PP This allows checksum calculation when SFTP and SSH paths are different. This issue affects among others Synology NAS boxes. .PP -Shared folders can be found in directories representing volumes +E.g. +if shared folders can be found in directories representing volumes: .IP .nf \f[C] @@ -44910,7 +49848,8 @@ rclone sync /home/local/directory remote:/directory --sftp-path-override /volume \f[R] .fi .PP -Home directory can be found in a shared folder called \[dq]home\[dq] +E.g. +if home directory can be found in a shared folder called \[dq]home\[dq]: .IP .nf \f[C] @@ -44940,6 +49879,49 @@ Env Var: RCLONE_SFTP_SET_MODTIME Type: bool .IP \[bu] 2 Default: true +.SS --sftp-shell-type +.PP +The type of SSH shell on remote server, if any. +.PP +Leave blank for autodetect. 
+.PP +Properties: +.IP \[bu] 2 +Config: shell_type +.IP \[bu] 2 +Env Var: RCLONE_SFTP_SHELL_TYPE +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]none\[dq] +.RS 2 +.IP \[bu] 2 +No shell access +.RE +.IP \[bu] 2 +\[dq]unix\[dq] +.RS 2 +.IP \[bu] 2 +Unix shell +.RE +.IP \[bu] 2 +\[dq]powershell\[dq] +.RS 2 +.IP \[bu] 2 +PowerShell +.RE +.IP \[bu] 2 +\[dq]cmd\[dq] +.RS 2 +.IP \[bu] 2 +Windows Command Prompt +.RE +.RE .SS --sftp-md5sum-command .PP The command used to read md5 hashes. @@ -45099,28 +50081,92 @@ Env Var: RCLONE_SFTP_IDLE_TIMEOUT Type: Duration .IP \[bu] 2 Default: 1m0s +.SS --sftp-chunk-size +.PP +Upload and download chunk size. +.PP +This controls the maximum packet size used in the SFTP protocol. +The RFC limits this to 32768 bytes (32k), however a lot of servers +support larger sizes and setting it larger will increase transfer speed +dramatically on high latency links. +.PP +Only use a setting higher than 32k if you always connect to the same +server or after sufficiently broad testing. +.PP +For example using the value of 252k with OpenSSH works well with its +maximum packet size of 256k. +.PP +If you get the error \[dq]failed to send packet header: EOF\[dq] when +copying a large file, try lowering this number. +.PP +Properties: +.IP \[bu] 2 +Config: chunk_size +.IP \[bu] 2 +Env Var: RCLONE_SFTP_CHUNK_SIZE +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 32Ki +.SS --sftp-concurrency +.PP +The maximum number of outstanding requests for one file +.PP +This controls the maximum number of outstanding requests for one file. +Increasing it will increase throughput on high latency links at the cost +of using more memory. 
+.PP
+Properties:
+.IP \[bu] 2
+Config: concurrency
+.IP \[bu] 2
+Env Var: RCLONE_SFTP_CONCURRENCY
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 64
+.SS --sftp-set-env
+.PP
+Environment variables to pass to sftp and commands.
+.PP
+Set environment variables in the form:
+.IP
+.nf
+\f[C]
+VAR=value
+\f[R]
+.fi
+.PP
+to be passed to the sftp client and to any commands run (eg md5sum).
+.PP
+Pass multiple variables space separated, eg
+.IP
+.nf
+\f[C]
+VAR1=value VAR2=value
+\f[R]
+.fi
+.PP
+and pass variables containing spaces in quotes, eg
+.IP
+.nf
+\f[C]
+\[dq]VAR3=value with space\[dq] \[dq]VAR4=value with space\[dq] VAR5=nospacehere
+\f[R]
+.fi
+.PP
+Properties:
+.IP \[bu] 2
+Config: set_env
+.IP \[bu] 2
+Env Var: RCLONE_SFTP_SET_ENV
+.IP \[bu] 2
+Type: SpaceSepList
+.IP \[bu] 2
+Default:
 .SS Limitations
 .PP
-SFTP supports checksums if the same login has shell access and
-\f[C]md5sum\f[R] or \f[C]sha1sum\f[R] as well as \f[C]echo\f[R] are in
-the remote\[aq]s PATH.
-This remote checksumming (file hashing) is recommended and enabled by
-default.
-Disabling the checksumming may be required if you are connecting to SFTP
-servers which are not under your control, and to which the execution of
-remote commands is prohibited.
-Set the configuration option \f[C]disable_hashcheck\f[R] to
-\f[C]true\f[R] to disable checksumming.
-.PP
-SFTP also supports \f[C]about\f[R] if the same login has shell access
-and \f[C]df\f[R] are in the remote\[aq]s PATH.
-\f[C]about\f[R] will return the total space, free space, and used space
-on the remote for the disk of the specified path on the remote or, if
-not set, the disk of the root on the remote.
-\f[C]about\f[R] will fail if it does not have shell access or if
-\f[C]df\f[R] is not in the remote\[aq]s PATH.
-.PP
-Note that some SFTP servers (e.g.
+On some SFTP servers (e.g.
 Synology) the paths are different for SSH and SFTP so the hashes
 can\[aq]t be calculated properly.
 For them using \f[C]disable_hashcheck\f[R] is a good idea.
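For such servers, turning the hash check off is a one-line config change. A sketch with a hypothetical remote name and host:

```ini
# Hypothetical example - skip remote checksumming on a Synology-style server
[synology]
type = sftp
host = nas.example.com
disable_hashcheck = true
```

Alternatively, the `path_override` option described above can be used to keep checksums working when only the SSH and SFTP paths differ.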
@@ -45140,22 +50186,22 @@ issue (https://github.com/pkg/sftp/issues/156) is fixed. .PP Note that since SFTP isn\[aq]t HTTP based the following flags don\[aq]t work with it: \f[C]--dump-headers\f[R], \f[C]--dump-bodies\f[R], -\f[C]--dump-auth\f[R] +\f[C]--dump-auth\f[R]. .PP Note that \f[C]--timeout\f[R] and \f[C]--contimeout\f[R] are both supported. -.SS C14 -.PP -C14 is supported through the SFTP backend. -.PP -See C14\[aq]s -documentation (https://www.online.net/en/storage/c14-cold-storage) .SS rsync.net .PP rsync.net is supported through the SFTP backend. .PP See rsync.net\[aq]s documentation of rclone examples (https://www.rsync.net/products/rclone.html). +.SS Hetzner Storage Box +.PP +Hetzner Storage Boxes are supported through the SFTP backend on port 23. +.PP +See Hetzner\[aq]s documentation for +details (https://docs.hetzner.com/robot/storage-box/access/access-ssh-rsync-borg#rclone) .SH Storj .PP Storj (https://storj.io) is an encrypted, secure, and cost-effective @@ -45431,7 +50477,7 @@ y/e/d> y .fi .SS Standard options .PP -Here are the standard options specific to storj (Storj Decentralized +Here are the Standard options specific to storj (Storj Decentralized Cloud Storage). .SS --storj-provider .PP @@ -45741,7 +50787,7 @@ rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member of an rclone union remote. .PP See List of backends that do not support rclone -about (https://rclone.org/overview/#optional-features) See rclone +about (https://rclone.org/overview/#optional-features) and rclone about (https://rclone.org/commands/rclone_about/) .SS Known issues .PP @@ -45904,7 +50950,7 @@ the config parameter \f[C]hard_delete = true\f[R] if you would like files to be deleted straight away. .SS Standard options .PP -Here are the standard options specific to sugarsync (Sugarsync). +Here are the Standard options specific to sugarsync (Sugarsync). .SS --sugarsync-app-id .PP Sugarsync App ID. 
@@ -45966,7 +51012,7 @@ Type: bool Default: false .SS Advanced options .PP -Here are the advanced options specific to sugarsync (Sugarsync). +Here are the Advanced options specific to sugarsync (Sugarsync). .SS --sugarsync-refresh-token .PP Sugarsync refresh token. @@ -46081,7 +51127,7 @@ rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member of an rclone union remote. .PP See List of backends that do not support rclone -about (https://rclone.org/overview/#optional-features) See rclone +about (https://rclone.org/overview/#optional-features) and rclone about (https://rclone.org/commands/rclone_about/) .SH Tardigrade .PP @@ -46229,7 +51275,7 @@ replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t be used in XML strings. .SS Standard options .PP -Here are the standard options specific to uptobox (Uptobox). +Here are the Standard options specific to uptobox (Uptobox). .SS --uptobox-access-token .PP Your access token. @@ -46247,7 +51293,7 @@ Type: string Required: false .SS Advanced options .PP -Here are the advanced options specific to uptobox (Uptobox). +Here are the Advanced options specific to uptobox (Uptobox). .SS --uptobox-encoding .PP The encoding for the backend. @@ -46645,7 +51691,7 @@ T} .TE .SS Standard options .PP -Here are the standard options specific to union (Union merges the +Here are the Standard options specific to union (Union merges the contents of several upstream fs). .SS --union-upstreams .PP @@ -46717,6 +51763,31 @@ Env Var: RCLONE_UNION_CACHE_TIME Type: int .IP \[bu] 2 Default: 120 +.SS Advanced options +.PP +Here are the Advanced options specific to union (Union merges the +contents of several upstream fs). +.SS --union-min-free-space +.PP +Minimum viable free space for lfs/eplfs policies. +.PP +If a remote has less than this much free space then it won\[aq]t be +considered for use in lfs or eplfs policies. 
+.PP +Properties: +.IP \[bu] 2 +Config: min_free_space +.IP \[bu] 2 +Env Var: RCLONE_UNION_MIN_FREE_SPACE +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 1Gi +.SS Metadata +.PP +Any metadata supported by the underlying remote is read and written. +.PP +See the metadata (https://rclone.org/docs/#metadata) docs for more info. .SH WebDAV .PP Paths are specified as \f[C]remote:path\f[R] @@ -46752,7 +51823,7 @@ name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] -XX / Webdav +XX / WebDAV \[rs] \[dq]webdav\[dq] [snip] Storage> webdav @@ -46761,7 +51832,7 @@ Choose a number from below, or type in your own value 1 / Connect to example.com \[rs] \[dq]https://example.com\[dq] url> https://example.com/remote.php/webdav/ -Name of the Webdav site/service/software you are using +Name of the WebDAV site/service/software you are using Choose a number from below, or type in your own value 1 / Nextcloud \[rs] \[dq]nextcloud\[dq] @@ -46842,7 +51913,7 @@ appear on all objects, or only on objects which had a hash uploaded with them. .SS Standard options .PP -Here are the standard options specific to webdav (Webdav). +Here are the Standard options specific to webdav (WebDAV). .SS --webdav-url .PP URL of http host to connect to. @@ -46861,7 +51932,7 @@ Type: string Required: true .SS --webdav-vendor .PP -Name of the Webdav site/service/software you are using. +Name of the WebDAV site/service/software you are using. .PP Properties: .IP \[bu] 2 @@ -46954,7 +52025,7 @@ Type: string Required: false .SS Advanced options .PP -Here are the advanced options specific to webdav (Webdav). +Here are the Advanced options specific to webdav (WebDAV). .SS --webdav-bearer-token-command .PP Command to run to get a bearer token. @@ -47348,7 +52419,7 @@ replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t be used in JSON strings. .SS Standard options .PP -Here are the standard options specific to yandex (Yandex Disk). 
+Here are the Standard options specific to yandex (Yandex Disk). .SS --yandex-client-id .PP OAuth Client Id. @@ -47381,7 +52452,7 @@ Type: string Required: false .SS Advanced options .PP -Here are the advanced options specific to yandex (Yandex Disk). +Here are the Advanced options specific to yandex (Yandex Disk). .SS --yandex-token .PP OAuth Access Token as a JSON blob. @@ -47623,7 +52694,7 @@ In addition most Unicode full-width characters are not supported at all and will be removed from filenames during upload. .SS Standard options .PP -Here are the standard options specific to zoho (Zoho). +Here are the Standard options specific to zoho (Zoho). .SS --zoho-client-id .PP OAuth Client Id. @@ -47693,6 +52764,18 @@ Europe India .RE .IP \[bu] 2 +\[dq]jp\[dq] +.RS 2 +.IP \[bu] 2 +Japan +.RE +.IP \[bu] 2 +\[dq]com.cn\[dq] +.RS 2 +.IP \[bu] 2 +China +.RE +.IP \[bu] 2 \[dq]com.au\[dq] .RS 2 .IP \[bu] 2 @@ -47701,7 +52784,7 @@ Australia .RE .SS Advanced options .PP -Here are the advanced options specific to zoho (Zoho). +Here are the Advanced options specific to zoho (Zoho). .SS --zoho-token .PP OAuth Access Token as a JSON blob. @@ -47761,6 +52844,21 @@ Env Var: RCLONE_ZOHO_ENCODING Type: MultiEncoder .IP \[bu] 2 Default: Del,Ctl,InvalidUtf8 +.SS Setting up your own client_id +.PP +For Zoho we advise you to set up your own client_id. +To do so you have to complete the following steps. +.IP "1." 3 +Log in to the Zoho API Console (https://api-console.zoho.com) +.IP "2." 3 +Create a new client of type \[dq]Server-based Application\[dq]. +The name and website don\[aq]t matter, but you must add the redirect URL +\f[C]http://localhost:53682/\f[R]. +.IP "3." 3 +Once the client is created, you can go to the settings tab and enable it +in other regions. +.PP +The client id and client secret can now be used with rclone. .SH Local Filesystem .PP Local paths are specified as normal filesystem paths, e.g. @@ -48440,7 +53538,7 @@ On systems where it isn\[aq]t supported (e.g. 
Windows) it will be ignored. .SS Advanced options .PP -Here are the advanced options specific to local (Local Disk). +Here are the Advanced options specific to local (Local Disk). .SS --local-nounc .PP Disable UNC (long path names) conversion on Windows. @@ -48451,9 +53549,9 @@ Config: nounc .IP \[bu] 2 Env Var: RCLONE_LOCAL_NOUNC .IP \[bu] 2 -Type: string +Type: bool .IP \[bu] 2 -Required: false +Default: false .IP \[bu] 2 Examples: .RS 2 @@ -48720,6 +53818,115 @@ Env Var: RCLONE_LOCAL_ENCODING Type: MultiEncoder .IP \[bu] 2 Default: Slash,Dot +.SS Metadata +.PP +Depending on which OS is in use the local backend may return only some +of the system metadata. +Setting system metadata is supported on all OSes but setting user +metadata is only supported on linux, freebsd, netbsd, macOS and Solaris. +It is \f[B]not\f[R] supported on Windows yet (see +pkg/attrs#47 (https://github.com/pkg/xattr/issues/47)). +.PP +User metadata is stored as extended attributes (which may not be +supported by all file systems) under the \[dq]user.*\[dq] prefix. +.PP +Here are the possible system metadata items for the local backend. +.PP +.TS +tab(@); +lw(11.1n) lw(11.1n) lw(11.1n) lw(16.6n) lw(20.3n). 
+T{ +Name +T}@T{ +Help +T}@T{ +Type +T}@T{ +Example +T}@T{ +Read Only +T} +_ +T{ +atime +T}@T{ +Time of last access +T}@T{ +RFC 3339 +T}@T{ +2006-01-02T15:04:05.999999999Z07:00 +T}@T{ +N +T} +T{ +btime +T}@T{ +Time of file birth (creation) +T}@T{ +RFC 3339 +T}@T{ +2006-01-02T15:04:05.999999999Z07:00 +T}@T{ +N +T} +T{ +gid +T}@T{ +Group ID of owner +T}@T{ +decimal number +T}@T{ +500 +T}@T{ +N +T} +T{ +mode +T}@T{ +File type and mode +T}@T{ +octal, unix style +T}@T{ +0100664 +T}@T{ +N +T} +T{ +mtime +T}@T{ +Time of last modification +T}@T{ +RFC 3339 +T}@T{ +2006-01-02T15:04:05.999999999Z07:00 +T}@T{ +N +T} +T{ +rdev +T}@T{ +Device ID (if special file) +T}@T{ +hexadecimal +T}@T{ +1abc +T}@T{ +N +T} +T{ +uid +T}@T{ +User ID of owner +T}@T{ +decimal number +T}@T{ +500 +T}@T{ +N +T} +.TE +.PP +See the metadata (https://rclone.org/docs/#metadata) docs for more info. .SS Backend commands .PP Here are the commands specific to the local backend. @@ -48734,9 +53941,8 @@ rclone backend COMMAND remote: .PP The help below will explain what arguments each command takes. .PP -See the \[dq]rclone backend\[dq] -command (https://rclone.org/commands/rclone_backend/) for more info on -how to pass options and arguments. +See the backend (https://rclone.org/commands/rclone_backend/) command +for more info on how to pass options and arguments. .PP These can be run on a running backend using the rc command backend/command (https://rclone.org/rc/#backend-command). 
@@ -48759,6 +53965,568 @@ Options: .IP \[bu] 2 \[dq]error\[dq]: return an error based on option value .SH Changelog +.SS v1.59.0 - 2022-07-09 +.PP +See commits (https://github.com/rclone/rclone/compare/v1.58.0...v1.59.0) +.IP \[bu] 2 +New backends +.RS 2 +.IP \[bu] 2 +Combine multiple remotes in one directory tree (Nick Craig-Wood) +.IP \[bu] 2 +Hidrive (https://rclone.org/hidrive/) (Ovidiu Victor Tatar) +.IP \[bu] 2 +Internet Archive (https://rclone.org/internetarchive/) (Lesmiscore +(Naoya Ozaki)) +.IP \[bu] 2 +New S3 providers +.RS 2 +.IP \[bu] 2 +ArvanCloud AOS (https://rclone.org/s3/#arvan-cloud) (ehsantdy) +.IP \[bu] 2 +Cloudflare R2 (https://rclone.org/s3/#cloudflare-r2) (Nick Craig-Wood) +.IP \[bu] 2 +Huawei OBS (https://rclone.org/s3/#huawei-obs) (m00594701) +.IP \[bu] 2 +IDrive e2 (https://rclone.org/s3/#idrive-e2) (vyloy) +.RE +.RE +.IP \[bu] 2 +New commands +.RS 2 +.IP \[bu] 2 +test makefile (https://rclone.org/commands/rclone_test_makefile/): +Create a single file for testing (Nick Craig-Wood) +.RE +.IP \[bu] 2 +New Features +.RS 2 +.IP \[bu] 2 +Metadata framework (https://rclone.org/docs/#metadata) to read and write +system and user metadata on backends (Nick Craig-Wood) +.RS 2 +.IP \[bu] 2 +Implemented initially for \f[C]local\f[R], \f[C]s3\f[R] and +\f[C]internetarchive\f[R] backends +.IP \[bu] 2 +\f[C]--metadata\f[R]/\f[C]-M\f[R] flag to control whether metadata is +copied +.IP \[bu] 2 +\f[C]--metadata-set\f[R] flag to specify metadata for uploads +.IP \[bu] 2 +Thanks to Manz Solutions (https://manz-solutions.at/) for sponsoring +this work. 
+.RE +.IP \[bu] 2 +build +.RS 2 +.IP \[bu] 2 +Update to go1.18 and make go1.16 the minimum required version (Nick +Craig-Wood) +.IP \[bu] 2 +Update android go build to 1.18.x and NDK to 23.1.7779620 (Nick +Craig-Wood) +.IP \[bu] 2 +All windows binaries now no longer CGO (Nick Craig-Wood) +.IP \[bu] 2 +Add \f[C]linux/arm/v6\f[R] to docker images (Nick Craig-Wood) +.IP \[bu] 2 +A huge number of fixes found with staticcheck (https://staticcheck.io/) +(albertony) +.IP \[bu] 2 +Configurable version suffix independent of version number (albertony) +.RE +.IP \[bu] 2 +check: Implement \f[C]--no-traverse\f[R] and +\f[C]--no-unicode-normalization\f[R] (Nick Craig-Wood) +.IP \[bu] 2 +config: Readability improvements (albertony) +.IP \[bu] 2 +copyurl: Add \f[C]--header-filename\f[R] to honor the HTTP header +filename directive (J-P Treen) +.IP \[bu] 2 +filter: Allow multiple \f[C]--exclude-if-present\f[R] flags (albertony) +.IP \[bu] 2 +fshttp: Add \f[C]--disable-http-keep-alives\f[R] to disable HTTP Keep +Alives (Nick Craig-Wood) +.IP \[bu] 2 +install.sh +.RS 2 +.IP \[bu] 2 +Set the modes on the files and/or directories on macOS (Michael C +Tiernan - MIT-Research Computing Project) +.IP \[bu] 2 +Pre verify sudo authorization \f[C]-v\f[R] before calling curl. 
+(Michael C Tiernan - MIT-Research Computing Project) +.RE +.IP \[bu] 2 +lib/encoder: Add Semicolon encoding (Nick Craig-Wood) +.IP \[bu] 2 +lsf: Add metadata support with \f[C]M\f[R] flag (Nick Craig-Wood) +.IP \[bu] 2 +lsjson: Add \f[C]--metadata\f[R]/\f[C]-M\f[R] flag (Nick Craig-Wood) +.IP \[bu] 2 +ncdu +.RS 2 +.IP \[bu] 2 +Implement multi selection (CrossR) +.IP \[bu] 2 +Replace termbox with tcell\[aq]s termbox wrapper (eNV25) +.IP \[bu] 2 +Display correct path in delete confirmation dialog (Roberto Ricci) +.RE +.IP \[bu] 2 +operations +.RS 2 +.IP \[bu] 2 +Speed up hash checking by aborting the other hash if first returns +nothing (Nick Craig-Wood) +.IP \[bu] 2 +Use correct src/dst in some log messages (zzr93) +.RE +.IP \[bu] 2 +rcat: Check checksums by default like copy does (Nick Craig-Wood) +.IP \[bu] 2 +selfupdate: Replace deprecated \f[C]x/crypto/openpgp\f[R] package with +\f[C]ProtonMail/go-crypto\f[R] (albertony) +.IP \[bu] 2 +serve ftp: Check \f[C]--passive-port\f[R] arguments are correct (Nick +Craig-Wood) +.IP \[bu] 2 +size: Warn about inaccurate results when objects with unknown size +(albertony) +.IP \[bu] 2 +sync: Overlap check is now filter-sensitive so \f[C]--backup-dir\f[R] +can be in the root provided it is filtered (Nick) +.IP \[bu] 2 +test info: Check file name lengths using 1,2,3,4 byte unicode characters +(Nick Craig-Wood) +.IP \[bu] 2 +test makefile(s): \f[C]--sparse\f[R], \f[C]--zero\f[R], +\f[C]--pattern\f[R], \f[C]--ascii\f[R], \f[C]--chargen\f[R] flags to +control file contents (Nick Craig-Wood) +.IP \[bu] 2 +Make sure we call the \f[C]Shutdown\f[R] method on backends (Martin +Czygan) +.RE +.IP \[bu] 2 +Bug Fixes +.RS 2 +.IP \[bu] 2 +accounting: Fix unknown length file transfers counting 3 transfers each +(buda) +.IP \[bu] 2 +ncdu: Fix issue where dir size is summed when file sizes are -1 +(albertony) +.IP \[bu] 2 +sync/copy/move +.RS 2 +.IP \[bu] 2 +Fix \f[C]--fast-list\f[R] \f[C]--create-empty-src-dirs\f[R] and +\f[C]--exclude\f[R] 
(Nick Craig-Wood) +.IP \[bu] 2 +Fix \f[C]--max-duration\f[R] and \f[C]--cutoff-mode soft\f[R] (Nick +Craig-Wood) +.RE +.IP \[bu] 2 +Fix fs cache unpin (Martin Czygan) +.IP \[bu] 2 +Set proper exit code for errors that are not low-level retried (e.g. +size/timestamp changing) (albertony) +.RE +.IP \[bu] 2 +Mount +.RS 2 +.IP \[bu] 2 +Support \f[C]windows/arm64\f[R] (may still be problems - see +#5828 (https://github.com/rclone/rclone/issues/5828)) (Nick Craig-Wood) +.IP \[bu] 2 +Log IO errors at ERROR level (Nick Craig-Wood) +.IP \[bu] 2 +Ignore \f[C]_netdev\f[R] mount argument (Hugal31) +.RE +.IP \[bu] 2 +VFS +.RS 2 +.IP \[bu] 2 +Add \f[C]--vfs-fast-fingerprint\f[R] for less accurate but faster +fingerprints (Nick Craig-Wood) +.IP \[bu] 2 +Add \f[C]--vfs-disk-space-total-size\f[R] option to manually set the +total disk space (Claudio Maradonna) +.IP \[bu] 2 +vfscache: Fix fatal error: sync: unlock of unlocked mutex error (Nick +Craig-Wood) +.RE +.IP \[bu] 2 +Local +.RS 2 +.IP \[bu] 2 +Fix parsing of \f[C]--local-nounc\f[R] flag (Nick Craig-Wood) +.IP \[bu] 2 +Add Metadata support (Nick Craig-Wood) +.RE +.IP \[bu] 2 +Crypt +.RS 2 +.IP \[bu] 2 +Support metadata (Nick Craig-Wood) +.RE +.IP \[bu] 2 +Azure Blob +.RS 2 +.IP \[bu] 2 +Calculate Chunksize/blocksize to stay below maxUploadParts (Leroy van +Logchem) +.IP \[bu] 2 +Use chunksize lib to determine chunksize dynamically (Derek Battams) +.IP \[bu] 2 +Case insensitive access tier (Rob Pickerill) +.IP \[bu] 2 +Allow remote emulator (azurite) (Lorenzo Maiorfi) +.RE +.IP \[bu] 2 +B2 +.RS 2 +.IP \[bu] 2 +Add \f[C]--b2-version-at\f[R] flag to show file versions at time +specified (SwazRGB) +.IP \[bu] 2 +Use chunksize lib to determine chunksize dynamically (Derek Battams) +.RE +.IP \[bu] 2 +Chunker +.RS 2 +.IP \[bu] 2 +Mark as not supporting metadata (Nick Craig-Wood) +.RE +.IP \[bu] 2 +Compress +.RS 2 +.IP \[bu] 2 +Support metadata (Nick Craig-Wood) +.RE +.IP \[bu] 2 +Drive +.RS 2 +.IP \[bu] 2 +Make \f[C]backend config -o 
config\f[R] add a combined +\f[C]AllDrives:\f[R] remote (Nick Craig-Wood) +.IP \[bu] 2 +Make \f[C]--drive-shared-with-me\f[R] work with shared drives (Nick +Craig-Wood) +.IP \[bu] 2 +Add \f[C]--drive-resource-key\f[R] for accessing link-shared files (Nick +Craig-Wood) +.IP \[bu] 2 +Add backend commands \f[C]exportformats\f[R] and \f[C]importformats\f[R] +for debugging (Nick Craig-Wood) +.IP \[bu] 2 +Fix 404 errors on copy/server side copy objects from public folder (Nick +Craig-Wood) +.IP \[bu] 2 +Update Internal OAuth consent screen docs (Phil Shackleton) +.IP \[bu] 2 +Moved \f[C]root_folder_id\f[R] to advanced section (Abhiraj) +.RE +.IP \[bu] 2 +Dropbox +.RS 2 +.IP \[bu] 2 +Migrate from deprecated API (m8rge) +.IP \[bu] 2 +Add logs to show when poll interval limits are exceeded (Nick +Craig-Wood) +.IP \[bu] 2 +Fix nil pointer exception on dropbox impersonate user not found (Nick +Craig-Wood) +.RE +.IP \[bu] 2 +Fichier +.RS 2 +.IP \[bu] 2 +Parse API error codes and handle them accordingly (buengese) +.RE +.IP \[bu] 2 +FTP +.RS 2 +.IP \[bu] 2 +Add support for \f[C]disable_utf8\f[R] option (Jason Zheng) +.IP \[bu] 2 +Revert to upstream \f[C]github.com/jlaffaye/ftp\f[R] from our fork (Nick +Craig-Wood) +.RE +.IP \[bu] 2 +Google Cloud Storage +.RS 2 +.IP \[bu] 2 +Add \f[C]--gcs-no-check-bucket\f[R] to minimise transactions and perms +(Nick Gooding) +.IP \[bu] 2 +Add \f[C]--gcs-decompress\f[R] flag to decompress gzip-encoded files +(Nick Craig-Wood) +.RS 2 +.IP \[bu] 2 +by default these will be downloaded compressed (which previously failed) +.RE +.RE +.IP \[bu] 2 +Hasher +.RS 2 +.IP \[bu] 2 +Support metadata (Nick Craig-Wood) +.RE +.IP \[bu] 2 +HTTP +.RS 2 +.IP \[bu] 2 +Fix missing response when using custom auth handler (albertony) +.RE +.IP \[bu] 2 +Jottacloud +.RS 2 +.IP \[bu] 2 +Add support for upload to custom device and mountpoint (albertony) +.IP \[bu] 2 +Always store username in config and use it to avoid initial API request +(albertony) +.IP \[bu] 2 +Fix issue with 
server-side copy when destination is in trash (albertony) +.IP \[bu] 2 +Fix listing output of remote with special characters (albertony) +.RE +.IP \[bu] 2 +Mailru +.RS 2 +.IP \[bu] 2 +Fix timeout by using int instead of time.Duration for keeping number of +seconds (albertony) +.RE +.IP \[bu] 2 +Mega +.RS 2 +.IP \[bu] 2 +Document using MEGAcmd to help with login failures (Art M. +Gallagher) +.RE +.IP \[bu] 2 +Onedrive +.RS 2 +.IP \[bu] 2 +Implement \f[C]--poll-interval\f[R] for onedrive (Hugo Laloge) +.IP \[bu] 2 +Add access scopes option (Sven Gerber) +.RE +.IP \[bu] 2 +Opendrive +.RS 2 +.IP \[bu] 2 +Resolve lag and truncate bugs (Scott Grimes) +.RE +.IP \[bu] 2 +Pcloud +.RS 2 +.IP \[bu] 2 +Fix about with no free space left (buengese) +.IP \[bu] 2 +Fix cleanup (buengese) +.RE +.IP \[bu] 2 +S3 +.RS 2 +.IP \[bu] 2 +Use PUT Object instead of presigned URLs to upload single part objects +(Nick Craig-Wood) +.IP \[bu] 2 +Backend restore command to skip non-GLACIER objects (Vincent Murphy) +.IP \[bu] 2 +Use chunksize lib to determine chunksize dynamically (Derek Battams) +.IP \[bu] 2 +Retry RequestTimeout errors (Nick Craig-Wood) +.IP \[bu] 2 +Implement reading and writing of metadata (Nick Craig-Wood) +.RE +.IP \[bu] 2 +SFTP +.RS 2 +.IP \[bu] 2 +Add support for about and hashsum on windows server (albertony) +.IP \[bu] 2 +Use vendor-specific VFS statistics extension for about if available +(albertony) +.IP \[bu] 2 +Add \f[C]--sftp-chunk-size\f[R] to control packet sizes for high +latency links (Nick Craig-Wood) +.IP \[bu] 2 +Add \f[C]--sftp-concurrency\f[R] to improve high latency transfers (Nick +Craig-Wood) +.IP \[bu] 2 +Add \f[C]--sftp-set-env\f[R] option to set environment variables (Nick +Craig-Wood) +.IP \[bu] 2 +Add Hetzner Storage Boxes to supported sftp backends (Anthrazz) +.RE +.IP \[bu] 2 +Storj +.RS 2 +.IP \[bu] 2 +Fix put which led to the file being unreadable when using mount (Erik +van Velzen) +.RE +.IP \[bu] 2 +Union +.RS 2 +.IP \[bu] 2 +Add 
\f[C]min_free_space\f[R] option for \f[C]lfs\f[R]/\f[C]eplfs\f[R] +policies (Nick Craig-Wood) +.IP \[bu] 2 +Fix uploading files to union of all bucket-based remotes (Nick +Craig-Wood) +.IP \[bu] 2 +Fix get free space for remotes which don\[aq]t support it (Nick +Craig-Wood) +.IP \[bu] 2 +Fix \f[C]eplus\f[R] policy to select correct entry for existing files +(Nick Craig-Wood) +.IP \[bu] 2 +Support metadata (Nick Craig-Wood) +.RE +.IP \[bu] 2 +Uptobox +.RS 2 +.IP \[bu] 2 +Fix root path handling (buengese) +.RE +.IP \[bu] 2 +WebDAV +.RS 2 +.IP \[bu] 2 +Add SharePoint in other specific regions support (Noah Hsu) +.RE +.IP \[bu] 2 +Yandex +.RS 2 +.IP \[bu] 2 +Handle API error on server-side move (albertony) +.RE +.IP \[bu] 2 +Zoho +.RS 2 +.IP \[bu] 2 +Add Japan and China regions (buengese) +.RE +.SS v1.58.1 - 2022-04-29 +.PP +See commits (https://github.com/rclone/rclone/compare/v1.58.0...v1.58.1) +.IP \[bu] 2 +Bug Fixes +.RS 2 +.IP \[bu] 2 +build: Update github.com/billziss-gh to github.com/winfsp (Nick +Craig-Wood) +.IP \[bu] 2 +filter: Fix timezone of \f[C]--min-age\f[R]/\f[C]--max-age\f[R] from UTC +to local as documented (Nick Craig-Wood) +.IP \[bu] 2 +rc/js: Correct RC method names (S\[u01A1]n Tr\[u1EA7]n-Nguy\[u1EC5]n) +.IP \[bu] 2 +docs +.RS 2 +.IP \[bu] 2 +Fix some links to command pages (albertony) +.IP \[bu] 2 +Add \f[C]--multi-thread-streams\f[R] note to \f[C]--transfers\f[R]. 
+(Zsolt Ero) +.RE +.RE +.IP \[bu] 2 +Mount +.RS 2 +.IP \[bu] 2 +Fix \f[C]--devname\f[R] and fusermount: unknown option \[aq]fsname\[aq] +when mounting via rc (Nick Craig-Wood) +.RE +.IP \[bu] 2 +VFS +.RS 2 +.IP \[bu] 2 +Remove wording which suggests VFS is only for mounting (Nick Craig-Wood) +.RE +.IP \[bu] 2 +Dropbox +.RS 2 +.IP \[bu] 2 +Fix retries of multipart uploads with incorrect_offset error (Nick +Craig-Wood) +.RE +.IP \[bu] 2 +Google Cloud Storage +.RS 2 +.IP \[bu] 2 +Use the s3 pacer to speed up transactions (Nick Craig-Wood) +.IP \[bu] 2 +pacer: Default the Google pacer to a burst of 100 to fix gcs pacing +(Nick Craig-Wood) +.RE +.IP \[bu] 2 +Jottacloud +.RS 2 +.IP \[bu] 2 +Fix scope in token request (albertony) +.RE +.IP \[bu] 2 +Netstorage +.RS 2 +.IP \[bu] 2 +Fix unescaped HTML in documentation (Nick Craig-Wood) +.IP \[bu] 2 +Make levels of headings consistent (Nick Craig-Wood) +.IP \[bu] 2 +Add support contacts to netstorage doc (Nil Alexandrov) +.RE +.IP \[bu] 2 +Onedrive +.RS 2 +.IP \[bu] 2 +Note that sharepoint also changes web files (.html, .aspx) (GH) +.RE +.IP \[bu] 2 +Putio +.RS 2 +.IP \[bu] 2 +Handle rate limit errors (Berkan Teber) +.IP \[bu] 2 +Fix multithread download and other ranged requests (rafma0) +.RE +.IP \[bu] 2 +S3 +.RS 2 +.IP \[bu] 2 +Add ChinaMobile EOS to provider list (GuoXingbin) +.IP \[bu] 2 +Sync providers in config description with providers (Nick Craig-Wood) +.RE +.IP \[bu] 2 +SFTP +.RS 2 +.IP \[bu] 2 +Fix OpenSSH 8.8+ RSA keys incompatibility (KARBOWSKI Piotr) +.IP \[bu] 2 +Note that Scaleway C14 is deprecating SFTP in favor of S3 (Adrien +Rey-Jarthon) +.RE +.IP \[bu] 2 +Storj +.RS 2 +.IP \[bu] 2 +Fix bucket creation on Move (Nick Craig-Wood) +.RE +.IP \[bu] 2 +WebDAV +.RS 2 +.IP \[bu] 2 +Don\[aq]t override Referer if user sets it (Nick Craig-Wood) +.RE .SS v1.58.0 - 2022-03-18 .PP See commits (https://github.com/rclone/rclone/compare/v1.57.0...v1.58.0) @@ -58802,7 +64570,7 @@ node running rclone would need to have lots 
of bandwidth. .PP The syncs would be incremental (on a file by file basis). .PP -Eg +e.g. .IP .nf \f[C] @@ -58912,7 +64680,7 @@ export NO_PROXY=$no_proxy \f[R] .fi .PP -Note that the ftp backend does not support \f[C]ftp_proxy\f[R] yet. +Note that the FTP backend does not support \f[C]ftp_proxy\f[R] yet. .SS Rclone gives x509: failed to load system roots and no roots provided error .PP This means that \f[C]rclone\f[R] can\[aq]t find the SSL root @@ -60206,6 +65974,109 @@ Vincent Murphy ctrl-q <34975747+ctrl-q@users.noreply.github.com> .IP \[bu] 2 Nil Alexandrov +.IP \[bu] 2 +GuoXingbin <101376330+guoxingbin@users.noreply.github.com> +.IP \[bu] 2 +Berkan Teber +.IP \[bu] 2 +Tobias Klauser +.IP \[bu] 2 +KARBOWSKI Piotr +.IP \[bu] 2 +GH +.IP \[bu] 2 +rafma0 +.IP \[bu] 2 +Adrien Rey-Jarthon +.IP \[bu] 2 +Nick Gooding <73336146+nickgooding@users.noreply.github.com> +.IP \[bu] 2 +Leroy van Logchem +.IP \[bu] 2 +Zsolt Ero +.IP \[bu] 2 +Lesmiscore +.IP \[bu] 2 +ehsantdy +.IP \[bu] 2 +SwazRGB <65694696+swazrgb@users.noreply.github.com> +.IP \[bu] 2 +Mateusz Puczyn\[u0301]ski +.IP \[bu] 2 +Michael C Tiernan - MIT-Research Computing Project +.IP \[bu] 2 +Kaspian <34658474+KaspianDev@users.noreply.github.com> +.IP \[bu] 2 +Werner +.IP \[bu] 2 +Hugal31 +.IP \[bu] 2 +Christian Galo <36752715+cgalo5758@users.noreply.github.com> +.IP \[bu] 2 +Erik van Velzen +.IP \[bu] 2 +Derek Battams +.IP \[bu] 2 +SimonLiu +.IP \[bu] 2 +Hugo Laloge +.IP \[bu] 2 +Mr-Kanister <68117355+Mr-Kanister@users.noreply.github.com> +.IP \[bu] 2 +Rob Pickerill +.IP \[bu] 2 +Andrey +.IP \[bu] 2 +Eric Wolf <19wolf@gmail.com> +.IP \[bu] 2 +Nick +.IP \[bu] 2 +Jason Zheng +.IP \[bu] 2 +Matthew Vernon +.IP \[bu] 2 +Noah Hsu +.IP \[bu] 2 +m00594701 +.IP \[bu] 2 +Art M. 
+Gallagher +.IP \[bu] 2 +Sven Gerber <49589423+svengerber@users.noreply.github.com> +.IP \[bu] 2 +CrossR +.IP \[bu] 2 +Maciej Radzikowski +.IP \[bu] 2 +Scott Grimes +.IP \[bu] 2 +Phil Shackleton <71221528+philshacks@users.noreply.github.com> +.IP \[bu] 2 +eNV25 +.IP \[bu] 2 +Caleb +.IP \[bu] 2 +J-P Treen +.IP \[bu] 2 +Martin Czygan <53705+miku@users.noreply.github.com> +.IP \[bu] 2 +buda +.IP \[bu] 2 +mirekphd <36706320+mirekphd@users.noreply.github.com> +.IP \[bu] 2 +vyloy +.IP \[bu] 2 +Anthrazz <25553648+Anthrazz@users.noreply.github.com> +.IP \[bu] 2 +zzr93 <34027824+zzr93@users.noreply.github.com> +.IP \[bu] 2 +Paul Norman +.IP \[bu] 2 +Lorenzo Maiorfi +.IP \[bu] 2 +Claudio Maradonna +.IP \[bu] 2 +Ovidiu Victor Tatar .SH Contact the rclone project .SS Forum .PP